Java

Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.

Latest Refcards and Trend Reports

Trend Report: Low Code and No Code
Trend Report: Modern Web Development
Refcard #024: Core Java
DZone's Featured Java Resources

A Complete Guide To Implementing GraphQL for Java

By David Ventimiglia
This guide is a valuable resource for Java developers seeking to create robust and efficient GraphQL API servers. It takes you through all the steps for implementing GraphQL in Java for real-world applications. It covers the fundamental concepts of GraphQL, including its query language and data model, and highlights its similarities to programming languages and relational databases. It also offers a practical step-by-step process for building a GraphQL API server in Java using Spring Boot, Spring for GraphQL, and a relational database. The design emphasizes persistence, flexibility, efficiency, and modernity. Additionally, the article discusses the trade-offs and challenges involved. Finally, it presents an alternative path beyond the conventional approach, suggesting the potential benefits of a "GraphQL to SQL compiler" and exploring the option of acquiring a GraphQL API instead of building one.

What Is GraphQL and Why Do People Want It?

GraphQL is a significant evolution in the design of Application Programming Interfaces (APIs). Still, even today, it can be challenging to know how to get started with GraphQL, how to progress, and how to move beyond the conventional wisdom of GraphQL. This is especially true for Java. This guide attempts to cover all these bases in three steps. First, I'll tell you what GraphQL is, and as a bonus, I'll let you know what GraphQL really is. Second, I'll show you how to implement state-of-the-art GraphQL in Java for an actual application. Third, I'll offer you an alternative path beyond the state of the art that may suit your needs better in every dimension.

So, what is GraphQL? Well, GraphQL.org says: "GraphQL is a query language for your API and a server-side runtime for executing queries using a type system you define for your data. GraphQL isn't tied to any specific database or storage engine and is instead backed by your existing code and data."

That's correct, but let's look at it from different directions. Sure, GraphQL is "a query language for your API," but you might as well just say that it is an API, or a way of building an API. That contrasts it with REST, which GraphQL is an evolution from and an alternative to. GraphQL offers several improvements over REST:

- Expressivity: A client can say what data they need from a server, no more and no less.
- Efficiency: Expressivity leads to efficiency gains, reducing network chatter and wasted bandwidth.
- Discoverability: To know what to say to a server, a client needs to know what can be said to a server. Discoverability allows data consumers to know exactly what's available from data producers.
- Simplicity: GraphQL puts clients in the driver's seat, so good ergonomics for driving should exist. GraphQL's highly regular, machine-readable syntax, simple execution model, and simple specifications lend themselves to interoperable and composable tools: query tools, schema registries, gateways, code generators, and client libraries.

But GraphQL is also a data model for its query language, and despite the name, neither the query language nor the data model is very "graphy." The data model is essentially just JSON. The query language looks like JSON and can be boiled down to a few simple features:

Types: A type is a simple value (a scalar) or a set of fields (an object). While you naturally introduce new types for your own problem domain, there are a few special types (called Operations).
One of these is Query, which is the root of requests for data (setting aside Subscription for now, for the sake of simplicity). A type is essentially a set of rules for determining whether a piece of data (or a request for that piece of data) validly conforms to the given type. A GraphQL type is very much like a user-defined type in programming languages like C++, Java, and TypeScript, and is very much like a table in a relational database.

Field: A field within one type contains one or more pieces of data that validly conform to another type, thus establishing relationships among types. A GraphQL field is very much like a property of a user-defined type in a programming language, and is very much like a column in a relational database. Relationships between GraphQL types are very much like pointers or references in programming languages, and are very much like foreign key constraints in relational databases.

There's more to GraphQL, but that's pretty much the essence. Note the similarities between concepts in GraphQL and programming languages, and especially between concepts in GraphQL and relational databases.
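To make the analogy concrete, here is a minimal Java sketch of the correspondence, using the Artist type from the Chinook model that appears later in this guide; the mapping shown is illustrative, not produced by any tool.

Java
// The GraphQL type
//
//   type Artist {
//     ArtistId: Int
//     Name: String
//     Albums: [Album]
//   }
//
// lines up with a Java model class holding the same scalar fields...
public record Artist(Integer ArtistId, String Name) {}

// ...and with a relational table whose columns are ArtistId and Name.
// The Albums field is the GraphQL analogue of the foreign key
// relationship Album.ArtistId -> Artist.ArtistId: in Java it is served
// by a resolver method rather than stored on the record itself.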
OK, we've covered what GraphQL is, but what is GraphQL for? Why should we consider it as an alternative to REST? I listed above some of GraphQL's improvements over typical REST (expressivity, efficiency, discoverability, simplicity), but another, perhaps more concise way to put it is this: GraphQL's expressivity, efficiency, discoverability, and simplicity make life easier for data consumers.

However, there's a corollary: GraphQL's expressivity, efficiency, discoverability, and simplicity make life harder for data producers. That's you! If you're a Java programmer working with GraphQL, your job is probably to produce GraphQL API servers for clients to consume (there are relatively few Java settings on the client). Offering all that expressivity, discoverability, etc. is not easy, so how do you do it?

How Do I Provide the GraphQL That People Want, Especially as a Java Developer?

On the journey to providing a GraphQL API, we confront a series of interdependent choices that can make life easier (or harder) for data producers. One choice concerns just how "expressive, efficient, discoverable, and simple" our API is, but let's set that aside for a moment and treat it as an emergent property of the other choices we make. Life is about trade-offs, after all. Another choice is build-versus-buy, but let's also set that aside for a moment, accept that we're building a GraphQL API server (in Java), explore how that is done, and evaluate the consequences. If you're building a GraphQL API server in Java, another choice is whether to build it completely from scratch or to use libraries and frameworks, and if the latter, which libraries and frameworks to use. Let's set aside a complete DIY solution as pointless masochism, and survey the landscape of Java libraries and frameworks for GraphQL. As of writing (May 2024), there are three important, interdependent players in this space:

graphql-java: A lower-level, foundational library for working with GraphQL in Java, which began in 2015. Since the other players depend on and use graphql-java, consider graphql-java to be non-optional. A related prerequisite is whether you are using the Spring Boot framework; if you're not using Spring Boot, then stop here! Since graphql-java is a prerequisite, in the parlance of the ThoughtWorks Radar it is unavoidably Adopt.

Netflix DGS: A higher-level library for working with GraphQL in Java with Spring Boot, which began in 2021. If you're using DGS, then you will also be using graphql-java under the hood, but typically you won't come into contact with graphql-java. Instead, you will be sprinkling annotations throughout the Java code to identify the code segments, called "resolvers" or "data fetchers," that execute GraphQL requests. ThoughtWorks said Trial as of 2023 for DGS, but this is a dynamic space and their opinion may have changed. I say Hold, for the reasons given below.

Spring for GraphQL: Another higher-level, annotation-based library for working with GraphQL in Java with Spring Boot, and a more recent arrival. It may be too new for ThoughtWorks, but it's not too new for me. I say Adopt, and read on for why.

The makers of Spring for GraphQL say: "It is a joint collaboration between the GraphQL Java team and Spring engineering…It aims to be the foundation for all Spring, GraphQL applications."

Translation: The Spring team has a privileged collaboration with the makers of the foundational library for GraphQL in Java, and intends to "win" in this space. Moreover, the makers of Netflix DGS have much to say about that library's relationship to Spring for GraphQL:

"Soon after we open-sourced the DGS framework, we learned about parallel efforts by the Spring team to develop a GraphQL framework for Spring Boot. The Spring GraphQL project was in the early stages at the time and provided a low level of integration with graphql-java. Over the past year, however, Spring GraphQL has matured and is mostly at feature parity with the DGS Framework. We now have 2 competing frameworks that solve the same problems for our users. Today, new users must choose between the DGS Framework or Spring GraphQL, thus missing out on features available in one framework but not the other. This is not an ideal situation for the GraphQL Java community. For the maintainers of DGS and Spring GraphQL, it would be far more effective to collaborate on features and improvements instead of having to solve the same problem independently. Finally, a unified community would provide us with better channels for feedback. The DGS framework is widely used and plays a vital role in the architecture of many companies, including Netflix. Moving away from the framework in favor of Spring-GraphQL would be a costly migration without any real benefits. From a Spring Framework perspective, it makes sense to have an out-of-the-box GraphQL offering, just like Spring supports REST."

Translation: If you're a Spring Boot shop already using DGS, go ahead and keep using it for now. If you're a Spring Boot shop starting afresh, you should probably just use Spring for GraphQL.

So far, I've explained GraphQL in detail and provided some background on the relevant libraries and frameworks in Java. Now, let me show you how to implement state-of-the-art GraphQL in Java for a real application. Since we're starting afresh, we'll take the advice from DGS and just use Spring for GraphQL.
How Exactly Do I Build a GraphQL API Server in Java for a Real Application?

Opinions are free to differ on what it even means to be a "real application." For the purpose of this guide, what I mean by "real application" is an application that has at least these features:

Persistence: Many tutorials, getting-started guides, and overviews only address in-memory data models, stopping well short of interacting with a database. This guide shows you some ways to cross this crucial chasm and discusses some of the consequences, challenges, and trade-offs involved. This is a vast topic, so I barely scratch the surface, but it's a start. The primary goal is to support Query operations. A stretch goal is to support Mutation operations. Subscription operations are thoroughly off the table for now.

Flexibility: I wrote above that just how expressive, efficient, discoverable, and simple we make our GraphQL API is technically a choice we make, but is practically a property that emerges from other choices we make. I also wrote that building GraphQL API servers is difficult for data producers. Consequently, many data producers cope with that difficulty by dialing way back on those other properties of the API. Many GraphQL API servers in the real world are inflexible, superficial, shallow, and, in many ways, "GraphQL-in-name-only." This guide shows some of what's involved in going beyond the status quo and how that comes into tension with other properties, like efficiency. Spoiler alert: it isn't pretty.

Efficiency: In fairness, many GraphQL API servers in the real world achieve decent efficiency, albeit at the expense of flexibility, by essentially encoding REST API endpoints into a shallow GraphQL schema. The standard approach in GraphQL is the data loader pattern, but few tutorials really show how this is used even with an in-memory data model, let alone with a database. This guide offers one implementation of the data loader pattern to combat the N+1 problem. Again, we see how that comes into tension with flexibility and simplicity.

Modernity: Anyone writing a Java application that accesses a database will have to make choices about how to access the database. That could involve just JDBC and raw SQL (for a relational database), but arguably the current industry standard is still to use an Object-Relational Mapping (ORM) layer like Hibernate, jOOQ, or the standard JPA. Getting an ORM to play nicely with GraphQL is a tall order, may not be prudent, and may not even be possible. Few if any other guides touch this with a ten-foot pole. This guide will at least make an attempt with an ORM in the future.

The recipe I follow in this guide for building a GraphQL API server in Java for a relational database is the following:

1. Choose Spring Boot for the overall server framework.
2. Choose Spring for GraphQL for the GraphQL-specific parts.
3. Choose Spring Data JDBC for data access, in lieu of an ORM for now.
4. Choose Maven over Gradle, because I prefer the former. If you choose the latter, you're on your own.
5. Choose PostgreSQL for the database. Most of the principles should apply to pretty much any relational database, but you've got to start somewhere.
6. Choose Docker Compose for orchestrating a development database server. There are other ways of bringing in a database, but again, you've got to start somewhere.
7. Choose the Chinook data model. Naturally, you will have your own data model, but Chinook is a good choice for illustration purposes because it's fairly rich, has quite a few tables and relationships, goes well beyond the ubiquitous but trivial to-do apps, is available for a wide variety of databases, and is generally well understood.
8. Choose the Spring Initializr for bootstrapping the application. There's so much ceremony in Java, any way to race through some of it is welcome.
9. Create a GraphQL schema file. This is a necessary step for graphql-java, for DGS, and for Spring for GraphQL. Weirdly, the Spring for GraphQL overview seems to overlook this step, but the DGS "Getting Started" guide is there to remind us. Many "thought leaders" will exhort you to isolate your underlying data model from your API. Theoretically, you could do this by having GraphQL types that differ from your database tables. Practically, this is a source of busy work.
10. Write Java model classes, one for every GraphQL type in the schema file and every table in the database. You're free to make other choices for this data model or for any other data model, and you can even write code or SQL views to isolate your underlying data model from your API, but do ask how important this really is when the number of tables/classes/types grows to the hundreds or thousands.
11. Write Java controller classes, with at least one method for every root field. In practice, this is the bare minimum; there probably will be many more. By the way, these methods are your "resolvers."
12. Annotate every controller class with @Controller to tell Spring to inject it as a Java bean that can serve network traffic.
13. Annotate every resolver/data-fetcher method with @SchemaMapping or @QueryMapping to tell Spring for GraphQL how to execute the parts of a GraphQL operation.
14. Implement those resolver/data-fetcher methods by whatever means necessary to mediate interactions with the database. In version 0, this will be just simple raw SQL statements.
15. Upgrade some of those resolver/data-fetcher methods by replacing @SchemaMapping or @QueryMapping with @BatchMapping. This latter annotation signals to Spring for GraphQL that we want to make the execution more efficient by combating the N+1 problem, and that we're prepared to pay the price in more code in order to do it.
16. Refactor those @BatchMapping-annotated methods to support the data loader pattern by accepting (and processing) a list of identifiers for related entities rather than a single identifier for a single related entity.
17. Write copious test cases for every possible interaction. Or just use a fuzz tester on the API and call it a day.

But Really, How Exactly Do I Build a GraphQL API Server in Java for a Real Application?

That is a long recipe! Instead of going into chapter and verse for every single step, in this guide I do two things. First, I provide a public repository (covering the choices in steps 1-5) with working code that is easy to use, easy to run, easy to read, and easy to understand. Second, I highlight some of the important steps, put them in context, discuss the choices involved, and offer some alternatives.

Step 6: Choose Docker Compose for Orchestrating a Development Database Server

Again, there are other ways to pull this off, but this is one good way.

YAML
version: "3.6"
services:
  postgres:
    image: postgres:16
    ports:
      - ${PGPORT:-5432}:5432
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
      PGDATA: /var/lib/pgdata
    volumes:
      - ./initdb.d-postgres:/docker-entrypoint-initdb.d:ro
      - type: tmpfs
        target: /var/lib/pg/data

Set an environment variable for PGPORT to expose PostgreSQL on a host port, or hard-code it to whatever value you like.

Step 7: Choose the Chinook Data Model

The Chinook files from YugaByte work out of the box for PostgreSQL and are a good choice.
Just make sure that there is a sub-directory initdb.d-postgres, and download the Chinook DDL and DML files into that directory, taking care to give them numeric prefixes so that they're run by the PostgreSQL initialization script in the proper order.

Shell
mkdir -p ./initdb.d-postgres
wget -O ./initdb.d-postgres/04_chinook_ddl.sql https://raw.githubusercontent.com/YugaByte/yugabyte-db/master/sample/chinook_ddl.sql
wget -O ./initdb.d-postgres/05_chinook_genres_artists_albums.sql https://raw.githubusercontent.com/YugaByte/yugabyte-db/master/sample/chinook_genres_artists_albums.sql
wget -O ./initdb.d-postgres/06_chinook_songs.sql https://raw.githubusercontent.com/YugaByte/yugabyte-db/master/sample/chinook_songs.sql

Now, you can start the database service using Docker Compose:

docker compose up -d

or

docker-compose up -d

There are many ways to spot-check the database's validity. If the Docker Compose service seems to have started correctly, here's one way using psql:

psql "postgresql://postgres:postgres@localhost:5432/postgres" -c '\d'

SQL
                List of relations
 Schema |      Name       | Type  |  Owner
--------+-----------------+-------+----------
 public | Album           | table | postgres
 public | Artist          | table | postgres
 public | Customer        | table | postgres
 public | Employee        | table | postgres
 public | Genre           | table | postgres
 public | Invoice         | table | postgres
 public | InvoiceLine     | table | postgres
 public | MediaType       | table | postgres
 public | Playlist        | table | postgres
 public | PlaylistTrack   | table | postgres
 public | Track           | table | postgres
 public | account         | table | postgres
 public | account_summary | view  | postgres
 public | order           | table | postgres
 public | order_detail    | table | postgres
 public | product         | table | postgres
 public | region          | table | postgres
(17 rows)

You should at least see Chinook-specific tables like Album, Artist, and Track.

Step 8: Choose the Spring Initializr for Bootstrapping the Application

The important thing with this form is to make these choices:

- Project: Maven
- Language: Java
- Spring Boot: 3.2.5
- Packaging: Jar
- Java: 21
- Dependencies: Spring for GraphQL, PostgreSQL Driver

You can make other choices (e.g., Gradle, Java 22, MySQL, etc.), but bear in mind that this guide has only been tested with the choices above.

Step 9: Create a GraphQL Schema File

Maven projects have a standard directory layout, and the standard place within that layout for resource files to be packaged into the build artifact (a JAR file) is ./src/main/resources. Within that directory, create a sub-directory graphql and deposit a schema.graphqls file there. There are other ways to organize the GraphQL schema files needed by graphql-java, DGS, and Spring for GraphQL, but they are all rooted in ./src/main/resources (for a Maven project). Within the schema.graphqls file (or its equivalent), first there will be a definition for the root Query object, with root-level fields for every GraphQL type that we want in our API. As a starting point, there will be a root-level field under Query for every table, and a corresponding type for every table.
For example, for Query:

GraphQL
type Query {
  Artist(limit: Int): [Artist]
  ArtistById(id: Int): Artist
  Album(limit: Int): [Album]
  AlbumById(id: Int): Album
  Track(limit: Int): [Track]
  TrackById(id: Int): Track
  Playlist(limit: Int): [Playlist]
  PlaylistById(id: Int): Playlist
  PlaylistTrack(limit: Int): [PlaylistTrack]
  PlaylistTrackById(id: Int): PlaylistTrack
  Genre(limit: Int): [Genre]
  GenreById(id: Int): Genre
  MediaType(limit: Int): [MediaType]
  MediaTypeById(id: Int): MediaType
  Customer(limit: Int): [Customer]
  CustomerById(id: Int): Customer
  Employee(limit: Int): [Employee]
  EmployeeById(id: Int): Employee
  Invoice(limit: Int): [Invoice]
  InvoiceById(id: Int): Invoice
  InvoiceLine(limit: Int): [InvoiceLine]
  InvoiceLineById(id: Int): InvoiceLine
}

Note the parameters on these fields. I have written it so that every root-level field with a List return type accepts one optional limit parameter of type Int. The intention is to support limiting the number of entries returned from a root-level field. Note also that every root-level field with a single-object return type accepts one optional id parameter, also of type Int. The intention is to support fetching a single entry by its identifier (which happens to be an integer primary key for every table in the Chinook data model). Next, here is an illustration of some of the corresponding GraphQL types:

GraphQL
type Album {
  AlbumId: Int
  Title: String
  ArtistId: Int
  Artist: Artist
  Tracks: [Track]
}

type Artist {
  ArtistId: Int
  Name: String
  Albums: [Album]
}

type Customer {
  CustomerId: Int
  FirstName: String
  LastName: String
  Company: String
  Address: String
  City: String
  State: String
  Country: String
  PostalCode: String
  Phone: String
  Fax: String
  Email: String
  SupportRepId: Int
  SupportRep: Employee
  Invoices: [Invoice]
}

Fill out the rest of the schema.graphqls file as you see fit, exposing whatever tables (and possibly views, if you create them) you like. Or, just use the complete version from the shared repository.

Step 10: Write Java Model Classes

Within the standard Maven directory layout, Java source code goes into ./src/main/java and its sub-directories. Within an appropriate sub-directory for whatever Java package you use, create Java model classes. These can be Plain Old Java Objects (POJOs). They can be Java record classes. They can be whatever you like, so long as they have "getter" and "setter" property methods for the corresponding fields in the GraphQL schema. In this guide's repository, I choose Java record classes for the minimal amount of boilerplate.

Java
package com.graphqljava.tutorial.retail.models;

public class ChinookModels {

    public static record Album (
        Integer AlbumId,
        String Title,
        Integer ArtistId
    ) {}

    public static record Artist (
        Integer ArtistId,
        String Name
    ) {}

    public static record Customer (
        Integer CustomerId,
        String FirstName,
        String LastName,
        String Company,
        String Address,
        String City,
        String State,
        String Country,
        String PostalCode,
        String Phone,
        String Fax,
        String Email,
        Integer SupportRepId
    ) {}

    ...
}

Steps 11-14: Write Java Controller Classes, Annotate Every Controller, Annotate Every Resolver/Data-Fetcher, and Implement Those Resolvers/Data-Fetchers

These are the Spring @Controller classes, and within them are the Spring for GraphQL @QueryMapping and @SchemaMapping resolver/data-fetcher methods.
These are the real workhorses of the application: accepting input parameters, mediating interaction with the database, validating data, implementing (or delegating to) business logic, arranging for SQL and DML statements to be sent to the database, returning the data, processing the data, and sending it along to the GraphQL libraries (graphql-java, DGS, Spring for GraphQL) to package up and send off to the client. There are so many choices one can make in implementing these that I can't go into every detail. Let me just illustrate how I have done it, highlight some things to look out for, and discuss some of the options that are available. For reference, we will look at a section of the ChinookControllers file from the example repository.

Java
package com.graphqljava.tutorial.retail.controllers; // It's got to go into a package somewhere.

import java.sql.ResultSet;    // There are loads of symbols to import.
import java.sql.SQLException; // This is Java, and there's no getting around that.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.graphql.data.ArgumentValue;
import org.springframework.graphql.data.method.annotation.BatchMapping;
import org.springframework.graphql.data.method.annotation.QueryMapping;
import org.springframework.graphql.data.method.annotation.SchemaMapping;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.jdbc.core.simple.JdbcClient;
import org.springframework.jdbc.core.simple.JdbcClient.StatementSpec;
import org.springframework.stereotype.Controller;

import com.graphqljava.tutorial.retail.models.ChinookModels.Album;
import com.graphqljava.tutorial.retail.models.ChinookModels.Artist;
import com.graphqljava.tutorial.retail.models.ChinookModels.Customer;
import com.graphqljava.tutorial.retail.models.ChinookModels.Employee;
import com.graphqljava.tutorial.retail.models.ChinookModels.Genre;
import com.graphqljava.tutorial.retail.models.ChinookModels.Invoice;
import com.graphqljava.tutorial.retail.models.ChinookModels.InvoiceLine;
import com.graphqljava.tutorial.retail.models.ChinookModels.MediaType;
import com.graphqljava.tutorial.retail.models.ChinookModels.Playlist;
import com.graphqljava.tutorial.retail.models.ChinookModels.PlaylistTrack;
import com.graphqljava.tutorial.retail.models.ChinookModels.Track;

public class ChinookControllers { // You don't have to nest all your controllers in one file. It's just what I do.

    @Controller
    public static class ArtistController { // Tell Spring about this controller class.

        @Autowired JdbcClient jdbcClient; // Lots of ways to get DB access from the container. This is one way in Spring Data.

        RowMapper<Artist> // I'm not using an ORM, and only a tiny bit of help from Spring Data.
            mapper = new RowMapper<>() { // Consequently, there are these RowMapper utility classes involved.
                public Artist mapRow (ResultSet rs, int rowNum) throws SQLException {
                    return new Artist(rs.getInt("ArtistId"), rs.getString("Name"));}};

        @SchemaMapping Artist Artist (Album album) { // @QueryMapping when we can, @SchemaMapping when we have to.
            return // Here, we're getting an Artist for a given Album.
                jdbcClient
                .sql("select * from \"Artist\" where \"ArtistId\" = ? limit 1") // Simple PreparedStatement wrapper
                .param(album.ArtistId()) // Fish out the relating field ArtistId and pass it into the PreparedStatement.
                .query(mapper) // Use our RowMapper to turn the JDBC row into the desired model class object.
                .optional() // Use optional to guard against null returns!
                .orElse(null);}

        @QueryMapping(name = "ArtistById") Artist // Another resolver, this time to get an Artist by its primary key identifier.
            artistById (ArgumentValue<Integer> id) { // Note the annotation's "name" parameter, used when the GraphQL field name doesn't exactly match the method name.
            for (Artist a : jdbcClient.sql("select * from \"Artist\" where \"ArtistId\" = ?").param(id.value()).query(mapper).list())
                return a;
            return null;}

        @QueryMapping(name = "Artist") List<Artist> // Yet another resolver, this time to get a List of Artists.
            artist (ArgumentValue<Integer> limit) { // Note the one "limit" parameter. ArgumentValue<T> is the way you do this with Spring for GraphQL.
            StatementSpec spec = limit.isOmitted() ? // Switch SQL on whether we did or did not get the limit parameter.
                jdbcClient.sql("select * from \"Artist\"") :
                jdbcClient.sql("select * from \"Artist\" limit ?").param(limit.value());
            return // Run the SQL, map the results, return the List.
                spec
                .query(mapper)
                .list();}
        ...

There's a lot to unpack here, so let's go through it step by step.

First, I included the package and import statements in the example because, all too often, tutorials and guides that you find online elide these details for brevity. The problem with that, however, is that the result is not compilable or runnable code. You don't know where the symbols are coming from, what packages they're in, or what libraries they belong to. Any decent editor like IntelliJ, VS Code, or even Emacs will help sort this out for you when you're writing code, but you don't have that help when reading a blog article. Moreover, there can be name conflicts and ambiguities among symbols across libraries, so even with a smart editor, the reader can be left scratching their head.

Next, please forgive the nested inner classes. Feel free to explode your classes into their own individual files as you see fit. This is just how I do it, largely for pedagogical purposes like this one, to promote locality of behavior, which is just a fancy way of saying, "Let's not make the reader jump through a lot of hoops to understand the code."

Now for the meat of the code. Aside from niggling details like "How do I get a database connection?" and "How do I map data?", the patterns I want you to see through the forest of code are these:

- Every field in our schema file (schema.graphqls) which isn't a simple scalar field (e.g., Int, String, Boolean) will probably need a resolver/data-fetcher.
- Every resolver is implemented with a Java method.
- Every resolver method gets annotated with @SchemaMapping, @QueryMapping, or @BatchMapping. Use @QueryMapping when you can, because it's simpler. Use @SchemaMapping when you have to (your IDE should nag you).
- If you keep the Java method names in sync with the GraphQL field names, it's a little less code, but don't make a federal case out of it. You can fix any mismatch with a name parameter in the annotations.
- Unless you do something different (such as adding filtering, sorting, and pagination), you probably will be fetching either a single entry by its primary key or a list of entries. You won't be fetching "child" entries; that's handled by the GraphQL libraries and the recursive divide-and-conquer way they process GraphQL operations. Note: this has implications for performance, efficiency, and code complexity.

The "something different" in the last item refers to the richness that you want to add to your GraphQL API. Want limit operations? Filter predicates? Aggregations?
Supporting those cases will involve more ArgumentValue<> parameters, more @SchemaMapping resolver methods, and more combinations thereof. Deal with it. You will experience the urge to be clever, to create abstractions that dynamically respond to more and more complex combinations of parameters, filters, and other conditions.

Step 15: Upgrade Some of Those Resolver/Data-Fetcher Methods With the Data Loader Pattern

You will quickly realize that the approach so far can lead to overly chatty interaction with the database, sending too many small SQL statements and impacting performance and availability. This is the proverbial "N+1" problem. In a nutshell, the N+1 problem can be illustrated by our Chinook data model. Suppose we have this GraphQL query:

GraphQL
query {
  Artist(limit: 10) {
    ArtistId
    Album {
      AlbumId
      Track {
        TrackId
      }
    }
  }
}

Get up to 10 Artist entries. For each Artist, get all of the related Album entries. For each Album, get all of the related Track entries. For each entry, just get its identifier field: ArtistId, AlbumId, TrackId. This query is nested 2 levels below Artist, so let n = 2. Album is a List-wrapping type on Artist, just as Track is a List-wrapping type on Album. Suppose the typical cardinality (the typical number of related child entries) is m. How many SQL statements will typically be involved?

- 1 to fetch the 10 Artist entries
- 10 to fetch the related Album entries (one query per Artist)
- 10*m to fetch the related Track entries (one query per Album)

In general, we can see that the number of queries scales as m^(n-1), which is exponential in the nesting depth n. Of course, observe that the amount of data retrieved scales as m^n. In any case, on its face, this seems like an alarmingly inefficient way to go about fetching this data. There is another way, and it is the standard answer within the GraphQL community for combating this N+1 problem: the data loader pattern (aka "batching"). This encompasses three ideas:

- Rather than fetch the related child entities (e.g., Album) for a single parent entity (e.g., Artist) using one identifier, fetch the related entities for all of the parent entities in one go, using a list of identifiers.
- Group the resulting child entities according to their respective parent entities (in code).
- While we're at it, we might as well cache the entities for the lifetime of executing the one GraphQL operation, in case a given entity appears in more than one place in the graph.

Now, for some code. Here's how this looks in our example.

Java
@BatchMapping(field = "Albums") public Map<Artist, List<Album>> // Switch to @BatchMapping.
    albumsForArtist (List<Artist> artists) { // Take in a List of parents rather than a single parent.
    return
        jdbcClient
        .sql("select * from \"Album\" where \"ArtistId\" in (:ids)") // Use a SQL "in" predicate taking a list of identifiers.
        .param("ids", artists.stream().map(x -> x.ArtistId()).toList()) // Fish the list of identifiers out of the list of parent objects.
        .query(mapper) // Can re-use our usual mapper.
        .list()
        .stream().collect(Collectors.groupingBy(x -> artists.stream().collect(Collectors.groupingBy(Artist::ArtistId)).get(x.ArtistId()).getFirst()));
        // ^ Java idiom for grouping the child Albums according to their parent Artists.
}

Like before, let's unpack this. First, we switch from the @QueryMapping or @SchemaMapping annotation to @BatchMapping, to signal to Spring for GraphQL that we want to use the data loader pattern. Second, we switch from a single Artist parameter to a List<Artist> parameter. Third, we somehow have to arrange the necessary SQL (with an in predicate, in this case) and the corresponding parameter (a List<Integer> extracted from the List<Artist> parameter).
Fourth, we somehow have to arrange for the child entries (Album in this case) to get sorted to the right parent entries (Artist in this case). There are many ways to do it, and this is just one way. The important point is that, however it's done, it has to be done in Java.

One last thing: note the absence of the limit parameter. Where did that go? It turns out that ArgumentValue<T> is not supported by Spring for GraphQL for @BatchMapping. Oh well! In this case, it's no great loss, because arguably these limit parameters make little sense. How often does one really need a random subset of an artist's albums? It would be a more serious issue if we had filtering and sorting, however. Filtering and sorting parameters are more justified, and if we had them, we would somehow have to find a way to sneak them into the data loader pattern. Presumably it can be done, but it will not be as easy as just slapping a @BatchMapping annotation onto the method and tinkering with Java streams.

This raises an important point about the "N+1 problem" that is never addressed, and that neglect just serves to exaggerate the scale of the problem in a real-world setting. If we have limits and/or filtering, then we have a way of reducing the cardinality of related child entities below m (recall that we took m to be the typical cardinality of a child entity). In the real world, setting limits or, more precisely, filtering is necessary for usability. GraphQL APIs are meant for humans, in that, at the end of the day, the data are being painted onto a screen or in some other way presented to a human user who then has to absorb and process those data. Humans have severe limits in perception, cognition, and memory, and thus in the quantity of data we can process. Only another machine (i.e., a computer) could possibly process a large volume of data, but if you're extracting large volumes of data from one machine to another, then you are building an ETL pipeline. If you are using GraphQL for ETL, then you are doing it wrong and should stop immediately! In any event, in a real-world setting with human users, both m and n will be very small. The number of SQL queries will not scale as m^n to very large numbers. Effectively, the N+1 problem will inflate the number of SQL queries not by an arbitrarily large factor, but by approximately a constant factor. In a well-designed application, it probably will be a constant factor well below 100. Consider this when balancing the trade-offs in developer time, complexity, and hardware scaling when confronting the N+1 problem.

Is This the Only Way To Build a GraphQL API Server?

We saw that the "easy way" of building GraphQL servers is the one typically offered in tutorials and "Getting Started" guides, and it operates over tiny, unrealistic in-memory data models, without a database.
We saw that the "real way" of building GraphQL servers (in Java) described in some detail above, regardless of library or framework, involves: Writing schema file entries, possibly for every table Writing Java model classes, possibly for every table Writing Java resolver methods, possibly for every field in every table Eventually writing code to solve arbitrarily complex compositions of input parameters Writing code to budget SQL operations efficiently We also observe that GraphQL lends itself to a "recursive divide-and-conquer with an accumulator approach": a GraphQL query is recursively divided and sub-divided along type and field boundaries into a "graph," internal nodes in the graph are processed individually by resolvers, but the data are passed up the graph dataflow style, accumulating into a JSON envelope that is returned to the user. The GraphQL libraries decompose the incoming queries into something like an Abstract Syntax Tree (AST), firing SQL statements for all the internal nodes (ignoring the data loader pattern for a moment), and then re-composing the data. And, we are its willing accomplices! We also observe that building GraphQL servers according to the above recipes leads to other outcomes: Lots of repetition Lots of boilerplate code Bespoke servers Tied to a particular data model Build a GraphQL server more than once according to the above recipes and you will make these observations and will naturally feel a powerful urge to build more sophisticated abstractions that reduce the repetition, reduce the boilerplate, generalize the servers, and decouple them from any particular data model. This is what I call the "natural way" of building a GraphQL API, as it's a natural evolution from the trivial "easy way" of tutorials and "Getting Started" guides, and from the cumbersome "real way" of resolvers and even data loaders. Building a GraphQL server with a network of nested resolvers offers some flexibility and dynamism, and requires a lot of code. Adding in more flexibility and dynamism with limits, pagination, filtering, and sorting, requires more code still. And while it may be dynamic, it will also be very chatty with the database, as we saw. Reducing the chattiness necessitates composing the many fragmentary SQL statements into fewer SQL statements which individually do more work. That's what the data loader pattern does: it reduces the number of SQL statements from "a few tens" to "less than 10 but more than 1." In practice, that may not be a huge win and it comes at the cost of developer time and lost dynamism, but it is a step down the path of generating fewer, more sophisticated queries. The terminus of that path is "1": the optimal number of SQL statements (ignoring caching) is 1. Generate one giant SQL statement that does all the work of fetching the data, teach it to generate JSON while you're at it, and this is the best you will ever do with a GraphQL server (for a relational database). It will be hard work, but you can take solace in having done it once, it need not ever be done again if you do it right, by introspecting the database to generate the schema. Do that, and what you will build won't be so much a "GraphQL API server" as a "GraphQL to SQL compiler." Acknowledge that building a GraphQL to SQL compiler is what you have been doing all along, embrace that fact, and lean into it. You may never need to build another GraphQL server again. What could be better than that? 
One thing that could be better than building your last GraphQL server, or your only GraphQL server, is never building a GraphQL server in the first place. After all, your goal wasn't to build a GraphQL API but rather to have a GraphQL API. The easiest way to have a GraphQL API is just to go get one. Get one for free if you can. Buy one if the needs justify it. This is the final boss on the journey of GraphQL maturity.

How To Choose "Build" Over "Buy"

Of course, "buy" in this case is really just a stand-in for the general concept, which is to "acquire" an existing solution rather than building one. That doesn't necessarily require purchasing software, since it could be free and open-source. The distinction that I want to draw here is over whether or not to build a custom solution. When it's possible to acquire an existing solution (whether commercial or open-source), there are several options:

- Apollo
- Hasura
- PostGraphile
- Prisma

If you do choose to build GraphQL servers with Java, I hope you will find this article helpful in breaking out of the relentless tutorials, "Getting Started" guides, and "To-Do" apps. These are vast topics in a shifting landscape that require an iterative approach and a modest amount of repetition.
Tackling Records in Spring Boot

By Anghel Leonard
Java records fit perfectly in Spring Boot applications. Let's look at several scenarios where Java records can help us increase readability and expressiveness by squeezing out boilerplate code.

Using Records in Controllers

Typically, a Spring Boot controller operates with simple POJO classes that carry our data back over the wire to the client. For instance, check out this simple controller endpoint returning a list of authors, including their books:

Java
@GetMapping("/authors")
public List<Author> fetchAuthors() {
    return bookstoreService.fetchAuthors();
}

Here, the Author (and Book) can be simple carriers of data written as POJOs. But they can be replaced by records as well:

Java
public record Book(String title, String isbn) {}
public record Author(String name, String genre, List<Book> books) {}

That's all! The Jackson library (which is the default JSON library in Spring Boot) will automatically marshal instances of type Author/Book into JSON. In the bundled code, you can try the complete example via the localhost:8080/authors endpoint.

Using Records With Templates

Thymeleaf is probably the most-used templating engine in Spring Boot applications. Thymeleaf pages (HTML pages) are typically populated with data carried by POJO classes, which means that Java records should work as well. Let's consider the previous Author and Book records and the following controller endpoint:

Java
@GetMapping("/bookstore")
public String bookstorePage(Model model) {
    model.addAttribute("authors", bookstoreService.fetchAuthors());
    return "bookstore";
}

The List<Author> returned via fetchAuthors() is stored in the model under a variable named authors. This variable is used to populate bookstore.html as follows:

HTML
...
<ul th:each="author : ${authors}">
  <li th:text="${author.name} + ' (' + ${author.genre} + ')'" />
  <ul th:each="book : ${author.books}">
    <li th:text="${book.title}" />
  </ul>
</ul>
...

Done! You can check out the application in Java Coding Problems SE.

Using Records for Configuration

Let's assume that in application.properties we have the following two properties (they could be expressed in YAML as well):

Properties files
bookstore.bestseller.author=Joana Nimar
bookstore.bestseller.book=Prague history

Spring Boot maps such properties to POJOs via @ConfigurationProperties. But a record can be used as well. For instance, these properties can be mapped to the BestSellerConfig record as follows:

Java
@ConfigurationProperties(prefix = "bookstore.bestseller")
public record BestSellerConfig(String author, String book) {}

Next, in BookstoreService (a typical Spring Boot service), we can inject BestSellerConfig and call its accessors:

Java
@Service
public class BookstoreService {

    private final BestSellerConfig bestSeller;

    public BookstoreService(BestSellerConfig bestSeller) {
        this.bestSeller = bestSeller;
    }

    public String fetchBestSeller() {
        return bestSeller.author() + " | " + bestSeller.book();
    }
}

In the bundled code, we have added a controller that uses this service as well.
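One detail the configuration example assumes is that the record is registered as a configuration properties bean. A minimal sketch of one standard way to do that, via @ConfigurationPropertiesScan on the application class (the class name here is illustrative):

Java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.properties.ConfigurationPropertiesScan;

@SpringBootApplication
@ConfigurationPropertiesScan // discovers BestSellerConfig and other @ConfigurationProperties types
public class BookstoreApplication {

    public static void main(String[] args) {
        SpringApplication.run(BookstoreApplication.class, args);
    }
}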
Records and Dependency Injection

In the previous examples, we injected the BookstoreService service into BookstoreController using the typical mechanism provided by Spring Boot: dependency injection via constructor (it can be done via @Autowired as well):

Java
@RestController
public class BookstoreController {

    private final BookstoreService bookstoreService;

    public BookstoreController(BookstoreService bookstoreService) {
        this.bookstoreService = bookstoreService;
    }

    @GetMapping("/authors")
    public List<Author> fetchAuthors() {
        return bookstoreService.fetchAuthors();
    }
}

But we can compact this class by re-writing it as a record as follows:

Java
@RestController
public record BookstoreController(BookstoreService bookstoreService) {

    @GetMapping("/authors")
    public List<Author> fetchAuthors() {
        return bookstoreService.fetchAuthors();
    }
}

The canonical constructor of this record will be the same as our explicit constructor. The application is available on GitHub. Feel free to challenge yourself to find more use cases of Java records in Spring Boot applications.
Benchmarking Java Streams
By Bartłomiej Żyliński

How to Fully Validate URLs in Java
By Brian O'Neill

TestNG vs. JUnit: A Comparative Analysis of Java Testing Frameworks
By Pranshu Sharma
Dependency Injection

Dependency Injection (DI) is one of the foundational techniques in Java backend development, helping build resilient and scalable applications tailored to modern software demands. DI simplifies dependency management by externalizing dependencies from the class itself, streamlining code maintenance, fostering modularity, and enhancing testability. Why is this technique crucial for Java developers? How does it effectively address common pain points? In this article, I present the practical benefits, essential practices, and real-world applications of Dependency Injection. Let's explore the practical strategies that underlie Dependency Injection in Java backend applications.

What Do We Need Dependency Injection For?

Testability

Testability, the extent to which you can test a system, is a critical aspect of Java backend development, and Dependency Injection is indispensable here. Say you have a Java class fetching data from an external database. If you don't use DI, the class will likely couple itself tightly to the database connection, which complicates unit testing. By employing DI, you can inject database dependencies, simplifying mocking during unit tests. For instance, Mockito, a popular Java mocking framework, will let you inject mock DataSource objects into classes, facilitating comprehensive testing without actual database connections.

Another illustrative example is the testing of classes that interact with external web services. Suppose a Java service class makes HTTP requests to a third-party API. By injecting a mock HTTP client dependency with DI, you can simulate various responses from the API during unit tests, achieving comprehensive test coverage.

Static calls within a codebase can also be mocked, although this is both trickier to implement and less efficient, performance-wise; you will also have to use specialized libraries like PowerMock. Additionally, static methods and classes marked as final are much more challenging to mock. Compared to the streamlined approach facilitated by DI, this complexity undermines the agility and effectiveness of unit testing.
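To make this concrete, here is a minimal sketch of the kind of unit test DI enables. The UserRepository and UserService types are hypothetical stand-ins defined inline to keep the example self-contained; the Mockito calls (mock, when, verify) and the JUnit assertion are standard API:

Java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical dependency and service, shown inline for self-containment.
interface UserRepository { String findNameById(int id); }

class UserService {
    private final UserRepository repository;
    UserService(UserRepository repository) { this.repository = repository; }
    String getUserName(int id) { return repository.findNameById(id); }
}

class UserServiceTest {
    @Test
    void fetchesUserNameWithoutARealDatabase() {
        UserRepository repository = mock(UserRepository.class); // no DB needed
        when(repository.findNameById(42)).thenReturn("Joana");

        UserService service = new UserService(repository); // constructor injection

        assertEquals("Joana", service.getUserName(42));
        verify(repository).findNameById(42); // the interaction is observable, too
    }
}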
Abstraction of Implementation

Achieving abstraction of implementation is a crucial technique for building flexible and maintainable codebases. DI can help you achieve this goal by decoupling classes from concrete implementations and promoting programming to interfaces. In practical terms, imagine you have a Java service class responsible for processing user data. You can use DI to inject a validation utility dependency instead of directly instantiating a validation utility class. For example, you can define a common interface for validation and inject different validation implementations at runtime. With this, you'll be able to switch between different validation strategies without modifying the service class. Let me illustrate this idea with a simple example:

Java
public interface Validator {
    boolean isValid(String data);
}

public class RegexValidator implements Validator {
    @Override
    public boolean isValid(String data) {
        // Regular expression-based logic
        return true;
    }
}

public class CustomValidator implements Validator {
    @Override
    public boolean isValid(String data) {
        // Custom logic
        return true;
    }
}

public class DataService {

    private final Validator validator;

    public DataService(Validator validator) {
        this.validator = validator;
    }

    public void processData(String data) {
        if (validator.isValid(data)) {
            // Processing valid data
        } else {
            // Handling invalid data
        }
    }
}

Here, the DataService class depends on a Validator interface, allowing different validation implementations to be injected. This approach makes your code more flexible and maintainable, as different validation strategies can easily be swapped without modifying the DataService class.
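A short usage sketch of the example above; the wiring is done by hand here for clarity, though in a Spring application the container would typically perform the injection:

Java
public class ValidationDemo {
    public static void main(String[] args) {
        // Swapping strategies requires no change to DataService itself.
        DataService regexBased = new DataService(new RegexValidator());
        DataService customBased = new DataService(new CustomValidator());

        regexBased.processData("user@example.com");
        customBased.processData("any other payload");
    }
}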
Readability and Understanding of Code

The third area where DI shines is ensuring the readability of code. Let's say that, during a Java codebase review, you encounter a class with external dependencies. Without DI, these dependencies might be tightly coupled within the class, making it challenging to decipher the code's logic. Using DI and constructor injection, for example, you make the dependencies explicit in the class's constructor signature, enhancing code readability and simplifying understanding of its functionality.

Moreover, DI promotes modularization and encapsulation by decoupling classes from their dependencies. With this approach, each class has a clearly defined responsibility and can be easily understood in isolation. Additionally, DI encourages the use of interfaces, further enhancing code readability by abstracting implementation details and promoting a contract-based approach to software design. And this was the second time I mentioned interfaces. An interface is a common Java construct, but in conjunction with DI, it serves as a powerful tool for decoupling dependencies and promoting flexibility in codebases. Below, I will talk about how this combo can be implemented in code, among other practical insights that will help you make the most of DI.

Best Practices for Dependency Injection

Use Interfaces

Interfaces serve as contracts defining the behavior expected from implementing classes, allowing for interchangeable implementations without modifying client code. As I mentioned above, if a change is required later for some dependency (e.g., to change the implementation from v1 to v2), then, if you are lucky, it may require zero changes on the caller's side. You'll just have to change the configuration to provide one implementation instead of another; and since the classes depend on an interface and not on an implementation, they won't require any changes.

For instance, let's say you have a Java service class requiring database access. By defining a DataAccess interface representing the database access operations and injecting it into the service class, you decouple the class from specific database implementations. With this approach, you simplify the swapping of database providers (e.g., from MySQL to PostgreSQL) without impacting the service class's functionality:

Java
public interface DataAccess {
    void saveData(String data);
}

public class MySQLDataAccess implements DataAccess {
    @Override
    public void saveData(String data) {
        // Saving data to MySQL
    }
}

public class PostgreSQLDataAccess implements DataAccess {
    @Override
    public void saveData(String data) {
        // Saving data to PostgreSQL
    }
}

public class DataService {

    private final DataAccess dataAccess;

    public DataService(DataAccess dataAccess) {
        this.dataAccess = dataAccess;
    }

    public void processData(String data) {
        dataAccess.saveData(data);
    }
}

Here, the DataService class depends on the DataAccess interface, allowing different database access implementations to be injected as needed.

Use DI to Wrap External Libraries

Incorporating external libraries into your Java backend may make maintaining testability a challenge due to tight coupling. DI enables you to encapsulate these dependencies within your own abstractions. Imagine that your Java class requires the functionality of an external library, such as cryptographic operations. Without DI, your class becomes closely tied to this library, making testing and adaptability difficult. Through DI, you can wrap the external library in an interface or abstraction layer. This artificial dependency can subsequently be injected into your class, enabling easy substitution during testing:

Java
public interface CryptoService {
    String encrypt(String data);
}

public class ExternalCryptoLibrary implements CryptoService {
    @Override
    public String encrypt(String data) {
        // Encryption logic using the external library
        return encryptedData;
    }
}

public class DataProcessor {

    private final CryptoService cryptoService;

    public DataProcessor(CryptoService cryptoService) {
        this.cryptoService = cryptoService;
    }

    public String processData(String data) {
        String encryptedData = cryptoService.encrypt(data);
        // Additional data processing logic
        return processedData;
    }
}

In this example, the DataProcessor class depends on the CryptoService interface. In production, you can use the ExternalCryptoLibrary implementation, which utilizes the external library for encryption. During testing, however, you can provide a mock implementation of the CryptoService interface, simulating encryption without invoking the actual external library.

Use Dependency Injection Judiciously

However powerful a technique DI is, you don't want to overuse it; excessive use may overcomplicate your code where and when it doesn't even help that much. Say you need to extract some functionality into a utility class (e.g., comparing two dates). If the logic is straightforward enough and is not likely to change, a static method is a sufficient solution. In such cases, static utility methods are simple and efficient, eliminating the overhead of DI when it's unnecessary. On the other hand, if you deal with business logic that can evolve within your app's lifetime, or something domain-related, that's a great candidate for dependency injection. Ultimately, you should base your decision to use or not use DI on the nature of the functionality in question and its expected evolution. Yes, DI shines when we speak about flexibility and adaptability, but traditional static methods offer simplicity for static and unchanging logic.
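For contrast, a hedged sketch of the static-utility case described above, where the logic is simple and stable enough that injection would add ceremony without benefit (the class and method names are illustrative):

Java
import java.time.LocalDate;

// Simple, stable logic: a plain static utility, no injection needed.
public final class DateUtils {

    private DateUtils() {} // no instances; this is a pure utility holder

    public static boolean isBefore(LocalDate first, LocalDate second) {
        return first.isBefore(second);
    }
}

A call site simply writes DateUtils.isBefore(start, end); there is no interface, no wiring, and nothing to mock, which is exactly the point when the behavior will never vary.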
Leverage Existing DI Frameworks

Try to use existing DI frameworks rather than building your own, even though creating one might be tempting – I should know, I've made one myself! ;) The advantages of existing frameworks usually outweigh the allure of crafting your own solution from scratch. Established frameworks offer reliability, predictability, and extensive documentation, and they've been refined through real-world use, ensuring stability in your projects. Plus, leveraging them grants you access to a trove of community knowledge and support – so opting for an existing framework may save you time and effort. While it might be tempting to reinvent the wheel, resisting that urge can streamline your development process and set you up for success.

* * *

Although this article touches on just a few of Dependency Injection's many benefits, I hope it served as a helpful and engaging exploration of this splendid technique. If you haven't already embraced DI in your Java development practices, I hope this piece has piqued your interest and inspired you to give it a try. And so – here's to smooth, maintainable code and a brighter future in your coding endeavors. Happy coding!

By German Urikh
Effective Java Application Testing With Cucumber and BDD

Increase your testing efficiency by utilizing Cucumber for Java application testing, fully integrated with Behavior-Driven Development (BDD). This guide provides comprehensive steps for project setup, scenario writing, step implementation, and reporting.

Introduction

Cucumber is a tool that supports Behavior-Driven Development (BDD). A good starting point for learning more about BDD and Cucumber is the Cucumber guides. BDD itself was introduced by Dan North in 2006; you can read his blog introducing BDD. Note that Cucumber is a tool that supports BDD: using Cucumber does not by itself mean you are practicing BDD. The Cucumber Myths is an interesting read in this regard. In the remainder of this blog, you will learn more about the features of Cucumber when developing a Java application. Do know that Cucumber is not limited to testing Java applications; a wide list of languages is supported. The sources used in this blog can be found on GitHub.

Prerequisites

Prerequisites for this blog are:
- Basic Java knowledge (Java 21 is used);
- Basic Maven knowledge;
- Basic comprehension of BDD (see the resources in the introduction).

Project Setup

An initial project can be set up by means of the Maven cucumber-archetype. Change the groupId, artifactId, and package to fit your preferences and execute the following command:

Shell
$ mvn archetype:generate \
  "-DarchetypeGroupId=io.cucumber" \
  "-DarchetypeArtifactId=cucumber-archetype" \
  "-DarchetypeVersion=7.17.0" \
  "-DgroupId=mycucumberplanet" \
  "-DartifactId=mycucumberplanet" \
  "-Dpackage=com.mydeveloperplanet.mycucumberplanet" \
  "-Dversion=1.0.0-SNAPSHOT" \
  "-DinteractiveMode=false"

The necessary dependencies are downloaded and the project structure is created. The output ends with the following:

Shell
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.226 s
[INFO] Finished at: 2024-04-28T10:25:16+02:00
[INFO] ------------------------------------------------------------------------

Open the project with your favorite IDE. If you are using IntelliJ, a message is shown asking you to install a plugin. Take a closer look at the pom:
- The dependencyManagement section contains BOMs (Bills of Materials) for Cucumber and JUnit;
- Several dependencies are added for Cucumber and JUnit;
- The build section contains the compiler plugin and the surefire plugin. The compiler is set to Java 1.8; change it to 21.
XML <dependencyManagement> <dependencies> <dependency> <groupId>io.cucumber</groupId> <artifactId>cucumber-bom</artifactId> <version>7.17.0</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.junit</groupId> <artifactId>junit-bom</artifactId> <version>5.10.2</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.cucumber</groupId> <artifactId>cucumber-java</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.cucumber</groupId> <artifactId>cucumber-junit-platform-engine</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.junit.platform</groupId> <artifactId>junit-platform-suite</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.13.0</version> <configuration> <encoding>UTF-8</encoding> <source>21</source> <target>21</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>3.2.5</version> </plugin> </plugins> </build> In the test directory, you will see a RunCucumberTest, StepDefinitions and an example.feature file in the resources section. The RunCucumberTest file is necessary to run the feature files and the corresponding steps. The feature files and steps will be discussed later on, do not worry too much about it now. Java @Suite @IncludeEngines("cucumber") @SelectPackages("com.mydeveloperplanet.mycucumberplanet") @ConfigurationParameter(key = PLUGIN_PROPERTY_NAME, value = "pretty") public class RunCucumberTest { } Run the tests, the output should be successful. Shell $ mvn test Write Scenario When practicing BDD, you will need to write a scenario first. Taken from the Cucumber documentation: When we do Behavior-Driven Development with Cucumber we use concrete examples to specify what we want the software to do. Scenarios are written before production code. They start their life as an executable specification. As the production code emerges, scenarios take on a role as living documentation and automated tests. The application you need to build for this blog is a quite basic one: You need to be able to add an employee; You need to retrieve the complete list of employees; You need to be able to remove all employees. A feature file follows the Given-When-Then (GWT) notation. A feature file consists of: A feature name. It is advised to maintain the same name as the file name; A feature description; One or more scenarios containing steps in the GWT notation. A scenario illustrates how the application should behave. Plain Text Feature: Employee Actions Actions to be made for an employee Scenario: Add employee Given an empty employee list When an employee is added Then the employee is added to the employee list Run the tests and you will notice now that the feature file is executed. The tests fail of course, but an example code is provided in order to create the step definitions. 
Shell [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running com.mydeveloperplanet.mycucumberplanet.RunCucumberTest Scenario: Add employee # com/mydeveloperplanet/mycucumberplanet/employee_actions.feature:4 Given an empty employee list When an employee is added Then the employee is added to the employee list [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.104 s <<< FAILURE! -- in com.mydeveloperplanet.mycucumberplanet.RunCucumberTest [ERROR] Add an employee.Add employee -- Time elapsed: 0.048 s <<< ERROR! io.cucumber.junit.platform.engine.UndefinedStepException: The step 'an empty employee list' and 2 other step(s) are undefined. You can implement these steps using the snippet(s) below: @Given("an empty employee list") public void an_empty_employee_list() { // Write code here that turns the phrase above into concrete actions throw new io.cucumber.java.PendingException(); } @When("an employee is added") public void an_employee_is_added() { // Write code here that turns the phrase above into concrete actions throw new io.cucumber.java.PendingException(); } @Then("the employee is added to the employee list") public void the_employee_is_added_to_the_employee_list() { // Write code here that turns the phrase above into concrete actions throw new io.cucumber.java.PendingException(); } at io.cucumber.core.runtime.TestCaseResultObserver.assertTestCasePassed(TestCaseResultObserver.java:69) at io.cucumber.junit.platform.engine.TestCaseResultObserver.assertTestCasePassed(TestCaseResultObserver.java:22) at io.cucumber.junit.platform.engine.CucumberEngineExecutionContext.lambda$runTestCase$4(CucumberEngineExecutionContext.java:114) at io.cucumber.core.runtime.CucumberExecutionContext.lambda$runTestCase$5(CucumberExecutionContext.java:136) at io.cucumber.core.runtime.RethrowingThrowableCollector.executeAndThrow(RethrowingThrowableCollector.java:23) at io.cucumber.core.runtime.CucumberExecutionContext.runTestCase(CucumberExecutionContext.java:136) at io.cucumber.junit.platform.engine.CucumberEngineExecutionContext.runTestCase(CucumberEngineExecutionContext.java:109) at io.cucumber.junit.platform.engine.NodeDescriptor$PickleDescriptor.execute(NodeDescriptor.java:168) at io.cucumber.junit.platform.engine.NodeDescriptor$PickleDescriptor.execute(NodeDescriptor.java:90) at java.base/java.util.ArrayList.forEach(ArrayList.java:1596) at java.base/java.util.ArrayList.forEach(ArrayList.java:1596) [INFO] [INFO] Results: [INFO] [ERROR] Errors: [ERROR] The step 'an empty employee list' and 2 other step(s) are undefined. You can implement these steps using the snippet(s) below: @Given("an empty employee list") public void an_empty_employee_list() { // Write code here that turns the phrase above into concrete actions throw new io.cucumber.java.PendingException(); } @When("an employee is added") public void an_employee_is_added() { // Write code here that turns the phrase above into concrete actions throw new io.cucumber.java.PendingException(); } @Then("the employee is added to the employee list") public void the_employee_is_added_to_the_employee_list() { // Write code here that turns the phrase above into concrete actions throw new io.cucumber.java.PendingException(); } [INFO] [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0 Add Step Definitions Add the example code from the output above into the StepDefinitions file. Run the tests again. 
Of course, they fail, but this time a PendingException is thrown, indicating that the steps need to be implemented.

Shell
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running com.mydeveloperplanet.mycucumberplanet.RunCucumberTest

Scenario: Add employee   # com/mydeveloperplanet/mycucumberplanet/employee_actions.feature:4
  Given an empty employee list   # com.mydeveloperplanet.mycucumberplanet.StepDefinitions.an_empty_employee_list()
      io.cucumber.java.PendingException: TODO: implement me
        at com.mydeveloperplanet.mycucumberplanet.StepDefinitions.an_empty_employee_list(StepDefinitions.java:12)
        at ✽.an empty employee list(classpath:com/mydeveloperplanet/mycucumberplanet/employee_actions.feature:5)
  When an employee is added   # com.mydeveloperplanet.mycucumberplanet.StepDefinitions.an_employee_is_added()
  Then the employee is added to the employee list   # com.mydeveloperplanet.mycucumberplanet.StepDefinitions.the_employee_is_added_to_the_employee_list()
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.085 s <<< FAILURE! -- in com.mydeveloperplanet.mycucumberplanet.RunCucumberTest
[ERROR] Add an employee.Add employee -- Time elapsed: 0.032 s <<< ERROR!
io.cucumber.java.PendingException: TODO: implement me
    at com.mydeveloperplanet.mycucumberplanet.StepDefinitions.an_empty_employee_list(StepDefinitions.java:12)
    at ✽.an empty employee list(classpath:com/mydeveloperplanet/mycucumberplanet/employee_actions.feature:5)
[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR]   TODO: implement me
[INFO]
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0

Implement Application

The first scenario is defined; let's implement the application. Create a basic EmployeeService which provides the needed functionality: an employee can be added to an employee list (which is just a map of employees), the list of employees can be retrieved, and the list can be cleared.

Java
public class EmployeeService {

    private final HashMap<Long, Employee> employees = new HashMap<>();
    private Long index = 0L;

    public void addEmployee(String firstName, String lastName) {
        Employee employee = new Employee(firstName, lastName);
        employees.put(index, employee);
        index++;
    }

    public Collection<Employee> getEmployees() {
        return employees.values();
    }

    public void removeEmployees() {
        employees.clear();
    }
}

The employee is a basic record.

Java
public record Employee(String firstName, String lastName) {
}

Implement Step Definitions

Now that the service exists, you can implement the step definitions. It is rather straightforward: you create the service and invoke its methods for the Given-When implementations. Verifying the result is done with Assertions, just as you would do in your unit tests.

Java
public class StepDefinitions {

    private final EmployeeService service = new EmployeeService();

    @Given("an empty employee list")
    public void an_empty_employee_list() {
        service.removeEmployees();
    }

    @When("an employee is added")
    public void an_employee_is_added() {
        service.addEmployee("John", "Doe");
    }

    @Then("the employee is added to the employee list")
    public void the_employee_is_added_to_the_employee_list() {
        assertEquals(1, service.getEmployees().size());
    }
}

Run the tests, which are successful now.
Shell
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running com.mydeveloperplanet.mycucumberplanet.RunCucumberTest

Scenario: Add employee   # com/mydeveloperplanet/mycucumberplanet/employee_actions.feature:4
  Given an empty employee list   # com.mydeveloperplanet.mycucumberplanet.StepDefinitions.an_empty_employee_list()
  When an employee is added   # com.mydeveloperplanet.mycucumberplanet.StepDefinitions.an_employee_is_added()
  Then the employee is added to the employee list   # com.mydeveloperplanet.mycucumberplanet.StepDefinitions.the_employee_is_added_to_the_employee_list()
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 s -- in com.mydeveloperplanet.mycucumberplanet.RunCucumberTest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

Extra Scenario

Add a second scenario that tests the removal of employees. Add the scenario to the feature file.

Plain Text
Scenario: Remove employees
  Given a filled employee list
  When the employees list is removed
  Then the employee list is empty

Implement the step definitions.

Java
@Given("a filled employee list")
public void a_filled_employee_list() {
    service.addEmployee("John", "Doe");
    service.addEmployee("Miles", "Davis");
    assertEquals(2, service.getEmployees().size());
}

@When("the employees list is removed")
public void the_employees_list_is_removed() {
    service.removeEmployees();
}

@Then("the employee list is empty")
public void the_employee_list_is_empty() {
    assertEquals(0, service.getEmployees().size());
}

The requirements listed earlier also include retrieving the complete list of employees; a possible scenario for that is sketched below.
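The original sources leave the retrieval requirement as an exercise, so the following is only a sketch of how it could look (my addition). It reuses the existing "a filled employee list" step and EmployeeService, and assumes a retrievedEmployees field is added to StepDefinitions to carry state between steps; the step wording is hypothetical.

Plain Text
Scenario: Retrieve employees
  Given a filled employee list
  When the employee list is retrieved
  Then the employee list contains 2 employees

Java
// Assumed new field in StepDefinitions:
// private Collection<Employee> retrievedEmployees;

@When("the employee list is retrieved")
public void the_employee_list_is_retrieved() {
    retrievedEmployees = service.getEmployees();
}

@Then("the employee list contains {int} employees")
public void the_employee_list_contains_employees(int expectedCount) {
    assertEquals(expectedCount, retrievedEmployees.size());
}

The {int} placeholder is a standard Cucumber expression, so the expected count lives in the scenario text rather than being hard-coded in Java.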
Tags

In order to run a subset of scenarios, you can add tags to features and scenarios.

Plain Text
@regression
Feature: Employee Actions
  Actions to be made for an employee

  @TC_01
  Scenario: Add employee
    Given an empty employee list
    When an employee is added
    Then the employee is added to the employee list

  @TC_02
  Scenario: Remove employees
    Given a filled employee list
    When the employees list is removed
    Then the employee list is empty

Run only the test annotated with TC_01 by using a filter.

Shell
$ mvn clean test -Dcucumber.filter.tags="@TC_01"
...
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running com.mydeveloperplanet.mycucumberplanet.RunCucumberTest
[WARNING] Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.233 s -- in com.mydeveloperplanet.mycucumberplanet.RunCucumberTest
[INFO]
[INFO] Results:
[INFO]
[WARNING] Tests run: 2, Failures: 0, Errors: 0, Skipped: 1

Reporting

When executing tests, it is often required that appropriate reporting is available. Until now, only console output has been shown. Generate an HTML report by adding the following configuration parameter to the RunCucumberTest.

Java
@Suite
@IncludeEngines("cucumber")
@SelectPackages("com.mydeveloperplanet.mycucumberplanet")
@ConfigurationParameter(key = PLUGIN_PROPERTY_NAME, value = "pretty")
@ConfigurationParameter(key = PLUGIN_PROPERTY_NAME, value = "html:target/cucumber-reports.html")
public class RunCucumberTest {
}

After running the tests, a rather basic HTML report is available in the specified path. Several third-party reporting plugins are available; the cucumber-reporting-plugin offers a more elaborate report. Add the dependency to the pom.

XML
<dependency>
    <groupId>me.jvt.cucumber</groupId>
    <artifactId>reporting-plugin</artifactId>
    <version>5.3.0</version>
</dependency>

Enable the report in RunCucumberTest.

Java
@Suite
@IncludeEngines("cucumber")
@SelectPackages("com.mydeveloperplanet.mycucumberplanet")
@ConfigurationParameter(key = PLUGIN_PROPERTY_NAME, value = "pretty")
@ConfigurationParameter(key = PLUGIN_PROPERTY_NAME, value = "html:target/cucumber-reports.html")
@ConfigurationParameter(key = PLUGIN_PROPERTY_NAME, value = "me.jvt.cucumber.report.PrettyReports:target/cucumber")
public class RunCucumberTest {
}

Run the tests; the report is generated in the target/cucumber directory. Open the file starting with report-feature.

Conclusion

Cucumber has great support for BDD. It is quite easy to use, and this blog only scratched the surface of its capabilities. An advantage is that you can make use of JUnit and Assertions, and the steps can be implemented in Java; there is no need to learn a new language when your application is also built in Java.

By Gunter Rotsaert
The Magic of Quarkus With Vert.x in Reactive Programming

Reactive programming has significantly altered how developers tackle modern application development, particularly in environments that demand top-notch performance and scalability. Quarkus, a Kubernetes-native Java framework optimized for GraalVM and HotSpot, fully embraces the principles of reactive programming to craft applications that are responsive, resilient, and elastic. This article comprehensively explores the impact and effectiveness of reactive programming in Quarkus, providing detailed insights and practical examples in Java to illustrate its transformative capabilities.

What Is Reactive Programming?

Reactive programming is a programming paradigm that focuses on handling asynchronous data streams and the propagation of change. It enables developers to write code that responds to changes in real time, such as user inputs, data updates, or messages from other services. This approach is particularly well-suited for building applications that require real-time responsiveness and the ability to process continuous streams of data. By leveraging reactive programming, developers can create more interactive and responsive applications that adapt to changing conditions and events.

Key features of reactive programming include:
- Asynchronous: Non-blocking operations that allow multiple tasks to run concurrently
- Event-driven: Actions are triggered by events such as user actions or data changes
- Resilient: Systems remain responsive under load by handling failures gracefully
- Scalable: Efficient resource usage to handle a high number of requests

Why Quarkus for Reactive Programming?

Quarkus, a framework designed to harness the advantages of reactive programming, aims to provide a streamlined and efficient environment for developing reactive applications. There are several compelling reasons to consider Quarkus for such applications:
- Native support for reactive frameworks: Quarkus seamlessly integrates with popular reactive libraries such as Vert.x, Mutiny, and Reactive Streams. This native support allows developers to leverage the full power of these frameworks within the Quarkus environment.
- Efficient resource usage: Quarkus's native image generation and efficient runtime result in lower memory consumption and faster startup times. Applications built with Quarkus can therefore be more resource-efficient, leading to potential cost savings and improved performance.
- Developer productivity: Quarkus offers features like live coding that significantly improve the development experience, letting developers iterate more quickly for faster development cycles and more productive software development.

Getting Started With Reactive Programming in Quarkus

Let's dive into a simple example to demonstrate reactive programming in Quarkus using Java. We'll create a basic REST API that fetches data asynchronously. As a warm-up, the short sketch below illustrates the Mutiny pipeline style the example builds on.
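Here is a minimal, self-contained sketch (my addition, not part of the original tutorial) of Mutiny's pipeline style: a Uni is a lazy, asynchronous supplier of a single item that only runs once something subscribes to it. It uses only the io.smallrye.mutiny API that the project below pulls in.

Java
import io.smallrye.mutiny.Uni;

public class MutinyPipelineSketch {
    public static void main(String[] args) {
        Uni<String> greeting = Uni.createFrom()
                .item(() -> "hello, reactive world")      // lazily produce an item
                .onItem().transform(String::toUpperCase)  // react to the item
                .onFailure().recoverWithItem("fallback"); // stay resilient on errors

        // Nothing has run yet: pipelines are lazy until subscribed.
        greeting.subscribe().with(
                item -> System.out.println("Received: " + item),
                failure -> System.err.println("Failed: " + failure));
    }
}

The REST example that follows returns exactly this kind of Uni, letting Quarkus subscribe to it when an HTTP request arrives.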
Step 1: Setting Up the Project

First, create a new Quarkus project:

Shell
mvn io.quarkus:quarkus-maven-plugin:create \
    -DprojectGroupId=com.example \
    -DprojectArtifactId=reactive-quarkus \
    -DclassName="com.example.GreetingResource" \
    -Dpath="/greeting"
cd reactive-quarkus

Add the necessary dependencies in your pom.xml:

XML
<dependencies>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-resteasy-reactive</artifactId>
    </dependency>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-smallrye-mutiny</artifactId>
    </dependency>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-vertx</artifactId>
    </dependency>
</dependencies>

Step 2: Start Coding

Now, create a simple REST endpoint using Mutiny, a reactive programming library designed for simplicity and performance:

Java
import io.smallrye.mutiny.Uni;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/greeting")
public class GreetingResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Uni<Greeting> greeting() {
        return Uni.createFrom().item(() -> new Greeting("Hello, Reactive World!"))
                .onItem().delayIt().byMillis(1000); // Simulate delay
    }

    public static class Greeting {
        public String message;

        public Greeting(String message) {
            this.message = message;
        }
    }
}

In this example:
- We define a REST endpoint /greeting that produces JSON.
- The greeting method returns a Uni<Greeting>, which represents a single value or failure (a concept from Mutiny).
- We simulate a delay using onItem().delayIt().byMillis(1000) to mimic an asynchronous operation.

Step 3: Running the Application

To run the application, use the Quarkus development mode:

Shell
./mvnw quarkus:dev

Now, visit http://localhost:8080/greeting to see the response:

JSON
{
  "message": "Hello, Reactive World!"
}

Unit Testing Reactive Endpoints

When testing reactive endpoints in Quarkus, it's important to verify that the application functions correctly under various conditions. Quarkus integrates seamlessly with JUnit 5, allowing developers to write and execute unit tests that confirm their applications behave as expected.

Step 1: Adding Test Dependencies

Ensure you have the following dependencies in your pom.xml for testing:

XML
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-junit5</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>rest-assured</artifactId>
    <scope>test</scope>
</dependency>

Step 2: Writing a Unit Test

Create a test class to verify the behavior of the GreetingResource (note that the REST Assured Java package is io.restassured, even though the Maven groupId is io.rest-assured):

Java
import io.quarkus.test.junit.QuarkusTest;
import io.restassured.RestAssured;
import org.junit.jupiter.api.Test;

import static org.hamcrest.CoreMatchers.is;

@QuarkusTest
public class GreetingResourceTest {

    @Test
    public void testGreetingEndpoint() {
        RestAssured.when().get("/greeting")
                .then()
                .statusCode(200)
                .body("message", is("Hello, Reactive World!"));
    }
}

In this test:
- We use the @QuarkusTest annotation to enable Quarkus testing features.
- We use RestAssured to send an HTTP GET request to the /greeting endpoint and verify the response status code and body.

Step 3: Running the Tests

To run the tests, use the Maven test command:

Shell
./mvnw test

The test will execute and verify that the /greeting endpoint returns the expected response.

Advanced Usage: Integrating With Databases

Let's extend the example by integrating a reactive database client.
We’ll use the reactive PostgreSQL client provided by Vert.x. Add the dependency for the reactive PostgreSQL client: XML <dependency> <groupId>io.quarkiverse.reactive</groupId> <artifactId>quarkus-reactive-pg-client</artifactId> </dependency> Configure the PostgreSQL client in application.properties: Shell quarkus.datasource.db-kind=postgresql quarkus.datasource.username=your_username quarkus.datasource.password=your_password quarkus.datasource.reactive.url=postgresql://localhost:5432/your_database Create a repository class to handle database operations: Java import io.smallrye.mutiny.Uni; import io.vertx.mutiny.pgclient.PgPool; import io.vertx.mutiny.sqlclient.Row; import io.vertx.mutiny.sqlclient.RowSet; import javax.enterprise.context.ApplicationScoped; import javax.inject.Inject; @ApplicationScoped public class GreetingRepository { @Inject PgPool client; public Uni<String> findGreeting() { return client.query("SELECT message FROM greetings WHERE id = 1") .execute() .onItem().transform(RowSet::iterator) .onItem().transform(iterator -> iterator.hasNext() ? iterator.next().getString("message") : "Hello, default!"); } } Update the GreetingResource to use the repository: Java import io.smallrye.mutiny.Uni; import javax.inject.Inject; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; @Path("/greeting") public class GreetingResource { @Inject GreetingRepository repository; @GET @Produces(MediaType.APPLICATION_JSON) public Uni<Greeting> greeting() { return repository.findGreeting() .onItem().transform(Greeting::new); } public static class Greeting { public String message; public Greeting(String message) { this.message = message; } } } This setup demonstrates how to perform asynchronous database operations using the reactive PostgreSQL client. The findGreeting method queries the database and returns a Uni<String> representing the greeting message. Handling Errors in Reactive Programming Handling errors gracefully is a critical aspect of building resilient reactive applications. Mutiny provides several operators to handle errors effectively. Update the GreetingRepository to include error handling: Java public Uni<String> findGreeting() { return client.query("SELECT message FROM greetings WHERE id = 1") .execute() .onItem().transform(RowSet::iterator) .onItem().transform(iterator -> iterator.hasNext() ? iterator.next().getString("message") : "Hello, default!") .onFailure().recoverWithItem("Hello, fallback!"); } In this updated method: We use onFailure().recoverWithItem("Hello, fallback!") to provide a fallback message in case of any failure during the database query. Reactive Event Bus With Vert.x Quarkus seamlessly integrates with Vert.x, a powerful reactive toolkit, to provide a high-performance event bus for developing sophisticated event-driven applications. This event bus allows various components of your application to communicate asynchronously, facilitating efficient and scalable interaction between different parts of the system. 
Add the necessary Vert.x dependencies: XML <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-vertx</artifactId> </dependency> Create a Vert.x consumer to handle events: Java import io.quarkus.vertx.ConsumeEvent; import io.smallrye.mutiny.Uni; import javax.enterprise.context.ApplicationScoped; @ApplicationScoped public class GreetingService { @ConsumeEvent("greeting") public Uni<String> generateGreeting(String name) { return Uni.createFrom().item(() -> "Hello, " + name + "!") .onItem().delayIt().byMillis(500); // Simulate delay } } Now, Update the GreetingResource to send events to the event bus: Java import io.smallrye.mutiny.Uni; import io.vertx.mutiny.core.eventbus.EventBus; import javax.inject.Inject; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.QueryParam; import javax.ws.rs.core.MediaType; @Path("/greeting") public class GreetingResource { @Inject EventBus eventBus; @GET @Produces(MediaType.APPLICATION_JSON) public Uni<Greeting> greeting(@QueryParam("name") String name) { return eventBus.<String>request("greeting", name) .onItem().transform(reply -> new Greeting(reply.body())); } public static class Greeting { public String message; public Greeting(String message) { this.message = message; } } } In this example: We define an event consumer GreetingService that listens for greeting events and generates a greeting message. The GreetingResource sends a greeting event to the event bus and waits for the response asynchronously. Comparison: Quarkus vs. Spring in Reactive Capabilities When building reactive applications, Quarkus and Spring offer robust frameworks, each with unique approaches and strengths. 1. Framework Integration Spring Spring Boot leverages Spring WebFlux for reactive programming and seamlessly integrates with the Spring ecosystem, supporting Project Reactor as its reactive library. Quarkus Quarkus utilizes Vert.x and Mutiny for reactive programming, providing native support from the ground up and optimizing for performance and efficiency. 2. Performance and Resource Efficiency Spring While Spring Boot with WebFlux offers good performance for reactive applications, it may be heavier in terms of resource usage compared to Quarkus. Quarkus Quarkus is designed to be lightweight and fast, showcasing lower memory consumption and faster startup times, especially when compiled to a native image with GraalVM. 3. Developer Experience Spring Spring Boot offers a mature ecosystem with extensive documentation and strong community support, making it easy for developers familiar with Spring to adopt reactive programming. Quarkus Quarkus provides an excellent developer experience with features like live coding and quick feedback loops. Its integration with reactive libraries like Mutiny makes it intuitive for developers new to reactive programming. 4. Cloud-Native and Microservices Spring Widely used for building microservices and cloud-native applications, Spring Boot provides a rich set of tools and integrations for deploying applications to the cloud. Quarkus Designed with cloud-native and microservices architectures in mind, Quarkus showcases efficient resource usage and strong support for Kubernetes, making it a compelling choice for cloud deployments. 5. Ecosystem and Community Spring Boasting a vast ecosystem with numerous extensions and integrations, Spring is supported by a large community of developers. 
Quarkus Rapidly gaining popularity, Quarkus offers a comprehensive set of extensions, and its community is also expanding, contributing to its ecosystem. Conclusion Reactive programming in Quarkus provides a cutting-edge approach to enhancing the performance and scalability of Java applications. By harnessing the capabilities of reactive streams and asynchronous operations, Quarkus empowers developers to build applications that are not only robust and high-performing, but also well-suited for modern cloud-native environments. The efficiency and power of Quarkus, combined with its rich ecosystem of reactive libraries, offer developers the tools they need to handle a wide range of tasks, from simple asynchronous operations to complex data streams, making Quarkus a formidable platform for reactive programming in Java.

By Reza Ganji
Understanding and Learning NoSQL Databases With Java: Three Key Benefits

In today's rapidly evolving technological landscape, it is crucial for any business or application to efficiently manage and utilize data. NoSQL databases have emerged as an alternative to traditional relational databases, offering flexibility, scalability, and performance advantages. These benefits become even more pronounced when combined with Java, a robust and widely-used programming language. This article explores three key benefits of understanding and learning NoSQL databases with Java, highlighting the polyglot philosophy and its efficiency in software architecture. Enhanced Flexibility and Scalability One significant benefit of NoSQL databases is their capability to handle various data models, such as key-value pairs, documents, wide-column stores, and graph databases. This flexibility enables developers to select the most suitable data model for their use case. When combined with Java, a language renowned for its portability and platform independence, the adaptability of NoSQL databases can be fully utilized. Improved Performance and Efficiency Performance is a crucial aspect of database management, and NoSQL databases excel in this area because of their distributed nature and optimized storage mechanisms. When developers combine these performance-enhancing features with Java, they can create applications that are not only efficient but also high-performing. Embracing the Polyglot Philosophy The polyglot philosophy in software development encourages using multiple languages, frameworks, and databases within a single application to take advantage of each one's strengths. Understanding and learning NoSQL databases with Java perfectly embodies this approach, offering several benefits for modern software architecture. Leveraging Eclipse JNoSQL for Success With NoSQL Databases and Java To fully utilize NoSQL databases with Java, developers can use Eclipse JNoSQL, a framework created to streamline the integration and management of NoSQL databases in Java applications. Eclipse JNoSQL supports over 30 databases and is aligned with Jakarta NoSQL and Jakarta Data specifications, providing a comprehensive solution for modern data handling needs. Eclipse JNoSQL: Bridging Java and NoSQL Databases Eclipse JNoSQL is a framework that simplifies the interaction between Java applications and NoSQL databases. With support for over 30 different NoSQL databases, Eclipse JNoSQL enables developers to work efficiently across various data stores without compromising flexibility or performance. Key features of Eclipse JNoSQL include: Support for Jakarta Data Query Language: This feature enhances the power and flexibility of querying across databases. Cursor pagination: Processes large datasets efficiently by utilizing cursor-based pagination rather than traditional offset-based pagination NoSQLRepository: Simplifies the creation and management of repository interfaces New column and document templates: Simplify data management with predefined templates Jakarta NoSQL and Jakarta Data Specifications Eclipse JNoSQL is designed to support Jakarta NoSQL and Jakarta Data specifications, standardizing and simplifying database interactions in Java applications. Jakarta NoSQL: This comprehensive framework offers a unified API and a set of powerful annotations, making it easier to work with various NoSQL data stores while maintaining flexibility and productivity. 
Jakarta Data: This specification provides an API for easier data access across different database types, enabling developers to create custom query methods on repository interfaces. Introducing Eclipse JNoSQL 1.1.1 The latest release, Eclipse JNoSQL 1.1.1, includes significant enhancements and new features, making it a valuable tool for Java developers working with NoSQL databases. Key updates include: Support to cursor pagination Support to Jakarta Data Query Fixes several bugs and enhances performance For more details, visit the Eclipse JNoSQL Release 1.1.1 notes. Practical Example: Java SE Application With Oracle NoSQL To illustrate the practical use of Eclipse JNoSQL, let's consider a Java SE application using Oracle NoSQL. This example showcases the effectiveness of cursor pagination and JDQL for querying. The first pagination method we will discuss is Cursor pagination, which offers a more efficient way to handle large datasets than traditional offset-based pagination. Below is a code snippet demonstrating cursor pagination with Oracle NoSQL. Java @Repository public interface BeerRepository extends OracleNoSQLRepository<Beer, String> { @Find @OrderBy("hop") CursoredPage<Beer> style(@By("style") String style, PageRequest pageRequest); @Query("From Beer where style = ?1") List<Beer> jpql(String style); } public class App4 { public static void main(String[] args) { var faker = new Faker(); try (SeContainer container = SeContainerInitializer.newInstance().initialize()) { BeerRepository repository = container.select(BeerRepository.class).get(); for (int index = 0; index < 100; index++) { Beer beer = Beer.of(faker); // repository.save(beer); } PageRequest pageRequest = PageRequest.ofSize(3); var page1 = repository.style("Stout", pageRequest); System.out.println("Page 1"); page1.forEach(System.out::println); PageRequest pageRequest2 = page1.nextPageRequest(); var page2 = repository.style("Stout", pageRequest2); System.out.println("Page 2"); page2.forEach(System.out::println); System.out.println("JDQL query: "); repository.jpql("Stout").forEach(System.out::println); } System.exit(0); } } In this example, BeerRepository efficiently retrieves and paginates data using cursor pagination. The style method employs cursor pagination, while the jpql method demonstrates a JDQL query. API Changes and Compatibility Breaks in Eclipse JNoSQL 1.1.1 The release of Eclipse JNoSQL 1.1.1 includes significant updates and enhancements aimed at improving functionality and aligning with the latest specifications. However, it's important to note that these changes may cause compatibility issues for developers, which need to be understood and addressed in their projects. 1. Annotations Moved to Jakarta NoSQL Specification Annotations like Embeddable and Inheritance were previously included in the Eclipse JNoSQL framework. In the latest version, however, they have been relocated to the Jakarta NoSQL specification to establish a more consistent approach across various NoSQL databases. As a result, developers will need to update their imports and references to these annotations. Java // Old import import org.jnosql.mapping.Embeddable; // New import import jakarta.nosql.Embeddable; The updated annotations can be accessed at the Jakarta NoSQL GitHub repository. 2. Unified Query Packages To simplify and unify the query APIs, SelectQuery and DeleteQuery have been consolidated into a single package. 
Consequently, specific query classes like DocumentQuery, DocumentDeleteQuery, ColumnQuery, and ColumnDeleteQuery have been removed. Impact: Any code using these removed classes will no longer compile and must be refactored to use the new unified classes. Solution: Refactor your code to use the new query classes in the org.eclipse.jnosql.communication.semistructured package. For example: Java // Old usage DocumentQuery query = DocumentQuery.select().from("collection").where("field").eq("value").build(); // New usage SelectQuery query = SelectQuery.select().from("collection").where("field").eq("value").build(); Similar adjustments will be needed for delete queries. 3. Migration of Templates Templates such as ColumnTemplate, KeyValueTemplate, and DocumentTemplate have been moved from the Jakarta Specification to Eclipse JNoSQL. Java // Old import import jakarta.nosql.document.DocumentTemplate; // New import import org.eclipse.jnosql.mapping.document.DocumentTemplate; 4. Default Query Language: Jakarta Data Query Language (JDQL) Another significant update in Eclipse JNoSQL 1.1.1 is the adoption of Jakarta Data Query Language (JDQL) as the default query language. JDQL provides a standardized way to define queries using annotations, making it simpler and more intuitive for developers. Conclusion The use of a NoSQL database is a powerful asset in modern applications. It allows software architects to employ polyglot persistence, utilizing the best persistence capability in each scenario. Eclipse JNoSQL assists Java developers in implementing these NoSQL capabilities into their applications.

By Otavio Santana
How To Handle Shadow Root in Selenium Java

When automating tests using Selenium, there may be a scenario where you can't find an element on a web page even though it seems to be in the Document Object Model (DOM). In this case, Selenium throws a NoSuchElementException. One common reason for this error is the presence of Shadow DOM elements: although the element is present in the DOM, it's encapsulated within a Shadow root and requires special handling to access it for automation testing. In this Selenium Java tutorial, we'll delve into Shadow root elements, how they work, and, most importantly, how to handle Shadow root in Selenium Java.

What Is a Document Object Model?

A Document Object Model is a language-independent and cross-platform interface that represents an HTML or XML document as a tree structure. In this tree structure, each node is an object that represents a part of the document. When a web page is loaded in the browser, the HTML code is converted into a hierarchical representation of the HTML document called a DOM tree. It has a data model consisting of root nodes and a series of child node elements, attributes, etc.

Following is an HTML document as loaded on a web page:

HTML
<html>
  <head>
    <title>LambdaTest</title>
  </head>
  <body>
    <h1>Welcome to Testu Conference</h1>
    <p>Decode the future of testing</p>
  </body>
</html>

The above HTML code is represented as a DOM tree as follows:

Plain Text
- Document (root)
  - html
    - head
      - title
        - "LambdaTest"
    - body
      - h1
        - "Welcome to Testu Conference"
      - p
        - "Decode the future of testing"

Overview of Web Components

Web components are a popular approach to building micro frontends and help in developing reusable custom elements. They support encapsulation and interoperability of individual HTML elements. Web components are based on existing web standards: widgets and custom components built on the web component standards can be used with any JavaScript library or framework that works with HTML, and they work across all modern browsers.

The four web component standards are:
- Custom elements
- HTML templates
- HTML imports
- Shadow DOM

In the next section of this tutorial on handling Shadow root in Selenium Java, we will learn more about the Shadow root element of Shadow DOM.

What Is Shadow Root?

Shadow root is a part of Shadow DOM. In Shadow DOM, the web browser renders the DOM elements without adding them to the main DOM tree. It is used to achieve encapsulation in HTML documents: the style and behavior of one part of the document can be kept hidden and separate from the other code in the same HTML document to avoid interference. Ideally, the Shadow DOM elements are hidden; however, they can be seen using the developer tools option in the browser, where the node labeled #shadow-root is the entry point to the Shadow DOM.

The element from which the Shadow DOM starts is called the Shadow Host. A Shadow Tree is the DOM tree inside the Shadow DOM, and the root node or topmost node of the Shadow Tree is called the Shadow Root. A Shadow Boundary is where the Shadow DOM ends and the regular DOM begins. We need to locate the Shadow Root first, as it is the place from which the Shadow DOM begins. Before we dive deep into handling Shadow Root in Selenium Java, let's learn different ways to find Shadow Root using developer tools. First, though, the short sketch below shows how a page creates a shadow root in the first place.
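For context, here is a minimal sketch (my addition, not from the original tutorial) of how a page creates a shadow root with the standard attachShadow API; the host element's id and the contents are hypothetical.

HTML
<div id="host"></div>
<script>
  // Attach an open shadow root to the host element.
  // "open" roots are the kind that Selenium's getShadowRoot() can access;
  // "closed" roots are not reachable via element.shadowRoot.
  const host = document.querySelector('#host');
  const shadowRoot = host.attachShadow({ mode: 'open' });

  // Markup and styles inside the shadow root are encapsulated:
  // this <p> style does not leak out to the rest of the page.
  shadowRoot.innerHTML = `
    <style>p { color: green; }</style>
    <p>Rendered inside the shadow DOM</p>
  `;
</script>

Elements created this way appear under a #shadow-root node in the browser's developer tools, which is exactly what the next section teaches you to inspect.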
Finding Shadow Root Using Developer Tools In this section of this tutorial on handling Shadow Root in Selenium Java, we will look at how to find Shadow Root elements using developer tools. Shadow DOM elements are particularly useful when creating custom elements. Shadow DOM is used to encapsulate an element's HTML, CSS, and JS, thus producing a web component. As the Shadow DOM elements are encapsulated from the regular DOM, they are not directly accessible in the Developer Tools window, as they are hidden. We need to enable the “Show user agent shadow DOM” preference in the Developer Tools window. Enabling the “Show User Agent Shadow Dom” The steps to enable the “Show user agent shadow DOM” preference are shown below. Step 1 Open the Developer Tools window in the Chrome browser by pressing F12 or clicking on the three dots on the right top of the browser. After that, navigate to More Tools > Developer Tools. Step 2 Click on the gear icon on the top right corner of the Developer Tools window to open the preferences screen and tick on the “Show user agent shadow DOM” option. We have set the preference successfully. Press the Escape key to move back to the Developer Tools option window to find and validate the Shadow root element. Locating and Validating the Shadow Root Element in the Browser We will use the Menu Shadow DOM Demo page for demonstration purposes. This page has a menu with Shadow DOM elements containing four menus: File, Edit, View, and Encoding. We will find the locator for the File menu and also validate it in the Chrome browser console. Let’s go step-by-step and locate the Shadow Root element in the browser. Step 1 Navigate to the Menu Shadow DOM Demo page and open the Developer Tools window. Step 2 Expand the node and check for the Shadow Root element. Step 3 Locate the File menu by clicking on the arrow icon on the top left of the Developer Tools window. Step 4 Here, the ID selector used for the File menu is a dynamic value. It changes every time the page is refreshed; hence, we cannot use this selector. So, let’s create the CSS Selector using the parent-child relationship in the DOM. First, we will have to consider the selector before the #shadow-root. Here, let’s take the class name smart-ui-component. Step 5 We need to take a locator from the first HTML tagline after #shadow-root, as it will be the parent of the Shadow Root element. Next, locate the File menu WebElement using its respective HTML tag and class name. We will use the CSS Selector, focusing on the class name and HTML tags here. The ID selector in the DOM for this web element is dynamic, changing with each refresh of the web page. Next, we need to get the text of the File menu, which is File, and as seen in the Properties tab on the right-hand side of the window, the attribute label can be used for it. So, the final CSS Selector that we can use for locating the File menu is: To locate the Shadow host, use the class name .smart-ui-component. To locate the File menu inside the Shadow root, use the .smart-element .smart-menu-main-container .smart-element. Once the File menu WebElement is located, use the attribute label to get its text. We have the CSS Selector .smart-ui-component > .smart-element .smart-menu-main-container .smart-element. However, we can not directly use this selector in the Elements tab to locate the web element as it is a Shadow Root element. It is better to validate this selector in the browser before we use it in our tests using Selenium WebDriver as it will save time. 
In case the selector is not valid, Selenium WebDriver will throw NoSuchElementException, and we will again have to check for the valid selector. To validate the selector in the Developer Tools window, use the following steps: Step 1: Navigate to the browser console. Step 2: Use the querySelector with the shadowRoot command and check the output in the console. The following query can be used to locate the Shadow host in the console: document.querySelector('.smart-ui-component').shadowRoot.querySelector('.smart-element .smart-menu-main-container .smart-element ').getAttribute('label') After entering the above query, press the Enter key to validate if we get the text of the menu name File in the output. We can check out the text File printed in the console output, thus making the validation for the selector successful. We can use this selector while running automated tests using Selenium with Java. In this section, we have learned how to handle Shadow Root in Selenium Java using developer tools. In the next section, we will explore how to handle Shadow root in Selenium Java using the getShadowRoot() method and JavaScriptExecuter. Finding Shadow Root Using Selenium Java In this section of this tutorial on handling Shadow Root in Selenium Java, we will look into different ways to find Shadow Root elements in Selenium. The Shadow Root elements can not be directly located in the automated tests using Selenium WebDriver as we do for the normal DOM elements. The following strategies can be used to handle Shadow root in Selenium Java. Using getShadowRoot() method Using JavaScriptExecutor Before we begin discussing the code and writing the automated tests, let us first get some basic information regarding the web page under test and also the tools used for test automation. Programming language - Java 17 Web automation tool - Selenium WebDriver 4.10.0 Build tool - Maven Test runner - TestNG Cloud-based testing platform - LambdaTest Project Setup Create a new Maven project and update the required dependencies for Selenium WebDriver and TestNG in the pom.xml. The following is the screenshot of pom.xml Page Object Model (POM) in Selenium Java has been used in this project as it helps maintain the project by improving test case maintenance and removing code duplication. In this section of the tutorial on handling Shadow Root in Selenium Java, we will demonstrate how to find the Shadow root element of the Menu Shadow DOM Demo page using Selenium WebDriver. With the help of the test scenarios, code walkthroughs will be provided to help understand how to locate and interact with the Shadow root elements. Let’s use the getShadowRoot() method to locate the Shadow root in Selenium Java. Locating Shadow Root in Selenium Java Using getShadowRoot() Method The getShadowRoot() method was introduced with the release of Selenium WebDriver 4.0.0 and above. The getShadowRoot() method returns a representation of an element’s Shadow root for accessing the Shadow DOM of a web component. NoSuchElementException() is thrown by this method if the Shadow DOM element is not found. Test Scenario 1 Navigate to the Menu Shadow DOM Demo page. Locate the File menu within the Shadow DOM. Perform assertion by getting the text of the menu name File. Implementation In Test Scenario 1, we need to navigate to the demo page, locate the File menu, and perform assertion by getting the text of the menu File. Here, we need to locate the File menu first and use the getShadowRoot() method in Selenium WebDriver to locate it. 
The following method available in the HomePage class will locate the File menu. HTML public WebElement fileMenu() { final WebElement shadowHost = getDriver().findElement(By.cssSelector(".smart-ui-component")); final SearchContext shadowRoot = shadowHost.getShadowRoot(); return shadowRoot.findElement(By.cssSelector(".smart-element .smart-menu-main-container .smart-element")); } In the fileMenu() method, the first web element we locate is the shadowHost using the classname smart-ui-component. This is required as it is the element just before the Shadow DOM. Next, we search for the Shadow root in the DOM next to it. The #shadow-root(open) is next to the <smart-ui-menu checkboxes="" class="smart-ui-component"> </smart-ui-menu> HTML element. So, we will have to locate the Shadow Root element using this Shadow Host. The SearchContext interface is used here to return the Shadow Root element using the getShadowRoot() method. getShadowRoot() method is a part of the WebElement interface, which is implemented in the RemoteWebElement class of Selenium WebDriver. Finally, the Shadow root element for the File menu is located using the CSS Selector .smart-element .smart-menu-main-container .smart-element. Now, to perform assertion, we need to get the text of the menu, i.e., File. As seen in the screenshot above, the text can be retrieved using the attribute label. The following method will provide us with the text. HTML public String getFileMenuText() { return fileMenu().getAttribute("label"); } We have located the File menu and the text of the menu; it is now time to write the test and perform the assertion. HTML @Test public void testFileMenuShadowRootElement() { getDriver().get("https://www.htmlelements.com/demos/menu/shadow-dom/index.htm"); final HomePage homePage = new HomePage(); assertEquals(homePage.getFileMenuText(), "File"); } It is very simple to understand that this test will navigate to the Menu Shadow DOM Demo page. From the website's home page, it will check for the File menu text and assert it with the expected text File. Test Scenario 2 Click on the File menu that is within the Shadow DOM. Locate the New option. Perform assertion to check that the text of the option is New. Implementation: In Test Scenario 2, we need to click on the File menu. After that, get the text of the New option displayed in the menu and assert its text. In Test Scenario 1, we have already located the File menu. Here, we will open the File menu by clicking on it and getting the text of the New option. From the screenshot above, we can use the following CSS Selector to locate the New option. The CSS Selector .smart-menu-drop-down div smart-menu-item.smart-element can be used to locate the New option and its attribute label to get its text. The following method will help us locate the New option and get its text. HTML public String getNewMenuText() { openFileMenu(); return fileMenu().findElement(By.cssSelector(".smart-menu-drop-down div smart-menu-item.smart-element")) .getAttribute("label"); } The getNewMenuText() method will open the File menu, search and locate the New option, and return the attribute label. Let’s write the test and perform the assertion for the text in the New option. HTML @Test public void testNewMenuShadowRootElement() { getDriver().get("https://www.htmlelements.com/demos/menu/shadow-dom/index.htm"); final HomePage homePage = new HomePage(); assertEquals(homePage.getNewMenuText(), "New"); } In this test, we first navigate to the Menu Shadow DOM Demo page. 
From the home page of the website, get the text of the New option and perform assertion on the menu text. In the next section, to find Shadow Root in Selenium Java, we will use the JavaScriptExecutor strategy. Locating Shadow Root in Selenium Java Using JavaScriptExecutor Another way to find and locate Shadow Root in Selenium Java is by using JavaScriptExecutor. If you have not upgraded to Selenium 4, this approach will be useful as it works in all the latest and older versions. Using JavaScriptExecutor to handle Shadow Root in Selenium Java is pretty simple. We need to follow the same steps as we did while working with the getShadowRoot() method. First, find the Shadow host element and then expand and locate the Shadow Root elements using it. Test Scenario 3 Navigate to the Menu Shadow DOM Demo page. Locate the Edit menu that is within the Shadow DOM. Perform assertion by getting the text of the Edit menu. Implementation: In this test scenario, we will locate the Shadow Root element for the Edit menu and perform assertion by getting its text Edit. As we are using JavaScriptExecutor here, the expandRootElement() method is created to expand and locate the Shadow Root element. HTML public SearchContext expandRootElement(final WebElement element) { return (SearchContext) ((JavascriptExecutor) getDriver()).executeScript( "return arguments[0].shadowRoot", element); } The above method will execute the script return arguments[0].shadowRoot on the WebElement provided in the method parameter and get the Shadow Root. Next, let’s locate the Edit menu and get its text. The editMenu() method returns the WebElement for the Edit menu. To get the Shadow Root element, the expandRootElement() method is used where the shadowHost WebElement is passed as a parameter. public WebElement editMenu() { final WebElement shadowHost = getDriver().findElement(By.cssSelector(".smart-ui-component")); final SearchContext shadowRoot = expandRootElement(shadowHost); return shadowRoot.findElement(By.cssSelector(".smart-element .smart-menu-main-container smart-menu-items-group:nth-child(2)")); } Once the Shadow Root element is located, we search for the Edit menu using the CSS Selector and return the WebElement. The attribute label is used to get the text Edit from the menu name. The following method, editMenuText(), returns the text in String format. HTML public String getEditMenuText() { return editMenu().getAttribute("label"); } Let’s write the test and complete the scenario by performing an assertion. HTML @Test public void testEditMenuShadowRootElement() { getDriver().get("https://www.htmlelements.com/demos/menu/shadow-dom/index.htm"); final HomePage homePage = new HomePage(); assertEquals(homePage.getEditMenuText(), "Edit"); } This test completes the scenario where we navigate to the Menu Shadow DOM Demo page, locate the Edit menu, and perform assertion by verifying the Edit text of the menu name. Test Scenario 4 Click on the Edit menu that is within the Shadow DOM. Locate the Undo option. Perform assertion to check that the text of the menu is Undo. Implementation: In this test scenario, we will click the Edit menu to open the dropdown. In the dropdown, we locate the Undo option and perform an assertion to verify its text Undo. We will reuse the existing editMenu() method created in Test Scenario 3 to locate the Edit menu’s WebElement using the expandRootElement() method, which locates the Shadow Root element using JavaScriptExecutor. The openEditMenu() method will click on the Edit menu and open the dropdown. 
Java public void openEditMenu() { editMenu().click(); } The getUndoMenuText() method will locate the Undo option and return the text Undo in String format. Java public String getUndoMenuText() { openEditMenu(); return editMenu().findElement(By.cssSelector(".smart-menu-drop-down div smart-menu-item.smart-element")) .getAttribute("label"); } Now that the WebElements are located, let’s proceed and write the final test to complete Test Scenario 4. Java @Test public void testUndoMenuShadowRootElement() { getDriver().get("https://www.htmlelements.com/demos/menu/shadow-dom/index.htm"); final HomePage homePage = new HomePage(); assertEquals(homePage.getUndoMenuText(), "Undo"); } In this test, we navigate to the Menu Shadow DOM Demo page. From the home page, we click on the Edit menu and assert the text of the Undo option. With this test, we have completed the code implementation of all four scenarios. Minor refactoring was done in the tests, since the driver.get() statement was repeated in every test. I moved that statement into a navigateToWebsite() method annotated with TestNG's @BeforeClass annotation, so it runs once before any test in this class is executed. Java public class ShadowRootTests extends BaseTest { @BeforeClass public void navigateToWebsite() { getDriver().get("https://www.htmlelements.com/demos/menu/shadow-dom/index.htm"); } @Test public void testFileMenuShadowRootElement() { final HomePage homePage = new HomePage(); assertEquals(homePage.getFileMenuText(), "File"); } @Test public void testNewMenuShadowRootElement() { final HomePage homePage = new HomePage(); assertEquals(homePage.getNewMenuText(), "New"); } @Test public void testEditMenuShadowRootElement() { final HomePage homePage = new HomePage(); assertEquals(homePage.getEditMenuText(), "Edit"); } @Test public void testUndoMenuShadowRootElement() { final HomePage homePage = new HomePage(); assertEquals(homePage.getUndoMenuText(), "Undo"); } } Java package pages.htmlelements; import org.openqa.selenium.By; import org.openqa.selenium.JavascriptExecutor; import org.openqa.selenium.SearchContext; import org.openqa.selenium.WebElement; import static setup.DriverManager.getDriver; public class HomePage { public WebElement fileMenu() { final WebElement shadowHost = getDriver().findElement(By.cssSelector(".smart-ui-component")); final SearchContext shadowRoot = shadowHost.getShadowRoot(); return shadowRoot.findElement(By.cssSelector(".smart-element .smart-menu-main-container .smart-element")); } public String getFileMenuText() { return fileMenu().getAttribute("label"); } public void openFileMenu() { fileMenu().click(); } public String getNewMenuText() { openFileMenu(); return fileMenu().findElement(By.cssSelector(".smart-menu-drop-down div smart-menu-item.smart-element")) .getAttribute("label"); } public SearchContext expandRootElement(final WebElement element) { return (SearchContext) ((JavascriptExecutor) getDriver()).executeScript( "return arguments[0].shadowRoot", element); } public WebElement editMenu() { final WebElement shadowHost = getDriver().findElement(By.cssSelector(".smart-ui-component")); final SearchContext shadowRoot = expandRootElement(shadowHost); return shadowRoot.findElement(By.cssSelector(".smart-element .smart-menu-main-container smart-menu-items-group:nth-child(2)")); } public String getEditMenuText() { return editMenu().getAttribute("label"); } public void openEditMenu() { editMenu().click(); } public String getUndoMenuText() { openEditMenu(); return editMenu().findElement(By.cssSelector(".smart-menu-drop-down div smart-menu-item.smart-element")) .getAttribute("label"); } } Test Execution There are two ways to execute the tests: Using TestNG Using Maven Test Execution Using TestNG We need to have the testng.xml file in the project's root folder. The following test blocks are required in the testng.xml file to run all our tests. The tests will be run on the LambdaTest cloud grid on the Chrome browser. We need to add the following values to run the tests on the LambdaTest cloud grid: LambdaTest Username LambdaTest Access Key These values can be passed using the Run Configuration window in the IDE as -DLT_USERNAME=<LambdaTest Username> -DLT_ACCESSKEY=<LambdaTest AccessKey>. To run this testng.xml file, right-click on it and select the option Run '…/testng.xml'. Here is a screenshot of the tests run using IntelliJ IDEA: Test Execution Using Maven To execute the tests using Maven, open the terminal, navigate to the root folder of the project, and run the following command: Shell mvn clean test -DLT_USERNAME=<LambdaTest Username> -DLT_ACCESSKEY=<LambdaTest AccessKey> Here is a screenshot of the tests run using the terminal: Once the tests pass, you can view the test execution results on the LambdaTest Web Automation Dashboard, which provides all the details of the test execution. Conclusion In this tutorial, we explored how to handle Shadow Root in Selenium Java. We also discussed the DOM, Shadow Tree, and Shadow Root elements. Further, to automate Shadow Root elements, we used the getShadowRoot() method, which was introduced in Selenium WebDriver 4. JavaScriptExecutor can also be used to handle Shadow Root in Selenium Java: if you are working with a Selenium WebDriver version below 4, JavaScriptExecutor is an ideal solution. With the Selenium 4 release, however, the getShadowRoot() method is available and is much easier to use than JavaScriptExecutor.
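One practical caveat worth adding: the SearchContext returned by getShadowRoot() typically accepts only CSS selectors. XPath lookups against a shadow root are generally rejected by Chromium-based drivers, which is why all the locators in this tutorial use By.cssSelector. A minimal sketch (the selectors are illustrative and reuse the helpers from the HomePage class above):

Java
final WebElement shadowHost = getDriver().findElement(By.cssSelector(".smart-ui-component"));
final SearchContext shadowRoot = shadowHost.getShadowRoot();

// CSS selectors work against a shadow root...
final WebElement menu = shadowRoot.findElement(By.cssSelector(".smart-element"));

// ...but an XPath lookup such as the following is typically rejected by the driver:
// shadowRoot.findElement(By.xpath("//div")); // usually throws an InvalidArgumentException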

By Faisal Khatri
Hibernate Validator vs Regex vs Manual Validation: Which One Is Faster?

While I was coding for a performance back-end competition, I tried a couple of tricks and wondered whether there was a faster validator for Java applications, so I started a sample application. I used a very simple scenario: just validate the user's email. Controller With Hibernate Validator Hibernate Validator needs an object to put its rules on, so we have this: Java public record User( @NotNull @Email String email ){} This is used in the HibernateValidatorController class, which uses the jakarta.validation.Validator (an interface implemented by Hibernate Validator): Java @RestController @Validated public class HibernateValidatorController { @Autowired private Validator validator; @GetMapping("/validate-hibernate") public ResponseEntity<String> validateEmail(@RequestParam String email) { Using the validate method, we can check if this user's email is valid and return a proper HTTP response. Java var user = new User(email); var violations = validator.validate(user); if (violations.isEmpty()) { return ResponseEntity.ok("Valid email: 200 OK"); } else { var violationMessages = new StringBuilder(); for (ConstraintViolation<User> violation : violations) { violationMessages.append(violation.getMessage()).append("\n"); } return ResponseEntity.status(HttpStatus.BAD_REQUEST) .body("Invalid email: 400 Bad Request\n" + violationMessages.toString()); } Controller With Regular Expression For validation with regex, we need just the email regex and a method to validate: Java static final String EMAIL_REGEX = "^[A-Za-z0-9+_.-]+@(.+)$"; boolean isValid(String email) { return email != null && email.matches(EMAIL_REGEX); } The RegexController class just gets an email from the request and uses the isValid method to validate it. Java @GetMapping("/validate-regex") public ResponseEntity<String> validateEmail(@RequestParam String email) { if (isValid(email)) { return ResponseEntity.ok("Valid email: 200 OK"); } else { return ResponseEntity.status(HttpStatus.BAD_REQUEST).body("Invalid email: 400 Bad Request"); } } Controller With Manual Validation We won't use any frameworks or libraries to validate, just plain old String methods: Java boolean isValid(String email) { if (email == null) return false; int atIndex = email.indexOf("@"); int dotIndex = email.lastIndexOf("."); return atIndex > 0 && dotIndex > atIndex + 1 && dotIndex < email.length() - 1; } The ProgrammaticController class just gets an email from the request and uses the isValid method to validate it. Java @GetMapping("/validate-programmatic") public ResponseEntity<String> validateEmail(@RequestParam String email) { if (isValid(email)) { return ResponseEntity.ok("Valid email: 200 OK"); } else { return ResponseEntity.status(HttpStatus.BAD_REQUEST).body("Invalid email: 400 Bad Request"); } } Very Simple Stress Test We use Apache JMeter to test all three APIs. The simulation runs 1,000 concurrent users, each looping 100 times and sending a valid email on each request. Running it on my desktop machine produced similar results for all APIs, but the winner is Hibernate Validator.

| API | avg (ms) | 99% (ms) | max (ms) | TPS |
|-------------------------------|----------|----------|----------|-------|
| Regex API Thread Group | 18 | 86 | 254 | 17784 |
| Programmatic API Thread Group | 13 | 67 | 169 | 19197 |
| Hibernate API Thread Group | 10 | 59 | 246 | 19960 |

Conclusion Before this test, I thought that my own code should perform way better than somebody else's code, but actually, Hibernate Validator was the best option for my test.
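One tweak that might narrow the gap for the regex variant: String.matches() recompiles the regular expression on every call. Precompiling the pattern once, as in the sketch below, avoids that repeated work. I have not benchmarked this variant, so treat it as an assumption worth verifying with the same JMeter setup (the RegexValidator class name is purely illustrative):

Java
import java.util.regex.Pattern;

public class RegexValidator {

    // Compiled once, instead of on every call as String.matches() does internally
    static final Pattern EMAIL_PATTERN = Pattern.compile("^[A-Za-z0-9+_.-]+@(.+)$");

    static boolean isValid(String email) {
        return email != null && EMAIL_PATTERN.matcher(email).matches();
    }
}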
You can also run this test yourself and check out the source code on my GitHub.

By Fernando Boaglio
Generate Object Mapping Using MapStruct

Do you need to write a lot of mapping code in order to map between different object models? MapStruct simplifies this task by generating mapping code. In this blog, you will learn some basic features of MapStruct. Enjoy! Introduction In a multi-layered application, one often has to write boilerplate code in order to map between different object models. This can be a tedious and error-prone task. MapStruct simplifies this task by generating the mapping code for you. It generates code at compile time and aims to generate code as if it were written by you. This blog will only give you a basic overview of how MapStruct can aid you, but it will be sufficient to give you a good impression of the problems it can solve for you. If you are using IntelliJ as an IDE, you can also install the MapStruct Support Plugin, which will assist you in using MapStruct. Sources used in this blog can be found on GitHub. Prerequisites Prerequisites for this blog are: Basic Java knowledge, Java 21 is used in this blog Basic Spring Boot knowledge Basic Application The application used in this blog is a basic Spring Boot project. By means of a REST API, a customer can be created and retrieved. In order to keep the API specification and source code in line with each other, you will use the openapi-generator-maven-plugin. First, you write the OpenAPI specification, and the plugin will generate the source code for you based on that specification. The OpenAPI specification consists of two endpoints: one for creating a customer (POST) and one for retrieving the customer (GET). The customer consists of a name and some address data. YAML Customer: type: object properties: firstName: type: string description: First name of the customer minLength: 1 maxLength: 20 lastName: type: string description: Last name of the customer minLength: 1 maxLength: 20 street: type: string description: Street of the customer minLength: 1 maxLength: 20 number: type: string description: House number of the customer minLength: 1 maxLength: 5 postalCode: type: string description: Postal code of the customer minLength: 1 maxLength: 5 city: type: string description: City of the customer minLength: 1 maxLength: 20 The CustomerController implements the generated Controller interface. The OpenAPI Maven plugin makes use of its own model. In order to transfer the data to the CustomerService, DTOs are created. These are Java records. The CustomerDto is: Java public record CustomerDto(Long id, String firstName, String lastName, AddressDto address) { } The AddressDto is: Java public record AddressDto(String street, String houseNumber, String zipcode, String city) { } The domain itself is used within the Service and is a basic Java POJO. The Customer domain is: Java public class Customer { private Long customerId; private String firstName; private String lastName; private Address address; // Getters and setters left out for brevity } The Address domain is: Java public class Address { private String street; private int houseNumber; private String zipcode; private String city; // Getters and setters left out for brevity } In order to connect everything together, you will need to write mapper code for: Mapping between the API model and the DTO Mapping between the DTO and the domain Mapping Between DTO and Domain Add Dependency In order to make use of MapStruct, it suffices to add the MapStruct Maven dependency and some configuration to the Maven Compiler plugin.
XML <dependency> <groupId>org.mapstruct</groupId> <artifactId>mapstruct</artifactId> <version>${org.mapstruct.version}</version> </dependency> ... <build> <plugins> ... <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.8.1</version> <configuration> <annotationProcessorPaths> <path> <groupId>org.mapstruct</groupId> <artifactId>mapstruct-processor</artifactId> <version>${org.mapstruct.version}</version> </path> </annotationProcessorPaths> </configuration> </plugin> ... </plugins> </build> Create Mapper The CustomerDto and AddressDto records and the Customer and Address domain classes do not differ very much from each other. CustomerDto has an id, while Customer has a customerId. AddressDto has a houseNumber of type String, while Address has a houseNumber of type int. In order to create a mapper for this using MapStruct, you create an interface CustomerMapper, annotate it with @Mapper, and specify the component model with the value spring. Doing this ensures that the generated mapper is a singleton-scoped Spring bean that can be retrieved via @Autowired. Because both models are quite similar to each other, MapStruct will be able to generate most of the code by itself. Because the customer ID has a different name in each model, you need to help MapStruct a bit: using the @Mapping annotation, you specify the source and target of the mapping. For the type conversion, you do not need to do anything; MapStruct sorts this out based on implicit type conversions. The corresponding mapper code is the following: Java @Mapper(componentModel = "spring") public interface CustomerMapper { @Mapping(source = "customerId", target = "id") CustomerDto transformToCustomerDto(Customer customer); @Mapping(source = "id", target = "customerId") Customer transformToCustomer(CustomerDto customerDto); } Generate the code: Shell $ mvn clean compile In the target/generated-sources/annotations directory, you can find the generated CustomerMapperImpl class.
Java @Generated( value = "org.mapstruct.ap.MappingProcessor", date = "2024-04-21T13:38:51+0200", comments = "version: 1.5.5.Final, compiler: javac, environment: Java 21 (Eclipse Adoptium)" ) @Component public class CustomerMapperImpl implements CustomerMapper { @Override public CustomerDto transformToCustomerDto(Customer customer) { if ( customer == null ) { return null; } Long id = null; String firstName = null; String lastName = null; AddressDto address = null; id = customer.getCustomerId(); firstName = customer.getFirstName(); lastName = customer.getLastName(); address = addressToAddressDto( customer.getAddress() ); CustomerDto customerDto = new CustomerDto( id, firstName, lastName, address ); return customerDto; } @Override public Customer transformToCustomer(CustomerDto customerDto) { if ( customerDto == null ) { return null; } Customer customer = new Customer(); customer.setCustomerId( customerDto.id() ); customer.setFirstName( customerDto.firstName() ); customer.setLastName( customerDto.lastName() ); customer.setAddress( addressDtoToAddress( customerDto.address() ) ); return customer; } protected AddressDto addressToAddressDto(Address address) { if ( address == null ) { return null; } String street = null; String houseNumber = null; String zipcode = null; String city = null; street = address.getStreet(); houseNumber = String.valueOf( address.getHouseNumber() ); zipcode = address.getZipcode(); city = address.getCity(); AddressDto addressDto = new AddressDto( street, houseNumber, zipcode, city ); return addressDto; } protected Address addressDtoToAddress(AddressDto addressDto) { if ( addressDto == null ) { return null; } Address address = new Address(); address.setStreet( addressDto.street() ); if ( addressDto.houseNumber() != null ) { address.setHouseNumber( Integer.parseInt( addressDto.houseNumber() ) ); } address.setZipcode( addressDto.zipcode() ); address.setCity( addressDto.city() ); return address; } } As you can see, the code is very readable, and it takes the mapping between Customer and Address into account. Create Service The Service creates a domain Customer, taking the CustomerDto as input. The customerMapper is injected into the Service and is used for converting between the two models. The other way around, when a customer is retrieved, the mapper converts the domain Customer to a CustomerDto. In the Service, the customers are persisted in a basic map in order to keep things simple. Java @Service public class CustomerService { private final CustomerMapper customerMapper; private final HashMap<Long, Customer> customers = new HashMap<>(); private Long index = 0L; CustomerService(CustomerMapper customerMapper) { this.customerMapper = customerMapper; } public CustomerDto createCustomer(CustomerDto customerDto) { Customer customer = customerMapper.transformToCustomer(customerDto); customer.setCustomerId(index); customers.put(index, customer); index++; return customerMapper.transformToCustomerDto(customer); } public CustomerDto getCustomer(Long customerId) { if (customers.containsKey(customerId)) { return customerMapper.transformToCustomerDto(customers.get(customerId)); } else { return null; } } } Test Mapper The mapper can easily be tested by using the generated CustomerMapperImpl class and verifying whether the mappings are executed successfully.
Java class CustomerMapperTest { @Test void givenCustomer_whenMaps_thenCustomerDto() { CustomerMapperImpl customerMapper = new CustomerMapperImpl(); Customer customer = new Customer(); customer.setCustomerId(2L); customer.setFirstName("John"); customer.setLastName("Doe"); Address address = new Address(); address.setStreet("street"); address.setHouseNumber(42); address.setZipcode("zipcode"); address.setCity("city"); customer.setAddress(address); CustomerDto customerDto = customerMapper.transformToCustomerDto(customer); assertThat( customerDto ).isNotNull(); assertThat(customerDto.id()).isEqualTo(customer.getCustomerId()); assertThat(customerDto.firstName()).isEqualTo(customer.getFirstName()); assertThat(customerDto.lastName()).isEqualTo(customer.getLastName()); AddressDto addressDto = customerDto.address(); assertThat(addressDto.street()).isEqualTo(address.getStreet()); assertThat(addressDto.houseNumber()).isEqualTo(String.valueOf(address.getHouseNumber())); assertThat(addressDto.zipcode()).isEqualTo(address.getZipcode()); assertThat(addressDto.city()).isEqualTo(address.getCity()); } @Test void givenCustomerDto_whenMaps_thenCustomer() { CustomerMapperImpl customerMapper = new CustomerMapperImpl(); AddressDto addressDto = new AddressDto("street", "42", "zipcode", "city"); CustomerDto customerDto = new CustomerDto(2L, "John", "Doe", addressDto); Customer customer = customerMapper.transformToCustomer(customerDto); assertThat( customer ).isNotNull(); assertThat(customer.getCustomerId()).isEqualTo(customerDto.id()); assertThat(customer.getFirstName()).isEqualTo(customerDto.firstName()); assertThat(customer.getLastName()).isEqualTo(customerDto.lastName()); Address address = customer.getAddress(); assertThat(address.getStreet()).isEqualTo(addressDto.street()); assertThat(address.getHouseNumber()).isEqualTo(Integer.valueOf(addressDto.houseNumber())); assertThat(address.getZipcode()).isEqualTo(addressDto.zipcode()); assertThat(address.getCity()).isEqualTo(addressDto.city()); } } Mapping Between API and DTO Create Mapper The API model looks a bit different from the CustomerDto because it has no Address object, and number and postalCode have different names in the CustomerDto. Java public class Customer { private String firstName; private String lastName; private String street; private String number; private String postalCode; private String city; // Getters and setters left out for brevity } In order to create a mapper, you need to add a few more @Mapping annotations, just like you did before for the customer ID. Java @Mapper(componentModel = "spring") public interface CustomerPortMapper { @Mapping(source = "street", target = "address.street") @Mapping(source = "number", target = "address.houseNumber") @Mapping(source = "postalCode", target = "address.zipcode") @Mapping(source = "city", target = "address.city") CustomerDto transformToCustomerDto(Customer customerApi); @Mapping(source = "id", target = "customerId") @Mapping(source = "address.street", target = "street") @Mapping(source = "address.houseNumber", target = "number") @Mapping(source = "address.zipcode", target = "postalCode") @Mapping(source = "address.city", target = "city") CustomerFullData transformToCustomerApi(CustomerDto customerDto); } Again, the generated CustomerPortMapperImpl class can be found in the target/generated-sources/annotations directory after invoking the Maven compile target. Create Controller The mapper is injected into the Controller, where the corresponding mapping methods can easily be used.
Java @RestController class CustomerController implements CustomerApi { private final CustomerPortMapper customerPortMapper; private final CustomerService customerService; CustomerController(CustomerPortMapper customerPortMapper, CustomerService customerService) { this.customerPortMapper = customerPortMapper; this.customerService = customerService; } @Override public ResponseEntity<CustomerFullData> createCustomer(Customer customerApi) { CustomerDto customerDtoIn = customerPortMapper.transformToCustomerDto(customerApi); CustomerDto customerDtoOut = customerService.createCustomer(customerDtoIn); return ResponseEntity.ok(customerPortMapper.transformToCustomerApi(customerDtoOut)); } @Override public ResponseEntity<CustomerFullData> getCustomer(Long customerId) { CustomerDto customerDtoOut = customerService.getCustomer(customerId); return ResponseEntity.ok(customerPortMapper.transformToCustomerApi(customerDtoOut)); } } Test Mapper A unit test is created in a similar way to the one for the Service and can be viewed here. In order to test the complete application, an integration test is created for creating a customer. Java @SpringBootTest @AutoConfigureMockMvc class CustomerControllerIT { @Autowired private MockMvc mockMvc; @Test void whenCreateCustomer_thenReturnOk() throws Exception { String body = """ { "firstName": "John", "lastName": "Doe", "street": "street", "number": "42", "postalCode": "1234", "city": "city" } """; mockMvc.perform(post("/customer") .contentType("application/json") .content(body)) .andExpect(status().isOk()) .andExpect(jsonPath("firstName", equalTo("John"))) .andExpect(jsonPath("lastName", equalTo("Doe"))) .andExpect(jsonPath("customerId", equalTo(0))) .andExpect(jsonPath("street", equalTo("street"))) .andExpect(jsonPath("number", equalTo("42"))) .andExpect(jsonPath("postalCode", equalTo("1234"))) .andExpect(jsonPath("city", equalTo("city"))); } } Conclusion MapStruct is an easy-to-use library for mapping between models. If the basic mapping is not sufficient, you are even able to create your own custom mapping logic (which is not demonstrated in this blog). It is advised to read the official documentation to get a comprehensive list of all available features.
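As a quick, hedged taste of that custom mapping capability (beyond what this blog's example project demonstrates; the capitalize() helper is purely illustrative), a named method can be plugged into a mapping via qualifiedByName:

Java
import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.Named;

@Mapper(componentModel = "spring")
public interface CustomerMapper {

    @Mapping(source = "customerId", target = "id")
    @Mapping(source = "firstName", target = "firstName", qualifiedByName = "capitalize")
    CustomerDto transformToCustomerDto(Customer customer);

    // Custom conversion logic, picked up by MapStruct via qualifiedByName
    @Named("capitalize")
    static String capitalize(String value) {
        return value == null || value.isEmpty()
                ? value
                : Character.toUpperCase(value.charAt(0)) + value.substring(1);
    }
}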

By Gunter Rotsaert DZone Core CORE
GenAI in Java With Merlinite, Quarkus, and Podman Desktop AI Lab

GenAI is everywhere these days, and I feel it is particularly hard for developers to navigate the almost endless possibilities and catch up with all the learning that seems to be upon us. It's time for me to get back to writing and make sure nobody feels left behind in this craziness. GenAI Use Cases For Developers As developers, we basically have to deal with two main aspects of artificial intelligence. The first one is how to use the enhanced tooling available to us now. There are plenty of IDE plug-ins, web-based tools, and chatbots that promise to help us be more efficient with coding. Bad news first: this article doesn't cover any of that. You're on your own figuring out what suits you and helps you best. I am personally following what Stephan Janssen is doing with his Devoxx Genie plug-in, in case you need some inspiration. The second big aspect, on the other hand, is indeed something I want to write about in this article: how to incorporate "intelligence" into our own applications, and where to even start. This is particularly challenging if you are completely new to the topic and haven't had a chance to follow what has been going on for a little over a year now. Finding, Tuning, and Running Suitable Models, Locally I will spare you an introduction about what AI models are and what types are available. Honestly, I've been digging into the basics quite a bit for a while and have to admit that it gets data-science-y very quickly. Especially if you see yourself more on the consuming side of artificial intelligence, there is probably very little value in deep diving into the inner workings. The following is a very condensed and opinionated view of models and how I personally think about them. If you want to learn all the basics, feel free to follow your passion. There are countless models out there already, and they all have a specific set of training data that theoretically makes them a fit for solving business challenges. Getting a model to this point is the basic step, and it is called "training" (probably one of the most expensive things you ever planned to do in your whole career). I like to call the outcome state of an initially trained model a "foundation." Those foundation models (e.g., GPT-n, BERT, LLAMA-n, etc.) have been trained on a broad data set, making them a mediocre fit for pretty much everything. And they come with some disadvantages: they don't know much about your specific problem, they probably don't disclose what kind of data has been used to train them, and they might just make things up, among other downsides. You might want to look for smaller and more suitable models that you can actually influence, at least partly. Instead of "influence," I probably should say "infuse" or "tune" with additional context. There are various ways to "tune" a model (compare the image below), and the approach depends on your goals. Common Model Tuning Techniques On the very left side, you see what is commonly known as "prompt engineering," which is kind of an "on the fly" model tuning, and on the very upper right side, you see full "alignment tuning." While alignment tuning actually changes model weights and behavior, prompt tuning does not. You may have already guessed that every single one of these steps has not only a quality difference but also a cost difference attached to it. I am going to talk mostly about a version of prompt tuning in this article, but I also wanted to get you excited about alignment tuning.
I will use the Merlinite-7B model further down in this article. The main reason is that it is an open-source model with a community attached to it, so you can help contribute knowledge and grow transparent training data. Make sure to check out InstructLab and dive into LAB tuning a little more. Well, now that I've picked a model, how do we run it locally? There are (again) plenty of alternatives and ways, and trying to research and understand the various formats and approaches left me exhausted quickly. My developer mind wanted to understand models like deployments with endpoints that I could access. Thankfully, this is slowly but surely becoming a reality. With various binary model formats in the wild, the GGUF format was introduced late last year by the team behind the most broadly used inference API right now (llama.cpp). When I talk about endpoints and APIs, I should at least introduce the term "inference." This is basically what data scientists call a query to a model. And it is called this because inferences are steps in reasoning, moving from premises to consequences. It basically is a beautiful way to explain how AI models work. Back to llama.cpp, which is written in C/C++ and serves as an inference engine for various model formats on a variety of hardware, locally and in the cloud: the llama.cpp web server provides an OpenAI-compatible API that can be used to serve local models and connect them to clients (thanks to the OpenAI folks for the MIT license on the API!). More on the API part later. Let's get this thing fired up locally. But wait, C/C++, you said? And GGUF? How do I? Well, easy: just use Podman Desktop AI Lab. It is an extension to the desktop client, released last year, which lets you work with models locally. It is a one-click installation if you have Podman Desktop running already, and all that is left to do is download a desired model via the Model Catalog. Podman Desktop AI Lab - Model Catalog When that is done, we need to create a Model Service and run it. Podman Desktop AI Lab: Create Model Service I've selected the Merlinite lab model that you learned about above. Clicking "create service" creates the llama.cpp-based inference server for you and packs everything up in a container to use locally. Going to the "Service Details," you are presented with a UI that gives you the API endpoint for the model and some pre-generated client code you can use in various languages and tools. Podman Desktop AI Lab: Merlinite Service Details Play around with curl and use the playground to start a chat with the model directly from Podman Desktop. In this blog post, though, we are going to dive deeper into how to integrate the model you just containerized and started into your application. Quarkus and LangChain4j Having a model running locally in a container is already pretty cool, but what we really want is to use it from our Java application - ideally from Quarkus, of course. I mentioned the OpenAI API above already, but we really do not want to handle direct API calls ourselves anymore. This is where LangChain4j comes into the picture. It provides a unified abstraction and various tools to integrate the shiny GenAI world into your Java applications. Additionally, Quarkus makes it even easier by providing a LangChain4j extension that does all the configuration and heavy lifting for you. Let's start our AI-infused Quarkus application.
I am assuming that you have the following: Roughly 15 minutes An IDE JDK 17+ installed with JAVA_HOME configured appropriately Apache Maven 3.9.6 Go into your projects folder or somewhere to bootstrap a simple Quarkus project with the following Maven command: Shell mvn io.quarkus.platform:quarkus-maven-plugin:3.10.1:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=get-ai-started \ -Dextensions='rest,quarkus-langchain4j-openai' This will create a folder called "get-ai-started" that you need to change into. Open the project with your favorite editor and delete everything from the src/test folder. Yeah, I know, but I really want this to be a super simple start. Next, create a Bot.java file in src/main/java/org/acme/ with the following content: Java package org.acme; import dev.langchain4j.service.SystemMessage; import dev.langchain4j.service.UserMessage; import io.quarkiverse.langchain4j.RegisterAiService; import jakarta.enterprise.context.SessionScoped; @RegisterAiService() @SessionScoped public interface Bot { String chat(@UserMessage String question); } Save and open the src/main/resources/application.properties file. Just add the following two lines: Properties files quarkus.langchain4j.openai.base-url=<MODEL_URL_FROM_PODMAN_SERVICE> quarkus.langchain4j.openai.timeout=120s Make sure to change <MODEL_URL_FROM_PODMAN_SERVICE> to the endpoint your Podman Desktop AI Lab shows you under Service Details. In the above example, it is http://localhost:64752/v1. Now open GreetingResource.java and change it to the following (note that @RegisterAiService belongs on the Bot interface, not on the resource class; the resource simply injects the Bot and delegates to it): Java package org.acme; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path("/hello") public class GreetingResource { private final Bot bot; public GreetingResource(Bot bot){ this.bot = bot; } @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return bot.chat("What model are you?"); } } You've now basically added the call to the model, and the /hello resource should respond with the answer to the hard-coded question "What model are you?". Start your application in Development Mode on the terminal: Shell mvn quarkus:dev And navigate your browser of choice to http://localhost:8080/hello. Assuming that your model container is still running in Podman Desktop, you will see the answer within a few seconds in your browser: "I am a text-based AI language model, trained by OpenAI. I am designed to assist users in various tasks and provide information based on my knowledge cutoff of September 2021." Quarkus makes interacting with the underlying LLM super simple. If you navigate to http://localhost:8080/q/dev-ui/io.quarkiverse.langchain4j.quarkus-langchain4j-core/chat, you can access the built-in chat functionality of the Quarkus LangChain4j integration. Quarkus Dev UI Screenshot: Chat Integration Wrapping Up That's it: as simple and straightforward as you can imagine. Feel free to play around further and let me know in the comments what you'd like to learn more about next.
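You may have noticed that Bot.java already imports dev.langchain4j.service.SystemMessage without using it. A natural next experiment is to give the model a persona through that annotation; here is a minimal sketch (the message text is just an example, not part of the original walkthrough):

Java
package org.acme;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
import jakarta.enterprise.context.SessionScoped;

@RegisterAiService()
@SessionScoped
public interface Bot {

    // The system message steers every exchange in the session
    @SystemMessage("You are a concise assistant. Answer in at most two sentences.")
    String chat(@UserMessage String question);
}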

By Markus Eisele
Javac and Java Katas, Part 1: Class Path

Here, I'd like to talk you through three Java katas, ranging from the simplest to the most complex. These exercises should help you gain experience working with JDK tools such as javac, java, and jar. By doing them, you'll get a good understanding of what goes on behind the scenes of your favorite IDE or build tools like Maven, Gradle, etc. None of this denies the benefits of an IDE. But to be truly skilled at your craft, understand your essential tools and don’t let them get rusty. - Gail Ollis, "Don’t hIDE Your Tools" Getting Started The source code can be found in the GitHub repository. All commands in the exercises below are executed inside a Docker container to avoid any particularities related to a specific environment. Thus, to get started, clone the repository and run the command below from its java-javac-kata folder: Shell docker run --rm -it --name java_kata -v .:/java-javac-kata --entrypoint /bin/bash maven:3.9.6-amazoncorretto-17-debian Kata 1: "Hello, World!" Warm Up In this kata, we will be dealing with a simple Java application without any third-party dependencies. Let's navigate to the /class-path-part/kata-one-hello-world-warm-up folder and have a look at the directory structure. Within this directory, we can see the Java project structure and two classes in the com.example.kata.one package. Compilation Shell javac -d ./target/classes $(find -name '*.java') The compiled Java classes should appear in the target/classes folder, as shown in the screenshot above. Try using the verbose option to see more details about the compilation process in the console output: Shell javac -verbose -d ./target/classes $(find -name '*.java') With that covered, let's jump into the execution part. Execution Shell java --class-path "./target/classes" com.example.kata.one.Main As a result, you should see Hello World! in your console. Try using the different -verbose:[class|gc|jni] options to get more details on the execution process: Shell java -verbose:class --class-path "./target/classes" com.example.kata.one.Main As an extra step, it's worth trying to remove classes or rename packages to see what happens during both the compilation and execution stages. This will give you a better understanding of which problems result in particular errors. Packaging Building Jar Shell jar --create --file ./target/hello-world-warm-up.jar -C target/classes/ . The built jar is placed in the target folder. Don't forget to use the verbose option as well to see more details: Shell jar --verbose --create --file ./target/hello-world-warm-up.jar -C target/classes/ . You can view the structure of the built jar using the following command: Shell jar -tf ./target/hello-world-warm-up.jar With that, let's proceed to run it: Shell java --class-path "./target/hello-world-warm-up.jar" com.example.kata.one.Main Building Executable Jar To build an executable jar, the main-class must be specified: Shell jar --create --file ./target/hello-world-warm-up.jar --main-class=com.example.kata.one.Main -C target/classes/ . It can then be run via the -jar option: Shell java -jar ./target/hello-world-warm-up.jar Kata 2: Third-Party Dependency In this kata, you will follow the same steps as in the previous one. The main difference is that our Hello World! application uses guava-30.1-jre.jar as a third-party dependency. Also, remember to use the verbose option to get more details. So, without further ado, let's get to the /class-path-part/kata-two-third-party-dependency folder and check out the directory's structure.
Compilation Shell javac --class-path "./lib/*" -d ./target/classes/ $(find -name '*.java') The class-path option is used to specify the path to the lib folder where our dependency is stored. Execution Shell java --class-path "./target/classes:./lib/*" com.example.kata.two.Main Packaging Building Jar Shell jar --create --file ./target/third-party-dependency.jar -C target/classes/ . And let us run it: Shell java --class-path "./target/third-party-dependency.jar:./lib/*" com.example.kata.two.Main Building Executable Jar Our first step here is to create a MANIFEST.MF file with the Class-Path specified: Shell echo 'Class-Path: ../lib/guava-30.1-jre.jar' > ./target/MANIFEST.MF Next up, we build a jar with the provided manifest option: Shell jar --create \ --file ./target/third-party-dependency.jar \ --main-class=com.example.kata.two.Main \ --manifest=./target/MANIFEST.MF \ -C target/classes/ . Finally, we execute it: Shell java -jar ./target/third-party-dependency.jar Building Fat Jar First of all, we need to unpack our guava-30.1-jre.jar into the ./target/classes/ folder (be patient, this can take some time): Shell cp lib/guava-30.1-jre.jar ./target/classes && \ cd ./target/classes && \ jar xf guava-30.1-jre.jar && \ rm ./guava-30.1-jre.jar && \ rm -r ./META-INF && \ cd ../../ With all the necessary classes in the ./target/classes folder, we can build our fat jar (again, be patient as this can take some time): Shell jar --create --file ./target/third-party-dependency-fat.jar --main-class=com.example.kata.two.Main -C target/classes/ . Now, we can run our built jar: Shell java -jar ./target/third-party-dependency-fat.jar Kata 3: Spring Boot Application Conquest In the /class-path-part/kata-three-spring-boot-app-conquest folder, you will find a Maven project for a simple Spring Boot application. The main goal here is to apply everything that we have learned so far to manage all its dependencies and run the application, including its test code. As a starting point, let's run the following command: Shell mvn clean package && \ find ./target/ -mindepth 1 ! -regex '^./target/lib\(/.*\)?' -delete This will leave only the source code and download all necessary dependencies into the ./target/lib folder. Compilation Shell javac --class-path "./target/lib/compile/*" -d ./target/classes/ $(find -P ./src/main/ -name '*.java') Execution Shell java --class-path "./target/classes:./target/lib/compile/*" com.example.kata.three.Main As an extra step for both compilation and execution, you can try specifying all necessary dependencies explicitly in the class-path. This will help you understand that not all artifacts in the ./target/lib/compile folder are needed to do that. Packaging Let's package our compiled code as a jar and try to run it. It won't be a Spring Boot jar because Spring Boot uses a non-standard approach to build fat jars, including its own class loader. See the documentation on The Executable Jar Format for more details. In this exercise, we will package our source code as we did before to demonstrate that everything can work in the same way with Spring Boot, too. Shell jar --create --file ./target/spring-boot-app-conquest.jar -C target/classes/ .
Now, let's run it to verify that it works: Shell java --class-path "./target/spring-boot-app-conquest.jar:./target/lib/compile/*" com.example.kata.three.Main Test Compilation Shell javac --class-path "./target/classes:./target/lib/test/*:./target/lib/compile/*" -d ./target/test-classes/ $(find -P ./src/test/ -name '*.java') Notice that this time we are searching for source files in the ./src/test/ directory, and both the application source code and test dependencies are added to the class-path. Test Execution To be able to run code via java, we need an entry point (a class with the main method). Traditionally, tests are run via a Maven plugin or by an IDE, which have their own launchers to make this process comfortable for developers. To demonstrate test execution, the junit-platform-console-standalone dependency, which includes the org.junit.platform.console.ConsoleLauncher with the main method, is added to our pom.xml. Its artifact can also be seen in the ./target/lib/test/* folder. Shell java --class-path "./target/classes:./target/test-classes:./target/lib/compile/*:./target/lib/test/*" \ org.junit.platform.console.ConsoleLauncher execute --scan-classpath --disable-ansi-colors Wrapping Up Gail's article "Don’t hIDE Your Tools," quoted at the very beginning of this article and taken from 97 Things Every Java Programmer Should Know by Kevlin Henney and Trisha Gee, inspired me to start thinking in this direction and eventually led to the creation of this post. Hopefully, by doing these katas and not just reading them, you have developed a better understanding of how the essential JDK tools work.
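While experimenting with these katas, a tiny throwaway class can show you exactly which class path the JVM was actually launched with. This is my own debugging aid, not part of the kata repository:

Java
public class ClassPathProbe {
    public static void main(String[] args) {
        // The effective class path the JVM was started with;
        // handy when chasing a NoClassDefFoundError during the exercises
        System.out.println(System.getProperty("java.class.path"));

        // The JDK version in use, to rule out environment mix-ups
        System.out.println(System.getProperty("java.version"));
    }
}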

By Maksim Kren
Singleton: 6 Ways To Write and Use in Java Programming

In Java programming, object creation or instantiation of a class is done with the "new" operator and a public constructor declared in the class, as below. Java Clazz clazz = new Clazz(); We can read the code snippet as follows: Clazz() is the default public constructor, called with the "new" operator to create or instantiate an object of the Clazz class, which is assigned to the variable clazz, whose type is Clazz. While creating a singleton, we have to ensure that only one single object is created, or only one instantiation of the class takes place. To ensure this, the following common prerequisites apply. All constructors need to be declared as "private" constructors. This prevents the creation of objects with the "new" operator outside the class. A private constant/variable holder for the singleton object is needed; i.e., a private static or private static final class variable needs to be declared. It holds the singleton object and acts as the single source of reference for it. By convention, the variable is named INSTANCE or instance. A static method to allow access to the singleton object by other objects is required. This static method is also called a static factory method, as it controls the creation of objects for the class. By convention, the method is named getInstance(). With this understanding, let us delve deeper into singletons. Following are the six ways one can create a singleton object for a class. 1. Static Eager Singleton Class When we have all the instance properties in hand, and we want only one object of a class that provides structure and behavior for a group of related properties, we can use the static eager singleton class. This is well-suited for application configuration and application properties. Java public class EagerSingleton { private static final EagerSingleton INSTANCE = new EagerSingleton(); private EagerSingleton() {} public static EagerSingleton getInstance() { return INSTANCE; } public static void main(String[] args) { EagerSingleton eagerSingleton = EagerSingleton.getInstance(); } } The singleton object is created while the class itself is loaded into the JVM and is assigned to the INSTANCE constant. getInstance() provides access to this constant. While compile-time dependencies on properties are good, sometimes run-time dependencies are required. In such a case, we can make use of a static block to instantiate the singleton. Java public class EagerSingleton { private static EagerSingleton instance; private EagerSingleton(){} // static block executed during Class loading static { try { instance = new EagerSingleton(); } catch (Exception e) { throw new RuntimeException("Exception occurred in creating EagerSingleton instance"); } } public static EagerSingleton getInstance() { return instance; } } The singleton object is created while the class itself is loaded into the JVM, as all static blocks are executed during loading. Access to the instance variable is provided by the getInstance() static method. 2. Dynamic Lazy Singleton Class The static eager singleton is best suited for application configuration and application properties. Consider, however, heterogeneous container creation, object pool creation, layer creation, facade creation, flyweight object creation, context preparation per request and session, etc.: they all require dynamic construction of a singleton object for better separation of concerns. In such cases, dynamic lazy singletons are required.
Java public class LazySingleton { private static LazySingleton instance; private LazySingleton(){} public static LazySingleton getInstance() { if (instance == null) { instance = new LazySingleton(); } return instance; } } The singleton object is created only when the getInstance() method is called. Unlike the static eager singleton class, this class is not thread-safe. Java public class LazySingleton { private static LazySingleton instance; private LazySingleton(){} public static synchronized LazySingleton getInstance() { if (instance == null) { instance = new LazySingleton(); } return instance; } } The getInstance() method needs to be synchronized to make singleton instantiation thread-safe. 3. Dynamic Lazy Improved Singleton Class Java public class LazySingleton { private static volatile LazySingleton instance; private LazySingleton(){} public static LazySingleton getInstance() { if (instance == null) { synchronized (LazySingleton.class) { if (instance == null) { instance = new LazySingleton(); } } } return instance; } } Instead of locking the entire getInstance() method, we can lock only the block with double-checking (double-checked locking) to improve performance and reduce thread contention. Note that the instance holder must be declared volatile for double-checked locking to be safe under the Java memory model. Java public class EagerAndLazySingleton { private EagerAndLazySingleton(){} private static class SingletonHelper { private static final EagerAndLazySingleton INSTANCE = new EagerAndLazySingleton(); } public static EagerAndLazySingleton getInstance() { return SingletonHelper.INSTANCE; } } The singleton object is created only when the getInstance() method is called. It is a memory-safe, thread-safe singleton and is lazily loaded. It is the most widely used and recommended approach. Despite these performance and safety improvements, the objective of creating just one object per class is still challenged by memory visibility, reflection, and serialization in Java. Memory visibility: In a multithreaded environment, reordering of reads and writes across threads can occur on a referenced variable, and a dirty read of the object can happen at any time if the variable is not declared volatile. Reflection: With reflection, the private constructor can be made accessible and a new instance can be created. Serialization: A serialized instance of the object can be used to create another instance of the same class. All of these affect both static and dynamic singletons. In order to overcome such challenges, we need to declare the instance holder as volatile and, for serializable singletons, implement readResolve() (along with overriding equals() and hashCode() inherited from Object where identity comparisons matter) so that deserialization does not create a second instance. 4. Singleton With Enum The issues with memory safety, reflection, and serialization can be avoided if enums are used for a static eager singleton. Java public enum EnumSingleton { INSTANCE; } These are static eager singletons in disguise, and thread-safe. It is good to prefer an enum where a statically, eagerly initialized singleton is required. 5. Singleton With Function and Libraries While understanding the challenges and caveats of singletons is a must, why should one worry about reflection, serialization, thread safety, and memory safety when one can leverage proven libraries? Guava is such a popular and proven library, embodying a lot of best practices for writing effective Java programs. I have had the privilege of using the Guava library to explain supplier-based singleton object instantiation, which avoids a lot of heavy-lifting lines of code. Passing a function as an argument is the key feature of functional programming.
While the supplier function provides a way to instantiate object producers, in our case the producer must produce only one object and should keep returning that same object after a single instantiation. We can memoize/cache the created object. Functions defined with lambdas are lazily invoked to instantiate objects, and the memoization technique gives us lazily invoked, dynamic singleton object creation. Java import com.google.common.base.Supplier; import com.google.common.base.Suppliers; public class SupplierSingleton { private SupplierSingleton() {} private static final Supplier<SupplierSingleton> singletonSupplier = Suppliers.memoize(()-> new SupplierSingleton()); public static SupplierSingleton getInstance() { return singletonSupplier.get(); } public static void main(String[] args) { SupplierSingleton supplierSingleton = SupplierSingleton.getInstance(); } } Functional programming, the supplier function, and memoization help in preparing singletons with a cache mechanism. This is most useful when we don't want a heavyweight framework. 6. Singleton With Framework: Spring, Guice Why worry about even preparing an object via a supplier and maintaining a cache? Frameworks like Spring and Guice work on POJOs to provide and maintain singletons. This is heavily used in enterprise development, where many modules each require their own context with many layers. Each context and each layer are good candidates for the singleton pattern. Java import org.springframework.beans.factory.config.ConfigurableBeanFactory; import org.springframework.context.annotation.AnnotationConfigApplicationContext; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Scope; class SingletonBean { } @Configuration public class SingletonBeanConfig { @Bean @Scope(value = ConfigurableBeanFactory.SCOPE_SINGLETON) public SingletonBean singletonBean() { return new SingletonBean(); } public static void main(String[] args) { AnnotationConfigApplicationContext applicationContext = new AnnotationConfigApplicationContext(SingletonBeanConfig.class); SingletonBean singletonBean = applicationContext.getBean(SingletonBean.class); } } Spring is a very popular framework; context and dependency injection are at its core. Java import com.google.inject.AbstractModule; import com.google.inject.Guice; import com.google.inject.Injector; import com.google.inject.Singleton; interface ISingletonBean {} class SingletonBean implements ISingletonBean { } public class SingletonBeanConfig extends AbstractModule { @Override protected void configure() { bind(ISingletonBean.class).to(SingletonBean.class).in(Singleton.class); } public static void main(String[] args) { Injector injector = Guice.createInjector(new SingletonBeanConfig()); ISingletonBean singletonBean = injector.getInstance(ISingletonBean.class); } } Guice from Google is also a framework for preparing singleton objects and an alternative to Spring; note that the binding must be explicitly scoped with in(Singleton.class), since Guice bindings are not singletons by default. Following are the ways singleton objects are leveraged as a "factory of singletons." Factory Method, Abstract Factory, and Builders are associated with the creation and construction of specific objects in the JVM. Wherever we envision the construction of an object with specific needs, we can discover the need for a singleton. Further places where one can check out and discover singletons are as follows.
Prototype or Flyweight
Object pools
Facades
Layering
Context and class loaders
Cache
Cross-cutting concerns and aspect-oriented programming
Conclusion Patterns appear when we solve use cases for our business problems and for non-functional requirement constraints like performance, security, and CPU and memory limits. The singleton object for a given class is such a pattern, and the requirements for its use will fall into place as you discover them. A class is by nature a blueprint for creating multiple objects, yet the need for dynamic heterogeneous containers to prepare "contexts," "layers," "object pools," and "strategic functional objects" pushes us to declare globally accessible or contextually accessible objects. Thanks for your valuable time, and I hope you found something useful to revisit and discover.
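To make the serialization caveat from section 3 concrete, here is a minimal sketch of the readResolve() guard (illustrative only, not taken from the article's examples). Returning the existing INSTANCE during deserialization prevents a second object from materializing:

Java
import java.io.Serializable;

public class SerializationSafeSingleton implements Serializable {

    private static final long serialVersionUID = 1L;
    private static final SerializationSafeSingleton INSTANCE = new SerializationSafeSingleton();

    private SerializationSafeSingleton() {}

    public static SerializationSafeSingleton getInstance() {
        return INSTANCE;
    }

    // Invoked by the serialization machinery during deserialization;
    // returning INSTANCE ensures no second instance is ever created
    private Object readResolve() {
        return INSTANCE;
    }
}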

By Narendran Solai Sridharan

Top Java Experts


Nicolas Fränkel

Head of Developer Advocacy,
Api7

Developer Advocate with 15+ years of experience consulting for many different customers in a wide range of contexts (such as telecoms, banking, insurance, large retail, and the public sector). Usually working on Java/Java EE and Spring technologies, but with focused interests like Rich Internet Applications, testing, CI/CD, and DevOps. Also doubles as a trainer and triples as a book author.

Shai Almog

OSS Hacker, Developer Advocate and Entrepreneur,
Codename One

Software developer with ~30 years of professional experience in a multitude of platforms/languages. JavaOne rockstar/highly rated speaker, author, blogger and open source hacker. Shai has extensive experience in the full stack of backend, desktop and mobile. This includes going all the way into the internals of VM implementation, debuggers etc. Shai started working with Java in 96 (the first public beta) and later on moved to VM porting/authoring/internals and development tools. Shai is the co-founder of Codename One, an Open Source project allowing Java developers to build native applications for all mobile platforms in Java. He's the coauthor of the open source LWUIT project from Sun Microsystems and has developed/worked on countless other projects both open source and closed source. Shai is also a developer advocate at Lightrun.

Andrei Tuchin

Lead Software Developer, VP,
JPMorgan Chase


Ram Lakshmanan

yCrash - Chief Architect

Want to become a Java Performance Expert? Attend my master class: https://ycrash.io/java-performance-training

The Latest Java Topics

Maven Archetypes: Simplifying Project Template Creation
Maven Archetypes enables you to apply best practices within your project or org. Learn how to create archetypes from scratch and based on an existing project.
July 9, 2024
by Gunter Rotsaert DZone Core CORE
· 855 Views · 3 Likes
How To Remove Excel Worksheets Using APIs in Java
Learn how to simplify the process of retrieving worksheet details from an Excel XLSX file, and removing specific worksheets based on that information.
July 5, 2024
by Brian O'Neill DZone Core CORE
· 2,000 Views · 1 Like
Javac and Java Katas, Part 2: Module Path
In this article, look at some exercises dedicated to using JDK tools such as javac, java, and jar to build and run modular Java applications.
July 3, 2024
by Maksim Kren
· 2,425 Views · 1 Like
Integration Testing With Keycloak, Spring Security, Spring Boot, and Spock Framework
Configure Keycloak, integrate with Spring Boot, write repeatable unit tests using Spock, and ensure auth mechanisms work correctly through automated testing.
July 1, 2024
by Greg Lawson
· 3,498 Views · 2 Likes
Implementing Real-Time Credit Card Fraud Detection With Apache Flink on AWS
Real-time fraud detection systems are essential for identifying and preventing fraudulent transactions as they occur. Apache Flink is useful in this scenario.
July 1, 2024
by Harsh Daiya
· 3,336 Views · 1 Like
Twenty Things Every Java Software Architect Should Know
Architects need a deep understanding of Java and its ecosystem, staying updated on the latest trends and best practices.
June 28, 2024
by Reza Ganji, DZone Core
· 8,215 Views · 19 Likes
Spring AI: How To Write GenAI Applications With Java
In this article, take a look at how to write GenAI applications with Java using the Spring AI framework and utilize RAG for improving answers.
June 28, 2024
by Jennifer Reif, DZone Core
· 4,010 Views · 3 Likes
Addressing Memory Issues and Optimizing Code for Efficiency: Glide Case
The approach to identifying and rectifying specific pain points, such as object churn and memory leaks, is commendable, particularly for mobile devices.
June 27, 2024
by Murat Gungor, DZone Core
· 2,494 Views · 1 Like
How To Use Thread.sleep() in Selenium
Learn how to pause test execution with Thread.sleep() in Selenium. Control timing for effective automation testing.
June 25, 2024
by Faisal Khatri
· 3,272 Views · 1 Like
Open-Source Dapr for Spring Boot Developers
Using Dapr with Spring Boot simplifies development of Dapr-enabled apps: run, test, and debug locally without needing to run inside a K8s cluster.
June 24, 2024
by Thomas Vitale
· 5,887 Views · 1 Like
Automate Message Queue Deployment on JBoss EAP
In this article, learn how to fully automate the deployment of your own Message Oriented Middleware using JBoss EAP and Ansible.
June 21, 2024
by Romain Pelisse
· 5,068 Views · 3 Likes
How To Compare DOCX Documents in Java
In this article, learn how to carry out DOCX comparisons programmatically by calling a specialized web API with Java code examples.
June 21, 2024
by Brian O'Neill, DZone Core
· 5,610 Views · 3 Likes
IoT Needs To Get Serious About Security
Security issues in IoT have gotten worse, not better, and it's time we acknowledge that and fix it. It's long past time.
June 20, 2024
by David G. Simmons, DZone Core
· 5,191 Views · 2 Likes
GenAI: Spring Boot Integration With LocalAI for Code Conversion
Learn how GenAI can be used locally or in private data centers using LocalAI, Spring Boot, and LangChain4J for code conversion tasks.
June 19, 2024
by Aftab Shaikh
· 5,142 Views · 2 Likes
Efficient Data Management With Offset and Cursor-Based Pagination in Modern Applications
Explore offset and cursor-based pagination, integrated with Jakarta Data, Quarkus, and MongoDB, highlighting their benefits and practical use in REST APIs.
June 19, 2024
by Otavio Santana, DZone Core
· 4,703 Views · 2 Likes
Cucumber and Spring Boot Integration: Passing Arguments To Step Definitions Explained
Cucumber is a tool that supports Behavior-Driven Development (BDD). Learn how to pass arguments to step definitions when using Cucumber and Spring Boot.
June 18, 2024
by Gunter Rotsaert, DZone Core
· 4,245 Views · 3 Likes
The Past, Present, and Future of Stream Processing
A stream processing journey with IBM, Apama, TIBCO StreamBase, Kafka Streams, Apache Flink, streaming databases, GenAI, and Apache Iceberg.
June 17, 2024
by Kai Wähner, DZone Core
· 4,851 Views · 6 Likes
How To Use Builder Design Pattern and DataFaker Library for Test Data Generation in Automation Testing
In this tutorial, learn how to use the Builder design pattern in Java with the DataFaker library to generate test data for automation testing.
June 15, 2024
by Faisal Khatri
· 10,984 Views · 5 Likes
Benchmarking Java Streams
Take a deep dive into the performance characteristics of Java streams. With the help of JMH, learn how Java streams behave when put under pressure.
June 13, 2024
by Bartłomiej Żyliński, DZone Core
· 7,653 Views · 6 Likes
How To Handle Shadow Root in Selenium Java
In this tutorial, learn how to handle Shadow Root in Selenium Java using the getShadowRoot() method and JavascriptExecutor.
June 12, 2024
by Faisal Khatri
· 3,224 Views · 2 Likes