DZone


JavaScript

JavaScript (JS) is a multi-paradigm programming language with object-oriented features that allows engineers to produce and implement complex functionality within web browsers. JavaScript is popular because of its versatility and is usually the default choice for front-end development unless a project requires a more specialized tool. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for front-end engineers.

Latest Refcards and Trend Reports
Trend Report: Modern Web Development
Refcard #363: JavaScript Test Automation Frameworks
Refcard #288: Getting Started With Low-Code Development

DZone's Featured JavaScript Resources

JavaScript, Node.js, and Apache Kafka for Full-Stack Data Streaming

By Kai Wähner
JavaScript is a pivotal technology for web applications. With the emergence of Node.js, JavaScript became relevant for both client-side and server-side development, enabling a full-stack development approach with a single programming language. Both Node.js and Apache Kafka are built around event-driven architectures, making them naturally compatible for real-time data streaming. This blog post explores open-source JavaScript clients for Apache Kafka and discusses the trade-offs and limitations of JavaScript Kafka producers and consumers compared to stream processing technologies such as Kafka Streams or Apache Flink.

JavaScript: A Pivotal Technology for Web Applications

JavaScript serves as the backbone of interactive and dynamic web experiences. Here are several reasons JavaScript is essential for web applications:

- Interactivity: JavaScript enables the creation of highly interactive web pages. It responds to user actions in real time, allowing for features such as interactive forms, animations, games, and dynamic content updates without reloading the page.
- Client-side scripting: Running in the user's browser, JavaScript reduces server load by handling many tasks on the client side. This can lead to faster page loads and a smoother user experience.
- Universal browser support: All modern web browsers support JavaScript, making it a universally accessible language for web development. This wide support ensures that JavaScript-based features work consistently across different browsers and devices.
- Versatile frameworks and libraries: The JavaScript ecosystem includes a vast array of frameworks and libraries (such as React, Angular, and Vue.js) that streamline the development of web applications, from single-page applications to complex web-based software. These tools offer reusable components, two-way data binding, and other features that enhance productivity and maintainability.
- Real-time applications: JavaScript is ideal for building real-time applications, such as chat apps and live streaming services, thanks to technologies like WebSockets and frameworks that support real-time communication.
- Rich web APIs: JavaScript can access a wide range of web APIs provided by browsers, allowing for the development of complex features, including manipulating the Document Object Model (DOM), making HTTP requests (AJAX or the Fetch API), handling multimedia, and tracking user geolocation.
- SEO and performance optimization: Modern JavaScript frameworks and server-side rendering solutions help build fast-loading web pages that are also search engine friendly, addressing one of the traditional criticisms of JavaScript-heavy applications.

In conclusion, JavaScript's capabilities offer the tools and flexibility needed to build everything from simple websites to complex, high-performance web applications.

Full-Stack Development: JavaScript for the Server Side With Node.js

With the advent of Node.js, JavaScript is no longer used only for the client side of web applications. It serves both client-side and server-side development, enabling a full-stack approach with a single programming language. This simplifies the development process and allows for seamless integration between the frontend and backend. Using JavaScript for backend applications, especially with Node.js, offers several advantages:

- Unified language for frontend and backend: JavaScript on the backend allows developers to use the same language across the entire stack, simplifying development and reducing context switching. This can lead to more efficient development processes and easier maintenance.
- High performance: Node.js is a popular JavaScript runtime built on Chrome's V8 engine, which is known for its speed and efficiency. Node.js uses a non-blocking, event-driven architecture, which makes it particularly suitable for I/O-heavy operations and real-time applications like chat applications and online gaming.
- Vast ecosystem: JavaScript has one of the largest ecosystems, powered by npm (Node Package Manager). npm provides a vast library of modules and packages that can be easily integrated into your projects, significantly reducing development time.
- Community support: The JavaScript community is one of the largest and most active, offering a wealth of resources, frameworks, and tools. This support can be invaluable for solving problems, learning new skills, and staying up to date with the latest technologies and best practices.
- Versatility: JavaScript with Node.js can be used to develop a wide range of applications, from web and mobile applications to serverless functions and microservices. This versatility makes it a go-to choice for many developers and companies.
- Real-time data processing: JavaScript is well-suited for applications requiring real-time data processing and updates, such as live chats, online gaming, and collaboration tools, because of its non-blocking nature and efficient handling of concurrent connections.
- Cross-platform development: Tools like Electron and React Native allow JavaScript developers to build cross-platform desktop and mobile applications, respectively, further extending JavaScript's reach beyond the web.

Node.js's efficiency and scalability, combined with the ability to use JavaScript for both frontend and backend development, have made it a popular choice among developers and companies around the world. Its non-blocking, event-driven I/O characteristics are a perfect match for an event-driven architecture.
JavaScript and Apache Kafka for Event-Driven Applications

Using Node.js with Apache Kafka offers several benefits for building scalable, high-performance applications that require real-time data processing and streaming capabilities. Here are several reasons integrating Node.js with Apache Kafka is helpful:

- Unified language for full-stack development: Node.js allows developers to use JavaScript across both the client and server sides, simplifying development workflows and enabling seamless integration between frontend and backend systems, including Kafka-based messaging or event streaming architectures.
- Event-driven architecture: Both Node.js and Apache Kafka are built around event-driven architectures, making them naturally compatible. Node.js can efficiently handle Kafka's real-time data streams, processing events asynchronously and without blocking.
- Scalability: Node.js is known for handling concurrent connections efficiently, which complements Kafka's scalability. The combination is ideal for applications that handle high volumes of data or requests simultaneously, such as IoT platforms, real-time analytics, and online gaming.
- Large ecosystem and community support: Node.js's extensive npm ecosystem includes Kafka libraries and tools that facilitate the integration. This support speeds up development, offering pre-built modules for connecting to Kafka clusters, producing and consuming messages, and managing topics.
- Real-time data processing: Node.js is well-suited for building applications that require real-time data processing and streaming, a core strength of Apache Kafka. Developers can leverage Node.js to build responsive and dynamic applications that process and react to Kafka data streams in real time.
- Microservices and cloud-native applications: The combination of Node.js and Kafka is powerful for developing microservices and cloud-native applications. Kafka serves as the backbone for inter-service communication, while Node.js is used to build lightweight, scalable service components.
- Flexibility and speed: Node.js enables rapid development and prototyping, so Kafka environments can implement new streaming data pipelines and applications quickly.

In summary, using Node.js with Apache Kafka leverages the strengths of both technologies to build efficient, scalable, real-time applications, making the combination an attractive choice for many developers.

Open Source JavaScript Clients for Apache Kafka

Various open-source JavaScript clients exist for Apache Kafka. Developers use them to build everything from simple message production and consumption to complex streaming applications. When choosing a JavaScript client for Apache Kafka, consider factors like performance requirements, ease of use, community support, commercial support, and compatibility with your Kafka version and features. Here are some of the notable JavaScript clients for Apache Kafka from the past years:

- kafka-node: One of the original Node.js clients for Apache Kafka, kafka-node provides a straightforward and comprehensive API for interacting with Kafka clusters, including producing and consuming messages.
- node-rdkafka: A high-performance client that wraps the native librdkafka library. It is known for its robustness and is suitable for heavy-duty operations, offering advanced features and high throughput for both producing and consuming messages.
- KafkaJS: An Apache Kafka client for Node.js written entirely in JavaScript. It focuses on simplicity and ease of use and supports the latest Kafka features.
KafkaJS is designed to be lightweight and flexible, making it a good choice for applications that require a simple and efficient way to interact with a Kafka cluster.

Challenges With Open Source Projects in General

Open source projects are only successful if an active community maintains them. Familiar issues with open source projects include:

- Lack of documentation: Incomplete or outdated documentation can hinder new users and contributors.
- Complex contribution process: A complicated contribution process can deter potential contributors. This is not purely a disadvantage, as it guarantees code reviews and quality checks for new commits.
- Limited support: Relying on community support can lead to slow issue resolution times. Critical projects often require commercial support from a vendor.
- Project abandonment: Projects can become inactive if maintainers lose interest or lack time.
- Code quality and security: Ensuring high code quality and addressing security vulnerabilities is challenging if nobody is responsible for them or accountable to critical SLAs.
- Governance issues: Disagreements on project direction or decisions can lead to forks or conflicts.

Issues With Kafka's JavaScript Open Source Clients

Some of the above challenges apply to the available open-source JavaScript clients for Kafka. We have seen maintenance inactivity and quality issues as the biggest challenges in these projects. Be aware that it is difficult for maintainers to keep up not only with issues but also with new KIPs (Kafka Improvement Proposals). The Apache Kafka project is active and ships new features two to three times a year. kafka-node, KafkaJS, and node-rdkafka are all on different parts of the "unmaintained" spectrum. For example, kafka-node has not had a commit in five years, and KafkaJS had an open call for maintainers around a year ago. Additionally, commercial support was not available for enterprises to get guaranteed response times and help in case of production issues. Unfortunately, production issues happened regularly in critical deployments. For this reason, Confluent open-sourced a new JavaScript client for Apache Kafka with guaranteed maintenance and commercial support.

Confluent's Open Source JavaScript Client for Kafka, Powered by librdkafka

Confluent provides a Kafka client for JavaScript. This client works with Confluent Cloud (fully managed service) and Confluent Platform (self-managed deployments), but it is an open-source project and works with any Apache Kafka environment. The JavaScript client for Kafka comes with a long-term support and development strategy. The source code is available on GitHub, and the client is available via npm (Node Package Manager), the default package manager for Node.js. This JavaScript client is a librdkafka-based library (derived from node-rdkafka) with API compatibility for the very popular KafkaJS library, so users of KafkaJS can easily migrate their code over (see the migration guide in the repo). At the time of writing in February 2024, the new Confluent JavaScript Kafka client is in early access and not for production usage; GA follows later in 2024. Please review the GitHub project, try it out, and share feedback and issues when you build new projects or migrate from other JavaScript clients.

What About Stream Processing?

Keep in mind that Kafka clients only provide a produce and consume API. However, the real potential of event-driven architectures comes with stream processing: a computing paradigm that allows for the continuous ingestion, processing, and analysis of data streams in real time. Event stream processing enables immediate responses to incoming data without the need to store and process it in batches.
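As a toy illustration of what continuous stream processing means, here is a hand-rolled tumbling-window count in plain JavaScript. This is a sketch of the concept only, not a real Kafka Streams or Flink API:

```javascript
// Toy tumbling-window aggregation: count events per fixed time window.
// Real stream processors (Kafka Streams, Flink) add state stores,
// fault tolerance, and exactly-once semantics on top of this idea.
function tumblingWindowCounts(events, windowMs) {
  const counts = new Map();
  for (const { timestamp } of events) {
    // Each event falls into exactly one window, keyed by the window's start time.
    const windowStart = Math.floor(timestamp / windowMs) * windowMs;
    counts.set(windowStart, (counts.get(windowStart) || 0) + 1);
  }
  return counts;
}

// Three events land in the first 1-second window, one in the next.
const sampleEvents = [
  { timestamp: 100 }, { timestamp: 250 }, { timestamp: 900 },
  { timestamp: 1200 },
];
console.log([...tumblingWindowCounts(sampleEvents, 1000).entries()]);
// [[0, 3], [1000, 1]]
```

Even this trivial version shows why frameworks matter: it keeps state (the counts map) across events, which a plain produce/consume client does not do for you.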
Stream processing frameworks like Kafka Streams or Apache Flink offer several key features that enable real-time data processing and analytics:

- State management: Stream processing systems can manage state across data streams, allowing for complex event processing and aggregation over time.
- Windowing: They support processing data in windows, which can be based on time, data size, or other criteria, enabling temporal data analysis.
- Exactly-once processing: Advanced systems provide exactly-once processing semantics, ensuring data is processed once and only once, even in the event of failures.
- Integration with external systems: They offer connectors for integrating with various data sources and sinks, including databases, message queues, and file systems.
- Event time processing: They can handle out-of-order data based on the time events actually occurred, not just when they are processed.

Stream processing frameworks are NOT available for most programming languages, including JavaScript. Therefore, if you live in the JavaScript world, you have three options:

- Build all the stream processing capabilities yourself. Trade-off: a lot of work!
- Leverage a stream processing framework in SQL (or another programming language). Trade-off: this is not JavaScript!
- Don't do stream processing and stay with APIs and databases. Trade-off: cannot solve many innovative use cases.

Apache Flink provides APIs for Java, Python, and ANSI SQL. SQL is an excellent option to complement JavaScript code. In a fully managed data streaming platform like Confluent Cloud, you can leverage serverless Flink SQL for stream processing and combine it with your JavaScript applications.

One Programming Language Does NOT Solve All Problems

JavaScript has broad adoption and sweet spots for client and server development. The new Kafka client for JavaScript from Confluent is open source and has a long-term development strategy, including commercial support. Easy migration from KafkaJS makes adoption very simple. If you can live with the dependency on librdkafka (which is acceptable for most situations), then this is the way to go for JavaScript and Node.js development with Kafka producers and consumers.

JavaScript is NOT an all-rounder. The data streaming ecosystem is broad, open, and flexible. Modern enterprise architectures leverage microservices or data mesh principles, so you can choose the right technology for each application. Learn how to build data streaming applications in your favorite programming language with an open-source Kafka client by looking at Confluent's developer examples: JavaScript/Node.js, Java, HTTP/REST, C/C++/.NET, Kafka Connect DataGen, Go, Spring Boot, Python, Clojure, Groovy, Kotlin, Ruby, Rust, and Scala.

Which JavaScript Kafka client do you use? What are your experiences? Or do you already develop most applications with stream processing using Kafka Streams or Apache Flink? Let's connect on LinkedIn and discuss it!
The Benefits of Using RTK Query: A Scalable and Efficient Solution

By Oren Farhi
As developers, we're constantly seeking ways to streamline our workflows and enhance the performance of our applications. One tool that has gained significant traction in the React ecosystem is Redux Toolkit Query (RTK Query). This library, built on top of Redux Toolkit, provides a solution for managing asynchronous data fetching and caching. In this article, we'll explore the key benefits of using RTK Query.

1. Simplicity and Ease of Use

One of the most compelling advantages of RTK Query is its simplicity. Below is how one can define endpoints for various operations, such as querying data and creating, updating, and deleting resources. The injectEndpoints method allows you to define these endpoints in a concise and declarative manner, reducing boilerplate code and improving readability.

TypeScript

booksApi.injectEndpoints({
  endpoints: builder => ({
    getBooks: builder.query<IBook[], void | string[]>({
      // ...
    }),
    createBundle: builder.mutation<any, void>({
      // ...
    }),
    addBook: builder.mutation<string, AddBookArgs>({
      // ...
    }),
    // ...
  }),
});

2. Automatic Caching and Invalidation

One of the key features of RTK Query is its built-in caching mechanism. The library automatically caches the data fetched from your endpoints, ensuring that subsequent requests for the same data are served from the cache, reducing network overhead and improving performance. These examples demonstrate how RTK Query handles cache invalidation using the invalidatesTags option.

TypeScript

createBundle: builder.mutation<any, void>({
  invalidatesTags: [BooksTag],
  // ...
}),
addBook: builder.mutation<string, AddBookArgs>({
  invalidatesTags: [BooksTag],
  // ...
}),

By specifying the BooksTag, RTK Query knows which cache entries to invalidate when a mutation (e.g., createBundle or addBook) is performed, ensuring that the cache stays up-to-date and consistent with the server data.
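As a library-free sketch of the idea behind tag-based invalidation (a toy cache, not RTK Query's actual implementation; the helper names are invented for illustration):

```javascript
// Toy tag-based cache: query results are stored with tags, and a
// mutation invalidates every cached entry sharing one of its tags,
// forcing a refetch on the next read.
const cache = new Map(); // key -> { data, tags }

function setCached(key, data, tags) {
  cache.set(key, { data, tags });
}

function invalidateTags(tags) {
  for (const [key, entry] of cache) {
    if (entry.tags.some((t) => tags.includes(t))) {
      cache.delete(key); // entry is stale; next read must refetch
    }
  }
}

setCached('getBooks', ['book-1', 'book-2'], ['Books']);
setCached('getAuthors', ['author-1'], ['Authors']);

// A mutation tagged ['Books'] (like addBook above) invalidates getBooks
// but leaves unrelated cache entries alone.
invalidateTags(['Books']);
console.log(cache.has('getBooks'));   // false
console.log(cache.has('getAuthors')); // true
```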
3. Scalability and Maintainability

As your application grows in complexity, managing asynchronous data fetching and caching can become increasingly challenging. RTK Query's modular approach and separation of concerns make it easier to scale and maintain your codebase. Each endpoint is defined independently, allowing you to easily add, modify, or remove endpoints as needed without affecting the rest of your application.

TypeScript

endpoints: builder => ({
  getBooks: builder.query<IBook[], void | string[]>({
    // ...
  }),
  createBundle: builder.mutation<any, void>({
    // ...
  }),
  // ...
})

This modular structure promotes code reusability and makes it easier to reason about the different parts of your application, leading to better maintainability and collaboration within your team.

4. Efficient Data Fetching and Normalization

RTK Query provides built-in support for efficient data fetching and normalization. The queryFn shows how you can fetch data from multiple sources and normalize the data using the toSimpleBooks function. However, the implementation can be optimized to reduce code duplication and improve readability. Here's an optimized version of the code:

TypeScript

async queryFn(collections) {
  try {
    const [snapshot, snapshot2] = await Promise.all(
      collections.map(fetchCachedCollection)
    );
    const success = await getBooksBundle();
    const books = success
      ? toSimpleBooks([...snapshot.docs, ...snapshot2.docs])
      : [];
    return { data: books };
  } catch (error) {
    return { error };
  }
}

In this optimized version, we're using Promise.all to fetch the two collections (latest-books-1-query and latest-books-2-query) concurrently. This approach ensures that we don't have to wait for one collection to finish fetching before starting the other, potentially reducing the overall fetching time. Additionally, we've moved the getBooksBundle call inside the try block, ensuring that it's executed only if the collections are fetched successfully. This change helps maintain a clear separation of concerns and makes the code easier to reason about. By leveraging RTK Query's efficient data fetching capabilities and employing best practices like Promise.all, you can ensure that your application fetches and normalizes data in an optimized and efficient manner, leading to improved performance and a better user experience.

5. Ease of Use With Exposed Hooks

One of the standout features of RTK Query is the ease of use it provides through its exposed hooks. Finally, I like to export the generated typed hooks so you can use them (i.e., useGetBooksQuery, useCreateBundleMutation, etc.) to interact with the defined endpoints from your React components. These hooks abstract away the complexities of managing asynchronous data fetching and caching, allowing you to focus on building your application's logic.

TypeScript

export const {
  useGetBooksQuery,
  useLazyGetBooksQuery,
  useCreateBundleMutation,
  useAddBookMutation,
  useDeleteBookMutation,
  useUpdateBookMutation,
} = booksApi;

By leveraging these hooks, you can fetch data, trigger mutations, and handle loading and error states, all while benefiting from the caching and invalidation mechanisms provided by RTK Query.

Conclusion

By adopting RTK Query, you gain access to a solution for managing asynchronous data fetching and caching, while experiencing the simplicity, scalability, and ease of use provided by its exposed hooks. Whether you're building a small application or a large-scale project, RTK Query can help you streamline your development process and deliver high-performance, responsive applications. The code within this post is taken from a live app in production, ReadM, a real-time AI platform for reading fluency assessments and insights.
How To Protect Node.js Form Uploads With a Deterministic Threat Detection API
By Brian O'Neill
How To Create a Network Graph Using JavaScript
By Alex Carter
10 Svelte Data Grids: Choose the Right One for Your Project
By Alena Stasevich
Mocking Dependencies and AI Is the Next Frontier in Vue.js Testing

Vue.js is a popular JavaScript framework, and as such, it is crucial to ensure that its components work as they are supposed to: effectively and, more importantly, reliably. Mocking dependencies is one of the most efficient methods of testing, as we will discover in this article.

The Need for Mocking Dependencies

Mocking dependencies is a way of exerting control over tests by isolating components under test from their dependencies. Because components can depend on anything from APIs to services to user interactions such as clicks or hovers, it is important to be able to isolate them to test their durability, behavior, and reliability. Mocking dependencies allows users to create a controlled testing environment to verify a component's behavior in isolation. There are several reasons for mocking dependencies in Vue.js tests; below, we highlight strategies for isolating components that will enhance the performance of your tests.

Isolation

When developers test a particular component, they want to focus solely on the behavior of that specific component without any input or interaction from its dependencies. Mocking lets users isolate the component and test it while replacing dependencies with controlled substitutes.

Controlled Testing Environment

When mocking dependencies, users can control the environment by simulating different scenarios and conditions without falling back on external resources such as real-world services, making tests much more cost-effective and reliable.

Speed and Reduced Complexity

Mocking strips away dependencies that may introduce latency or require additional setup steps, all of which increase the time it takes to get results. By removing these dependencies, tests not only run faster but also become more efficient and reliable.
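The isolation idea can be sketched without any framework: inject the dependency, then substitute a controlled fake in the test. All names below are illustrative, not Vue Test Utils APIs:

```javascript
// A "component" whose data-fetching dependency is injected, so a test
// can replace the real network call with a controlled mock.
async function loadUserGreeting(fetchUser, userId) {
  try {
    const user = await fetchUser(userId);
    return `Hello, ${user.name}!`;
  } catch {
    return 'Hello, guest!'; // controlled fallback for the error path
  }
}

// In a test, mocks stand in for the real HTTP call: deterministic,
// fast, and able to simulate edge cases like failures on demand.
const mockFetchUser = async (id) => ({ id, name: 'Ada' });
const failingFetchUser = async () => { throw new Error('network down'); };

loadUserGreeting(mockFetchUser, 1).then(console.log);    // Hello, Ada!
loadUserGreeting(failingFetchUser, 1).then(console.log); // Hello, guest!
```

Vue Test Utils applies the same principle to components, replacing child components, global objects, and API calls with controlled substitutes.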
Consistency

By removing extraneous variables, mocking provides test results that are not skewed by factors such as network availability or data changes.

Testing Edge Cases

Some scenarios are hard to replicate with real dependencies; mocking makes it possible to test edge cases and error conditions, enhancing the debugging process. For example, mocking an API response with unexpected data can help verify how components handle such situations.

AI Working Hand in Hand With Mocking

AI (artificial intelligence) has been making waves in software testing, and its integration into testing Vue.js applications can streamline the entire mocking process. By predicting and automating the creation of mocks based on previous test data, it can produce more valuable insights. AI's capacity to process large amounts of data is why it is being adopted across many industries. Mocking often generates synthetic data covering a wide range of scenarios, and AI can break that data down and make it more digestible, so human testers do not need to comb through it themselves, which is a time-consuming process.

AI can also be used to generate mock responses dynamically. For instance, instead of manually defining mock responses for different API endpoints, AI algorithms can generate mock responses from past patterns. They can also adapt based on feedback, optimizing the mocking strategy to better cover scenarios and edge cases, which in turn improves results. Beyond data generation, AI algorithms can detect anomalies within the system or application: by monitoring the interactions between mocked dependencies and the test environment, AI can identify unexpected behavior and deviations, helping uncover bugs that manual tests may have missed. AI can also take recent code changes into account and prioritize mocks that target the areas most likely to be affected.

Mocking Events and Methods

Vue Test Utils allows developers to mock methods and events to verify that a component's responses are accurate. Even when the application is put through different scenarios and edge cases, the tests should provide relevant insight into the component's behavior. Take, for example, a component that relies on a certain method to fetch data or handle user input: a test with mocked dependencies verifies whether the component reacts the way it should, and it can test for efficacy as well. Mocking events and methods is commonplace in software development. Without invoking real-world implementations, users can obtain simulation results that are both reliable and effective. It is particularly useful for isolating components when testing conditions that are difficult to replicate in real time.

Leveraging Jest for Snapshot Testing

Another powerful strategy is snapshot testing, whereby users capture the rendered output of a component and compare it with a baseline snapshot. Think of it as a side-by-side comparison that indicates what has changed. This approach helps identify unintended changes in the component's output and ensures that modifications do not break existing functionality. To implement snapshot testing, users render the component using Vue Test Utils and then use Jest to capture and compare snapshots, which provides a quick way to verify the visual and structural integrity of the component over time. By combining snapshot testing with other mocking strategies, developers can achieve a comprehensive testing suite that ensures their Vue.js components are robust, maintainable, and free from regressions.

Going Forward

Properly mocking dependencies in Vue.js tests is essential for isolating and testing components effectively, ensuring that tests are both robust and reliable. Vue Test Utils, with its rich features for stubbing child components, mocking global objects, and intercepting API calls, is highly capable. Furthermore, by leveraging AI in software testing, developers can further refine the process, creating more accurate and faster testing cycles. As the complexity of web applications continues to grow, the ability to isolate components and test them thoroughly will become a benchmark for maintaining quality control over the applications being developed and released.

By Anton Lucanus
React 19: Comprehensive Guide To the Latest Features

React 19 Beta is finally here, after a two-year hiatus. The React team has published an article about the latest version. Among the standout features is the introduction of a new compiler, aimed at performance optimization and simplifying developers' workflows. Furthermore, the update brings significant improvements to handling state updates triggered by responses, with the introduction of actions and new form state hooks. Additionally, the introduction of the use() hook simplifies asynchronous operations even further, allowing developers to manage loading states and context seamlessly. React 19 Beta also marks a milestone in improving accessibility and compatibility, with full support for Web Components and custom elements. Moreover, developers will benefit from built-in support for document metadata, async scripts, stylesheets, and preloading resources, further enhancing the performance and user experience of React applications. In this article, the features mentioned above will be explained in depth.

React 19 New Features

New Compiler: Revolutionizing Performance Optimization and Replacing useMemo and useCallback

React 19 introduces an experimental compiler that turns React code into optimized JavaScript. While other frontend frameworks such as Astro and Svelte have their own compilers, React now joins them, enhancing its performance optimization. React applications frequently encountered performance challenges due to excessive re-rendering triggered by state changes. To mitigate this issue, developers often manually employed useMemo, useCallback, or the memo API. These mechanisms optimize performance by memoizing certain computations and callback functions. However, the React compiler automates these optimizations. Consequently, this automation not only enhances the speed and efficiency of React applications but also simplifies the development process for engineers.
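To make the trade-off concrete, here is a plain-JavaScript sketch of the caching idea behind useMemo, the kind of dependency-tracking bookkeeping the new compiler is meant to automate (createMemo is an illustrative helper, not a React API):

```javascript
// Minimal sketch of useMemo-style caching: recompute only when inputs change.
function createMemo(compute) {
  let lastDeps = null;
  let lastValue;
  return function (deps) {
    const changed =
      lastDeps === null || deps.some((d, i) => !Object.is(d, lastDeps[i]));
    if (changed) {
      lastValue = compute(...deps);
      lastDeps = deps;
    }
    return lastValue;
  };
}

let calls = 0;
const expensiveSum = createMemo((a, b) => {
  calls += 1; // track how often the "expensive" work actually runs
  return a + b;
});

console.log(expensiveSum([2, 3])); // 5 (computed)
console.log(expensiveSum([2, 3])); // 5 (cached, no recompute)
console.log(expensiveSum([2, 4])); // 6 (inputs changed, recomputed)
console.log(calls); // 2
```

The point of the compiler is that developers no longer write this kind of dependency list by hand; the equivalent caching is inserted automatically.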
Actions and New Form State Hooks

The introduction of actions represents one of the most significant enhancements among React's latest features. These changes streamline the handling of state updates triggered by responses, particularly in scenarios involving data mutations. A common scenario arises when a user initiates a data mutation, such as submitting a form to modify their information. This typically involves making an API request and handling its response. Developers previously had to manage the various states manually, including pending states, errors, and optimistic updates. With new hooks like useActionState, developers can now handle this process efficiently. By simply passing an asynchronous function to this hook, developers get error states, submit actions, and pending states handled for them. This simplifies the codebase and enhances the overall development experience. The React 19 documentation highlights the evolution of these hooks: React.useActionState was formerly known as ReactDOM.useFormState in the Canary releases. Moreover, the introduction of useFormStatus addresses another common challenge in design systems. Design components often require access to information about the <form> they are embedded in, without prop drilling. While this could previously be achieved through Context, the new useFormStatus hook offers a solution by exposing the pending status of the enclosing form submission, enabling components to disable or enable buttons and adjust styles accordingly.

The use() Hook

The new use() hook in React 19 is designed specifically for asynchronous resources. This innovative hook changes the way developers handle asynchronous operations within React applications. With use(), developers can now pass a promise directly, eliminating the need for useEffect and setIsLoading to manage loading states and additional dependencies.
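A minimal sketch of the basic pattern (the endpoint and component names are hypothetical; note that the promise is created outside the component so it is not recreated on every render):

```jsx
import { use, Suspense } from "react";

// Hypothetical endpoint; in a real app the promise would typically come
// from a cache or a data-fetching library rather than module scope.
const userPromise = fetch("/api/user").then((res) => res.json());

function UserName() {
  // use() suspends rendering until the promise resolves
  const user = use(userPromise);
  return <p>{user.name}</p>;
}

export function App() {
  return (
    <Suspense fallback={<p>Loading...</p>}>
      <UserName />
    </Suspense>
  );
}
```

The surrounding Suspense boundary supplies the loading state that previously required a useEffect plus an isLoading flag.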
The use() hook not only handles loading states effortlessly, but it also provides flexibility in handling context. Developers can easily pass context to the use() hook, allowing integration with the broader application context. Additionally, the hook enables reading context within its scope, further enhancing its utility and convenience. By abstracting away the complexities of asynchronous operations and context management, the use() hook in React 19 represents a significant leap forward in developer productivity and application performance.

ForwardRef

Developers no longer need to use forwardRef to access the ref prop. Instead, React provides direct access to ref as a regular prop, eliminating an additional layer and simplifying the component hierarchy. This enhances code readability and offers developers a more intuitive and efficient way to work with refs.

Support for Document Metadata

Another notable enhancement in React 19 is the built-in support for document metadata. This change marks a departure from reliance on external libraries like React Helmet to manage document metadata within React applications. Previously, developers often turned to React Helmet, especially when working outside of frameworks like Next.js, to manipulate document metadata such as titles and links. With this latest update, React supports metadata natively within its components, eliminating the need for additional dependencies. Now, developers can seamlessly modify document metadata from anywhere within their React codebase, offering flexibility.

Support for Async Scripts, Stylesheets, and Preloading Resources

This update manages the asynchronous loading and rendering of stylesheets, fonts, and scripts, including those defined within <style>, <link>, and <script> tags.
Notably, developers now have the flexibility to load stylesheets within the context of Suspense, enhancing the performance and user experience of applications by ensuring a smoother transition during asynchronous component rendering. Furthermore, when components are rendered asynchronously, developers can effortlessly incorporate the loading of styles and scripts directly within those components, streamlining the development process and improving overall code organization.

Full Support for Web Components and Custom Elements

Unlike previous iterations, where compatibility was only partial, React 19 now seamlessly integrates with Web Components and custom elements, offering comprehensive support for their use within React applications. Previously, developers encountered challenges as React's handling of props sometimes clashed with attributes of custom elements, leading to conflicts and inconsistencies. With this latest update, React provides an intuitive experience for incorporating Web Components and custom elements into React-based projects. This enhanced compatibility opens up a world of possibilities for developers, allowing them to leverage the power and flexibility of Web Components. With full support for Web Components and custom elements, React solidifies its position as a versatile and adaptable framework.

Conclusion

In conclusion, React 19 Beta represents a significant step forward in the evolution of the React ecosystem, offering developers powerful tools and features to build faster, more efficient, and more accessible applications. From the introduction of a new compiler to improved state management and seamless integration with Web Components, React 19 Beta offers tools to elevate the developer experience and push the boundaries of what's possible in modern web development.

By Beste Bayhan
How To Optimize AG Grid Performance With React

AG Grid is a feature-rich JavaScript library primarily used to build robust data tables in web applications. It's used by almost 90% of Fortune 500 companies, and it's especially useful in Business Intelligence (BI) and FinTech applications. React is the market leader among JavaScript libraries for building enterprise web and mobile applications. It is widely adopted by major companies and boasts a large community. In this article, we will set up a React web application and use AG Grid to build a performant data table. All the code in this article is available at this GitHub link.

Prerequisites

Node.js and npm are installed on your system. Knowledge of JavaScript and React.

Set Up a New React Application

Verify that Node.js and npm are installed. Commands to check: node -v and npm -v We will use "create react app" to initiate a new React application; let's install it globally on the machine using npm install -g create-react-app Create a new React application using npx create-react-app ag-grid-react (note: npm naming rules do not allow capital letters in project names) Wait for the app to be fully created and then go to the newly created app's folder using cd ag-grid-react Start the application using npm start. Soon you will be able to access this React app on localhost port 3000 using the URL http://localhost:3000 Now we are ready to make modifications to our React app. You can use the code editor of your choice; I have used Visual Studio Code.

Integrating AG Grid Into Our React App

AG Grid comes in two flavors, the community version and the enterprise version. We will use the community version so as not to incur any licensing fee. The enterprise version is preferred in large corporations due to the set of additional features it provides. Install the AG Grid community version with React support using npm install ag-grid-react Let's create two folders under the src folder in our project: components and services. Let's create a service under the services folder. This service will have the job of communicating with the backend and fetching data.
For simplicity purposes we will not be doing actual API calls; instead, we will have a JSON file with all sample data. Let's create a movie-data.json file and add content to it from here. Add movie-service.js to the services folder. Our service will have two methods and one exported constant. Soon, all of these will make sense. Below is the reference code for this file. JavaScript import movies from './movie-data.json'; const DEFAULT_PAGE_SIZE = 5; const countOfMovies = async() => { return movies.movies.length; }; const fetchMovies = async() => { return movies.movies; }; export { DEFAULT_PAGE_SIZE, countOfMovies, fetchMovies }; At this point let's create our React component which will hold the AG Grid table. Add an AGGridTable.js file under the components folder under the src directory. Let's import React and AG Grid in our component and lay down the basic component export JavaScript import React, { useState, useEffect } from 'react'; import { AgGridReact } from 'ag-grid-react'; import 'ag-grid-community/styles/ag-grid.css'; import 'ag-grid-community/styles/ag-theme-quartz.css'; export const AgGridTable = () => {} We are going to use the AgGridReact component to render our table; this component needs two main things: Columns we want to display in our table. Rows we want to display in our table. We have to pass a parameter named columnDefs to our AgGridReact to tell it how we want our columns to be set up. If you look at our movie data in the movie-data.json file, we have columns movieId, movieName and releaseYear. Let's map these to our column definition parameters. We can achieve it using the below lines of code. JavaScript const columnDefs = [ { field: 'movieId', headerName: "Movie ID", minWidth: 100 }, { field: 'movieName', headerName: "Movie Name", flex: 1 }, { field: 'releaseYear', headerName: "Release Year", flex: 1 } ]; We need to fetch actual movie data, and we are going to leverage the fetchMovies function from our movie service. Also, we would want to load it on page load.
This can be achieved using the useEffect hook of React by passing an empty dependency array. JavaScript useEffect(() => { const fetchCount = async () => { const totalCount = await countOfMovies(); setTotalRecords(totalCount); } fetchCount(); }, []); useEffect(() => { fetchData(); }, []); const fetchData = async () => { setIsLoading(true); try { const response = await fetchMovies(); setMovieData(response); } catch (error) { console.error(error); } finally { setIsLoading(false); } }; Let's add some nice loading indicator variables to indicate to our users something is getting processed. JavaScript const [isLoading, setIsLoading] = useState(false); Putting everything together we get our component as below. JavaScript import React, { useState, useEffect } from 'react'; import { AgGridReact } from 'ag-grid-react'; import 'ag-grid-community/styles/ag-grid.css'; import 'ag-grid-community/styles/ag-theme-quartz.css'; import { countOfMovies, fetchMovies } from '../services/movie-service'; export const AgGridTable = () => { const [movieData, setMovieData] = useState([]); const [totalRecords, setTotalRecords] = useState(0); const [isLoading, setIsLoading] = useState(false); const columnDefs = [ { field: 'movieId', headerName: "Movie ID", minWidth: 100 }, { field: 'movieName', headerName: "Movie Name", flex: 1 }, { field: 'releaseYear', headerName: "Release Year", flex: 1 } ]; useEffect(() => { const fetchCount = async () => { const totalCount = await countOfMovies(); setTotalRecords(totalCount); } fetchCount(); }, []); useEffect(() => { fetchData(); }, []); const fetchData = async () => { setIsLoading(true); try { const response = await fetchMovies(); setMovieData(response); } catch (error) { console.error(error); } finally { setIsLoading(false); } }; return ( <> {isLoading && <div>Loading...</div>} <div className="ag-theme-quartz" style={{ height: 300, minHeight: 300 }}> { totalRecords > 0 && <AgGridReact rowData={movieData} columnDefs={columnDefs} /> } </div> </> ) } Let's
update our app.js to include our newly built component and perform cleanup to remove the basic create-react-app-generated code. Below is the updated code for app.js: JavaScript import './App.css'; import { AgGridTable } from './components/AgGridTable'; function App() { return ( <div className="App"> <header className="App-header"> <h1>Welcome logged in user.</h1> </header> <AgGridTable></AgGridTable> </div> ); } export default App; Our table should load on the UI now.

Enhancing Performance With Pagination

We have been rendering all the rows in the table in one go until now. This approach doesn't scale in the real world. Imagine we had 10,000 rows instead of just 100; our page would be very slow and UI performance would take a huge hit. We can easily improve this by paginating through our data. In simpler terms, pagination means breaking our data into sets of x items and displaying one set at a time. Some key benefits of adding pagination are: Reduced DOM size, resulting in optimized memory usage Improved rendering speed Enhanced scrolling performance Faster updates Let's add additional parameters to the AgGridReact setup to enable pagination. pagination={true} tells AG Grid we want pagination paginationPageSize tells AG Grid the default number of items to be displayed on a page initially. We pass an array to the paginationPageSizeSelector parameter; it defines the different page sizes we allow our users to choose from. totalRows tells AG Grid how many records there are in total, which, in turn, helps count the number of pages in our table. To have the right value for all of the above parameters, we need to update our code to fetch the total row count and define the page size selector array.
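As an aside before the full listing, the arithmetic behind these parameters is straightforward. A plain-JavaScript sketch, independent of AG Grid, of deriving a page count and slicing rows into pages (the paginate helper is illustrative, not an AG Grid API):

```javascript
// Plain-JavaScript sketch of pagination: page count and per-page slicing.
function paginate(rows, pageSize, pageNumber) {
  const pageCount = Math.ceil(rows.length / pageSize);
  const start = (pageNumber - 1) * pageSize;
  return { pageCount, pageRows: rows.slice(start, start + pageSize) };
}

// 12 sample rows at 5 per page -> 3 pages, last page holds 2 rows
const rows = Array.from({ length: 12 }, (_, i) => ({ movieId: i + 1 }));
const { pageCount, pageRows } = paginate(rows, 5, 3);
console.log(pageCount);       // 3
console.log(pageRows.length); // 2
```

AG Grid performs the equivalent bookkeeping internally, which is why it needs the page size and the total row count.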
JavaScript import React, { useState, useEffect, useMemo } from 'react'; import { AgGridReact } from 'ag-grid-react'; import 'ag-grid-community/styles/ag-grid.css'; import 'ag-grid-community/styles/ag-theme-quartz.css'; import { DEFAULT_PAGE_SIZE, countOfMovies, fetchMovies } from '../services/movie-service'; export const AgGridTable = () => { const [movieData, setMovieData] = useState([]); const [totalRecords, setTotalRecords] = useState(0); const [isLoading, setIsLoading] = useState(false); const columnDefs = [ { field: 'movieId', headerName: "Movie ID", minWidth: 100 }, { field: 'movieName', headerName: "Movie Name", flex: 1 }, { field: 'releaseYear', headerName: "Release Year", flex: 1 } ]; useEffect(() => { const fetchCount = async () => { const totalCount = await countOfMovies(); setTotalRecords(totalCount); } fetchCount(); }, []); useEffect(() => { fetchData(); }, []); const fetchData = async () => { setIsLoading(true); try { const response = await fetchMovies(); setMovieData(response); } catch (error) { console.error(error); } finally { setIsLoading(false); } }; const paginationPageSizeSelector = useMemo(() => { return [5, 10, 20]; }, []); return ( <> {isLoading && <div>Loading...</div>} <div className="ag-theme-quartz" style={{ height: 300, minHeight: 300 }}> { totalRecords > 0 && <AgGridReact rowData={movieData} columnDefs={columnDefs} pagination={true} paginationPageSize={DEFAULT_PAGE_SIZE} paginationPageSizeSelector={paginationPageSizeSelector} totalRows={totalRecords} /> } </div> </> ) } With this code, we get pagination built in with a sensible default page size.

Conclusion

AG Grid integration with React is easy to set up, and we can boost performance with techniques such as pagination. There are other ways to lazy load rows in AG Grid beyond pagination; going through the AG Grid documentation should help you get familiar with them. Happy coding!

By Anujkumarsinh Donvir
Getting Started With Valkey Using JavaScript

Valkey is an open-source alternative to Redis. It's a community-driven, Linux Foundation project created to keep the project available for use and distribution under the open-source Berkeley Software Distribution (BSD) 3-clause license after the Redis license changes. I think the path to Valkey was well summarised in its inaugural blog post. I will walk through how to use Valkey for JavaScript applications using existing clients in the Redis ecosystem as well as iovalkey (a friendly fork of ioredis).

Using Valkey With node-redis

node-redis is a popular and widely used client. Here is a simple program that uses the Subscriber component of the PubSub API to subscribe to a channel. JavaScript import redis from 'redis'; const client = redis.createClient(); const channelName = 'valkey-channel'; (async () => { try { await client.connect(); console.log('Connected to Redis server'); await client.subscribe(channelName, (message, channel) => { console.log(`message "${message}" received from channel "${channel}"`) }); console.log('Waiting for messages...'); } catch (err) { console.error('Error:', err); } })(); To try this with Valkey, let's start an instance using the Valkey Docker image: docker run --rm -p 6379:6379 valkey/valkey Also, head here to get an OS-specific distribution, or use Homebrew (on Mac): brew install valkey. You should now be able to use the Valkey CLI (valkey-cli). Get the code from the GitHub repo: Shell git clone https://github.com/abhirockzz/valkey-javascript cd valkey-javascript npm install Start the subscriber app: node subscriber.js Publish a message and ensure that the subscriber is able to receive it: valkey-cli PUBLISH valkey-channel 'hello valkey' Nice! We were able to write a simple application with an existing Redis client and run it using Valkey (instead of Redis). Sure, this is an oversimplified example, but there were no code changes required.

Use Valkey With the ioredis Client

ioredis is another popular client.
To be doubly sure, let's try ioredis with Valkey as well. Let's write a publisher application: JavaScript import Redis from 'ioredis'; const redisClient = new Redis(); const channelName = 'valkey-channel'; const message = process.argv[2]; if (!message) { console.error('Please provide a message to publish.'); process.exit(1); } async function publishMessage() { try { const receivedCount = await redisClient.publish(channelName, message); console.log(`Message "${message}" published to channel "${channelName}". Received by ${receivedCount} subscriber(s).`); } catch (err) { console.error('Error publishing message:', err); } finally { // Close the client connection await redisClient.quit(); } } publishMessage(); Run the publisher, and confirm that the subscriber app is able to receive the messages: Shell node publisher.js 'hello1' node publisher.js 'hello2' You should see these logs in the subscriber application: Shell message "hello1" received from channel "valkey-channel" message "hello2" received from channel "valkey-channel"

Switch to the iovalkey Client

As mentioned, iovalkey is a friendly fork of ioredis. I made the following changes to port the publisher code to use iovalkey: Commented out import Redis from 'ioredis'; Added import Redis from 'iovalkey'; Installed iovalkey: npm install iovalkey Here is the updated version; yes, this was all I needed to change (at least for this simple application): JavaScript // import Redis from 'ioredis'; import Redis from 'iovalkey'; Run the new iovalkey-based publisher, and confirm that the subscriber is able to receive it: Shell node publisher.js 'hello from iovalkey' You should see these logs in the subscriber application: Shell message "hello from iovalkey" received from channel "valkey-channel" Awesome, this is going well. We are ready to sprinkle some generative AI now!

Use Valkey With LangChainJS

Along with Python, JavaScript/TypeScript is also widely used in the generative AI ecosystem.
LangChain is a popular framework for developing applications powered by large language models (LLMs). LangChain has JS/TS support in the form of LangchainJS. Having worked a lot with the Go port (langchaingo), as well as Python, I wanted to try LangchainJS. One of the common use cases is to use Redis as a chat history component in generative AI apps. LangchainJS has this built in, so let's try it out with Valkey.

Using Valkey as Chat History in LangChain

To install LangchainJS: npm install langchain For the LLM, I will be using Amazon Bedrock (it's supported natively with LangchainJS), but feel free to use others. For Amazon Bedrock, you will need to configure and set it up, including requesting access to the Foundation Model(s). Here is the chat application. As you can see, it uses the RedisChatMessageHistory component. JavaScript import { BedrockChat } from "@langchain/community/chat_models/bedrock"; import { RedisChatMessageHistory } from "@langchain/redis"; import { ConversationChain } from "langchain/chains"; import { BufferMemory } from "langchain/memory"; import prompt from "prompt"; import { ChatPromptTemplate, MessagesPlaceholder, } from "@langchain/core/prompts"; const chatPrompt = ChatPromptTemplate.fromMessages([ [ "system", "The following is a friendly conversation between a human and an AI.", ], new MessagesPlaceholder("chat_history"), ["human", "{input}"], ]); const memory = new BufferMemory({ chatHistory: new RedisChatMessageHistory({ sessionId: new Date().toISOString(), sessionTTL: 300, host: "localhost", port: 6379, }), returnMessages: true, memoryKey: "chat_history", }); const model = "anthropic.claude-3-sonnet-20240229-v1:0" const region = "us-east-1" const langchainBedrockChatModel = new BedrockChat({ model: model, region: region, modelKwargs: { anthropic_version: "bedrock-2023-05-31", }, }); const chain = new ConversationChain({ llm: langchainBedrockChatModel, memory: memory, prompt: chatPrompt, }); while (true) {
prompt.start({noHandleSIGINT: true}); const {message} = await prompt.get(['message']); const response = await chain.invoke({ input: message, }); console.log(response); } Run the application: node chat.js Start a conversation. If you peek into Valkey, notice that the conversations are saved in a List: valkey-cli keys * valkey-cli LRANGE <enter list name> 0 -1 Don't run keys * in production; it's just for demo purposes.

Using an iovalkey Implementation for Chat History

The current implementation uses the node-redis client, but I wanted to try out the iovalkey client. I am not a JS/TS expert, but it was simple enough to port the existing implementation. You can refer to the code on GitHub. As far as the client (chat) app is concerned, I only had to make a few changes to switch the implementation: Comment out import { RedisChatMessageHistory } from "@langchain/redis"; Add import { ValkeyChatMessageHistory } from "./valkey_chat_history.js"; Replace RedisChatMessageHistory with ValkeyChatMessageHistory (while creating the memory instance) It worked the same way as above. Feel free to give it a try!

Wrapping Up

It's still early days for Valkey (at the time of writing), and there is a long way to go. I'm interested in how the project evolves and also in the client ecosystem for Valkey. Happy building!

By Abhishek Gupta
A Comprehensive Look at the Top React Boilerplates in 2024

Recently, React has become one of the most beloved interface frameworks of all time. According to a 2023 survey, React.js is used by 40.58% of developers worldwide, making it one of the most popular web frameworks. Developed by Facebook, React.js is also relied upon by other tech giants such as PayPal, Uber, Instagram, and Airbnb for their user interfaces. Undoubtedly, its widespread adoption and strong community support have been facilitated by React's combination of productivity, component-based architecture, and declarative syntax. This means developers are building projects on React more than ever before. The React library is non-opinionated by design, meaning that "out of the box" it doesn't include practically any additional features beyond the core functionality of defining and managing components. Therefore, it's easy to make mistakes without knowing best practices for prop passing, decomposition, structuring React application files, scaling the application as a whole, and other nuances. These pitfalls can be avoided by using a boilerplate that contains built-in functions and configurations, providing a comprehensive foundation with core tools and libraries, optimizing the development process, and allowing developers to focus on building their application logic rather than dealing with initial setup and configuration. In other words, it serves as a standardized starting point for initiating application development. Searching for "react-boilerplate" on GitHub yields 44.8k repositories at the moment. The question arises as to which template to choose for development, one that fits your application and is good for scalability and future maintenance.

Types of React Boilerplates

In the past, the most commonly used way to start React projects was create-react-app (CRA), a popular and officially supported boilerplate by Facebook.
However, the new React documentation, published on March 16, 2023, no longer recommends CRA as the best solution for creating React applications. Let's consider the alternatives, compare them, and decide on the best way to start a project. Delving into the various aspects of React boilerplates, let's consider the criteria by which they can be divided:

Libs and Configs

Minimalistic Boilerplates

Minimalistic boilerplates provide basic configurations for a React project, including basic setups (such as Webpack, Babel, and ESLint). They assume that developers will add certain libraries and features as needed. The majority of boilerplates fall into this category.

Feature-Rich Boilerplates

Feature-rich boilerplates come with pre-configured additional libraries and tools. These may include state management (e.g., Redux), routing (React Router), and testing, and may also include basic UI components and pages, further speeding up development by providing common UI elements and layouts.

Authentication and Registration

Boilerplates with auth and registration include components for login, signup, and user sessions. Boilerplates without auth leave the authentication implementation to developers.

Full-Stack vs. Frontend-Only

Full-stack boilerplates provide a comprehensive solution for building web applications, covering both the front end (React) and the back end. Frontend-only boilerplates focus solely on the React interface; developers need to integrate them with the desired server.

UI Components Libraries

Boilerplates with UI components include a full set of UI components that adhere to consistent design patterns (e.g., buttons, forms, modals). Boilerplates without UI components leave the development of components entirely to the developer using the boilerplate.

Paid vs. Free

Free/open-source boilerplates are freely available, community-supported, and often well-maintained.
Paid boilerplates: some commercial templates offer additional features, premium support, or extended functionality. Based on the above classification, it can be said that the most popular React boilerplates, such as Vite, Create React App (CRA), Create Next App, and Razzle, include only the basic libraries and configurations necessary to start developing with React (minimalistic boilerplates).

React Template Selection Criteria

Deciding which boilerplate to use during development can be quite challenging, because it's not just about creating an application but also about scaling and maintaining it afterward. So how do you choose the appropriate solution from the given variety of existing boilerplates? Here are the key points we suggest paying attention to when choosing a boilerplate to start your project: Support and maintenance options: is the project regularly updated? Performance scores Code quality (structure cleanliness, scalability, code organization) Production readiness: is the project ready for production use right now? Availability of features such as authentication, routing, internationalization, form handling, testing, basic pages, and UI components; the list could go on, and you just need to determine which ones are needed for your project and look for them in the boilerplate.

React Project Scaffolding Tools

The initial step in developing React applications typically involves choosing among Vite, Create React App, Create Next App, or Razzle as the foundation. They provide framework-like functionality, particularly regarding setting up the initial project structure, configuring build tools, and providing development servers. Vite focuses on providing an extremely fast development server and workflow speed in web development. It uses native ES module imports during development, speeding up the startup time.
Create React App (CRA) abstracts away the complexity of configuring Webpack, Babel, and other build tools, allowing developers to focus on writing React code. It includes features such as hot module reloading for efficient development. Next.js is a React framework for building server-rendered and statically generated web applications. create-next-app configures Next.js projects with sensible defaults, including features like server-side rendering (SSR), file-based routing, and API routes. Razzle is a build tool that also simplifies server-side rendering. It abstracts away the complexity of configuring server-side rendering settings and allows developers to easily create universal JavaScript applications. Razzle supports features like code splitting, CSS-in-JS, and hot module replacement, making it suitable for building React applications that require server-side rendering. The build tools mentioned above are often referred to as React boilerplates. Since they only abstract the complexities of setup away, provide basic configurations, and optimize build workflows, they are not very feature-rich and do not contain additional features themselves. Therefore, according to the classification provided above, we classify them as minimalistic boilerplates. Essentially, they often serve as boilerplate templates; that is, they are great tools for creating more feature-rich React boilerplates.

Table of Selected Boilerplates

Next, we will consider React boilerplates that do not charge a license fee or paywall their features, and we also take into account the date of the last update (not more than six months before writing).
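For reference, each of these tools scaffolds a new project with a single command. These are the commonly documented invocations at the time of writing; exact flags and package versions may change between releases:

```shell
# Vite (React template)
npm create vite@latest my-app -- --template react

# Create React App
npx create-react-app my-app

# Next.js
npx create-next-app@latest my-app

# Razzle
npx create-razzle-app my-app
```

In each case, my-app is a placeholder project name; follow the tool's interactive prompts where they appear.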
Based on this, we have taken into consideration 12 boilerplates*:

| Name | Stars | Contributors | Last Commit Date | Open Issues | About |
|---|---|---|---|---|---|
| extensive-react-boilerplate | 148 | 5 | 29-04-2024 | 2 | Extensive React Boilerplate: ✔️NextJS ✔️Auth ✔️I18N ✔️MUI ✔️Forms |
| React Starter Kit | 22.5k | 5 | 15-02-2024 | 2 | The web's popular Jamstack front-end template for building web applications with React |
| react-redux-saga-boilerplate | 606 | 6 | 06-02-2024 | - | Starter kit with react-router, react-helmet, redux, redux-saga and styled-components |
| Next-js-Boilerplate | 7k | 24 | 05-04-2024 | 1 | Boilerplate and Starter for Next.js 14+ with App Router/Page Router, Tailwind CSS 3.4, TypeScript |
| landy-react-template | 1.2k | 1 | 06-04-2024 | 1 | Landy is an open-source React landing page template designed for developers and startups |
| core | 308 | 6 | 18-04-2024 | 6 | Boilerplate for React/TypeScript, built on Vite |
| nextjs-boilerplate | 134 | 1 | 24-04-2024 | - | Next.js 14+ boilerplate with TypeScript, husky, lint-staged, eslint, prettier, jest, react-testing-library, storybook, gh-action and plop |
| react-pwa | 511 | 6 | 09-01-2024 | 8 | Starter kit for modern web applications |
| Vitamin | 499 | 5 | 12-04-2024 | 1 | Opinionated Vite starter template |
| next-saas-stripe-starter | 830 | 1 | 26-04-2024 | - | An open-source SaaS starter built using Next.js 14, Prisma, Planetscale, Auth.js v5, Resend, React Email, Shadcn/ui, Stripe, and Server Actions |
| gatsby-starter-apple | 133 | 3 | 11-04-2024 | - | Gatsby blog starter kit with a beautiful responsive design |
| fullstack-typescript | 360 | 7 | 28-04-2024 | 4 | Full-stack React with TypeScript starter kit |

* As of April 2024.

Comparison of Boilerplates by Features

Now let's take a closer look at the features developers can get from using boilerplates and what else needs to be taken into account:

- API integration: Some templates may contain configurations for integrating with specific APIs or server
services.
- State management solutions: Options like Redux, MobX, Recoil, or built-in React state management; asynchronous solutions such as React Query are also hard to ignore.
- Testing configuration: Predefined testing setups, or none at all
- Authentication and authorization: Whether user authentication and authorization are provided and how they are handled, in particular whether there is integration with specific authentication libraries
- Internationalization (i18n) and localization: The ability to support multiple languages using libraries like react-i18next or react-intl
- ESLint rules compliance: Allows you not only to detect and fix formatting problems but also to identify potential bugs
- Styling solutions: Support for CSS modules, styled-components, or UI libraries, ensuring easy and effective style reuse
- Type safety: Using TypeScript to provide static typing during development, and classes or modules to create more scalable and reliable applications
- App theme selection: Allowing users to switch between light and dark themes based on their preferences or automatic settings
- Ready-made form components: Reusable form components reduce code duplication and promote standardization; they may also include built-in validation and error handling, making development more reliable.
- UI component libraries: Ready-made, customizable components, such as buttons and modal windows, that developers can easily integrate into their applications, saving time and effort on designing and coding these elements from scratch

We analyzed each boilerplate and obtained the following table:

| Feature | extensive-react-boilerplate | React Starter Kit | react-redux-saga-boilerplate | Next-js-Boilerplate | landy-react-template | core | nextjs-boilerplate | react-pwa | Vitamin | next-saas-stripe-starter | gatsby-starter-apple | fullstack-typescript |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Documentation | + | - | - | + | - | + | + | + | - | + | +/- | - |
| Authentication features | + | + | - | + | - | - | - | - | - | + | - | - |
| Social sign-in | + | + | - | + | - | - | - | - | - | + | - | - |
| Internationalization | + | - | - | + | + | - | - | - | - | - | - | - |
| User profile | + | - | - | - | - | - | - | - | - | + | - | - |
| Forms | + | - | - | + | + | - | - | - | - | + | - | - |
| State management | + | + | + | - | - | - | - | + | - | + | - | - |
| Tests | + | - | + | + | - | - | - | + | + | - | - | - |
| UI components | + | + | + | - | + | - | - | + | - | + | - | - |
| ESLint | + | + | + | + | - | + | - | + | + | + | + | + |
| Paid | - | - | - | + | - | - | - | - | - | - | - | - |
| Styled-components | + | - | + | - | + | - | - | + | + | - | + | + |
| TypeScript | + | + | + | + | + | + | + | + | + | + | + | + |
| Themes | + | + | + | + | - | - | - | + | + | + | + | - |
| UI component library | Material UI | Material UI | @gilbarbara/components | - | antd | - | Tailwind CSS | Material UI | Tailwind CSS | @radix-ui | - | Material UI |

Description of Boilerplates From the Table

Extensive-react-boilerplate

This React boilerplate is designed for all types of projects. It is not only fully compatible with the backend boilerplate nestjs-boilerplate but also stands as an independent solution, which is one of its main advantages.
This template offers a wide range of functionalities, such as:

- User authentication and authorization, including sign-in with Google or Facebook accounts
- Private or public page access settings
- ESLint setup with custom rules to enhance code efficiency and cleanliness
- Type safety to ensure the reliability of the written code
- Project localization using a custom useLanguage hook
- E2E testing support
- Light or dark mode at the user's discretion
- A library of controlled components based on MUI, integrated with react-hook-form by default, so there is no longer a need to spend extra time connecting input fields to controllers
- State management using React Query for handling asynchronous operations
- User management functionality (CRUD)
- Avatar selection and upload with Dropzone support
- Support for the Next.js framework (SSR) for improved application performance and SEO

As you can see from the features above, this boilerplate significantly reduces project startup time (by approximately 193 hours), making it a worthwhile consideration.

Categories: feature-rich boilerplates; boilerplates with auth and registration; frontend-only (with a fully compatible backend boilerplate, so it can also be used as a full-stack boilerplate); free

React-starter-kit

This is a template for creating web applications based on React. It comes with pre-configured setups such as CSS-in-JS, Vitest, VSCode settings, Cloudflare support, and SSR. Firestore is used as the database. It includes implementations of some UI components, like a toolbar or sidebar, based on Joy UI.

Categories: feature-rich boilerplates; boilerplates with auth and registration; frontend-only; free

React-redux-saga-boilerplate

This is a starter project for creating a React application that uses Redux for state management. It provides support for unit and end-to-end testing and react-helmet, and uses the Emotion library for styling, simplifying CSS styling with JavaScript.
It includes custom components, like a header or footer, implemented using styled functionality.

Categories: feature-rich boilerplates; boilerplates without auth; frontend-only; free

Next-js-Boilerplate

This boilerplate has a flexible code structure where you only need to select and keep the functionality you need. It supports integration with Tailwind CSS, authentication with Clerk, and is compatible with SQLite, PostgreSQL, and MySQL databases. Unit testing is done using Jest, and the Zod library is used for describing validation schemas.

Categories: feature-rich boilerplates; boilerplates with auth and registration; frontend-only; free

Landy-react-template

This boilerplate comes with multilingual support and smooth animations, and all content is stored in JSON files, allowing users to manage texts without prior knowledge of React.js. It contains a set of its own components (button, input, textarea, etc.) built by styling HTML elements with styled-components.

Categories: feature-rich boilerplates; boilerplates without auth; frontend-only; free

Core

This modern template is built on top of Vite, the fast project creation tool. It supports TypeScript for type safety and includes solid configurations for ESLint, Prettier, CommitLint, Husky, and Lint-Staged.

Categories: minimalistic boilerplates; boilerplates without auth; frontend-only; free

Nextjs-boilerplate

This React boilerplate uses Next.js for static page generation. It supports Git message conventions, component generation using Plop, and Tailwind CSS for styling. It has its own Storybook for component documentation.

Categories: minimalistic boilerplates; boilerplates without auth; frontend-only; free

React-pwa

This is a ready-made kit to start your project from scratch. It consists of a minimalistic combination of core libraries, components, and utilities typically needed by developers when creating React applications.
It contains its own HOC for page-level error handling and is built on Vite.

Categories: feature-rich boilerplates; boilerplates without auth; frontend-only; free

Vitamin

This is a starter project containing Tailwind CSS with a basic style reset and a Prettier plugin that automatically organizes your classes. For testing, tools such as Vitest, Testing Library, and Cypress are used, but it does not include a React UI component library.

Categories: minimalistic boilerplates; boilerplates without auth; frontend-only; free

Next-saas-stripe-starter

With this boilerplate, you can extend your project's capabilities with Next.js, Prisma, Planetscale, Auth.js v5, Resend, React Email, Shadcn/ui, and Stripe. It includes a library of components built using Radix UI and Tailwind CSS.

Categories: feature-rich boilerplates; boilerplates with auth and registration; full-stack boilerplates; free

Gatsby-starter-apple

This is a template for creating applications with a nice responsive design; it includes animations for the mobile menu. Its components are styled with styled-components. The boilerplate has good search engine optimization support and RSS feed capabilities.

Categories: minimalistic boilerplates; boilerplates without auth; frontend-only; free

Fullstack-typescript

This boilerplate is a full-stack application for quickly launching your project. It has a library of custom components based on Material UI, and Axios is used for client-server communication. It does not ship a state management library such as Redux or MobX.

Categories: minimalistic boilerplates; boilerplates without auth; full-stack boilerplates; free

Peculiarities of Implementation of Some Features

In general, React templates offer various implementation features aimed at speeding up and standardizing the development process.
They include UI component libraries and encompass a general approach to styling, state management, and basic ESLint configurations.

React UI Component Libraries

The implementation of functionality in React boilerplates often revolves around modular development, where components are designed to be reusable and composable. Analyzing current libraries (and according to this article), Material UI can safely be called the most popular one, with 91.2k GitHub stars and more than 3 million weekly downloads. Thanks to its responsive web design (RWD) support, you can be confident that your application will automatically adapt to various screens and devices.

Styling Solutions

Styling solutions such as CSS modules, styled-components, or Sass are usually included in React boilerplates. They offer different approaches to styling components, providing flexibility and scalability while maintaining component encapsulation.

Advantages of using styled-components as a styling solution:

- The library automatically tracks components rendered on the page and applies only their styles.
- It automatically generates unique class names for styles, eliminating class-name collisions.
- Styles are attached to specific components, simplifying the removal of unused CSS.
- Effortless dynamic styling (the code examples below are from bc-boilerplates).

```javascript
const AvatarInputContainer = styled(Box)(({ theme }) => ({
  display: "flex",
  position: "relative",
  flexDirection: "column",
  alignItems: "center",
  padding: theme.spacing(2),
  marginTop: theme.spacing(2),
  border: "1px dashed",
  borderColor: theme.palette.divider,
  borderRadius: theme.shape.borderRadius,
  cursor: "pointer",
  "&:hover": {
    borderColor: theme.palette.text.primary,
  },
}));
```

Using a component's dynamic props during styling ensures that the style is updated based on the value of a variable.
```typescript
const StyledCollapseBtn = styled("button")<ICollapse>(({ isOpen, theme }) => ({
  justifySelf: "flex-end",
  color: COLOURS.black,
  backgroundColor: "transparent",
  border: "none",
  cursor: "pointer",
  paddingLeft: theme.spacing(2.5),
  position: "absolute",
  bottom: theme.spacing(3),
  left: isOpen ? "150px" : "unset",
}));
```

Styled-components also allows reusing styles from one component in another, or letting one component affect another (parent-child relationships).

```javascript
const Link = styled.a`
  display: flex;
  align-items: center;
  padding: 5px 10px;
  background: papayawhip;
  color: #BF4F74;
`;

const Icon = styled.svg`
  flex: none;
  transition: fill 0.25s;
  width: 48px;
  height: 48px;

  ${Link}:hover & {
    fill: rebeccapurple;
  }
`;
```

State Management

State management is another important aspect that simplifies application state handling, providing scalability and maintainability, especially in complex applications. Usually, Redux, MobX, and Zustand come to mind when choosing a state management tool. However, these are client-state libraries; compared to a tool like React Query, using them to store asynchronous server data may not be as efficient.

React Query is a server-state library used in some boilerplates, such as bc-boilerplates. It not only manages asynchronous operations between the server and the client but also provides ready-to-use functionality for fetching, caching, and updating data in React and Next.js applications. With just a few lines of code, React Query replaces the boilerplate code otherwise needed to manage cached server data in your client state.

ESLint Rules in Boilerplates

The value of ESLint during project development also shows in writing custom rules. Since ESLint is highly functional and flexible, you can create not only formatting rules but also rules that encode internal project decisions.
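For context, selector/message pairs like the ones shown in the examples that follow are the entry format of ESLint's built-in no-restricted-syntax rule. A minimal configuration fragment might look like this (the selector below is an illustrative assumption, not one of the boilerplate's actual rules):

```json
{
  "rules": {
    "no-restricted-syntax": [
      "error",
      {
        "selector": "CallExpression[callee.object.name=console][callee.property.name=log]",
        "message": "Remove console.log before committing; use the project logger instead."
      }
    ]
  }
}
```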
For example, when working with forms, it is possible to warn developers about potential unnecessary renders and incorrect solutions when working with objects, or simply to point out unused imports. extensive-react-boilerplate addresses such issues as follows:

Warn about incorrect usage of patterns:

```json
{
  "selector": "ConditionalExpression[consequent.type=Literal][consequent.value=true][alternate.type=Literal][alternate.value=false]",
  "message": "Do not use \"condition ? true : false\". Simplify \"someVariable === 42 ? true : false\" to \"someVariable === 42\""
},
{
  "selector": "JSXElement[openingElement.name.property.name=Provider] JSXElement[openingElement.name.name]",
  "message": "Do not put your regular components inside Context \".Provider\". Create a new component, for example ComponentProvider. Put the Provider's logic into ComponentProvider. Render \"{children}\" instead of the regular component. Wrap the regular component via the new ComponentProvider. Example: \"src/services/auth/auth-provider\""
},
{
  "selector": "Property[key.name=/^(padding|margin|paddingLeft|paddingRight|paddingTop|paddingBottom|paddingVertical|marginLeft|marginRight|marginTop|marginBottom|marginVertical)$/][value.type=/^(Literal|UnaryExpression)$/]:not([value.value=\"0 !important\"]):not([value.value=\"0\"]):not([value.value=\"0 auto\"]):not([value.value=\"auto\"])",
  "message": "Use theme.spacing() instead of literal."
},
{
  "selector": "CallExpression[callee.name=/^(useQuery|useInfiniteQuery)$/] Property[key.name=queryKey]:not(:has(Identifier[name=key]))",
  "message": "Use key created via createQueryKeys function instead of your solution"
},
{
  "selector": "CallExpression[callee.name=refresh]",
  "message": "Do not use refresh() function for update or change result in react-query. Use \"queryClient.resetQueries\" or pass new filter data to queryKey."
},
{
  "selector": "ExpressionStatement[expression.callee.object.name=JSON][expression.callee.property.name=parse][expression.arguments.0.callee.object.name=JSON][expression.arguments.0.callee.property.name=stringify]",
  "message": "Do not use JSON.parse(JSON.stringify(...)) for deep copy. Use structuredClone instead."
}
```

Inform about the possibility of uncontrolled renders:

```json
{
  "selector": "VariableDeclaration[declarations.0.init.callee.name=useForm] ~ VariableDeclaration[declarations.0.init.callee.name=useFieldArray]",
  "message": "\"useFieldArray\" in main form component (which use \"useForm\") will re-render the whole form component. Move your useFieldArray's logic to separate component."
},
{
  "selector": "VariableDeclaration[declarations.0.init.callee.name=useForm] ~ VariableDeclaration[declarations.0.init.callee.name=useController]",
  "message": "\"useController\" in main form component (which use \"useForm\") will re-render the whole form component. Move your useController's logic to separate component."
},
{
  "selector": "VariableDeclaration[declarations.0.init.callee.name=useForm] ~ VariableDeclaration[declarations.0.init.callee.name=useFormContext]",
  "message": "\"useFormContext\" in main form component (which use \"useForm\") will re-render the whole form component. Move your useFormContext's logic to separate component."
},
{
  "selector": "VariableDeclaration[declarations.0.init.callee.name=useForm] ~ VariableDeclaration[declarations.0.init.callee.name=useFormState]",
  "message": "\"useFormState\" in main form component (which use \"useForm\") will re-render the whole form component. Move your useFormState's logic to separate component."
},
{
  "selector": "CallExpression[callee.name=useForm][arguments.length=0], CallExpression[callee.name=useForm][arguments.length=1]:not(:has(Property[key.name=defaultValues]))",
  "message": "Pass object with \"defaultValues\" for correct \"formState\" behavior. More info here: https://react-hook-form.com/api/useform/formstate#main"
}
```

Conclusion

The choice of an effective React template is crucial for the success of your project. Instead of reinventing the wheel, leveraging the power of a well-chosen boilerplate can significantly speed up your development process and provide a solid foundation. When selecting a boilerplate, we recommend familiarizing yourself with its directory structure and configuration files to understand its underlying foundation, ease of integration, modularity, and alignment with your technical requirements. Consider whether the available features cover the functions you need: this saves development time and potentially gives you well-maintained, tested code.

Since the question of how to combine multiple boilerplates often arises due to the lack of comprehensive functionality in such templates, the BC Boilerplates team proposed a solution in the form of extensive-react-boilerplate. In our opinion, it can carve out its niche among well-known counterparts and become a worthy competitor deserving of your attention. We invite you to try it out and look forward to your feedback in the form of a new star.

By Olena Vl
Build a Time-Tracking App With ClickUp API Integration Using Openkoda

Is it possible to build a time-tracking app in just a few hours? It is, and in this article, I'll show you how! I'm a senior backend Java developer with 8 years of experience in building web applications, and I'll show you how satisfying it can be to save a lot of time on building your next one.

The approach I use is as follows: I want to create a time-tracking application (I called it Timelog) that integrates with the ClickUp API. It offers a simple functionality that will be very useful here: creating time entries remotely. To save time, I will use some out-of-the-box functionality that the Openkoda platform offers. These features are designed with developers in mind; using them, I can skip building the standard features that appear in every web application (over and over again) and focus on the core business logic instead. I will use the following pre-built features for my application:

- Login/password authentication
- User and organization management
- Different user roles and privileges
- Email sender
- Logs overview
- Server-side code editor
- Web endpoints creator
- CRUD generator

Let's get started!

Timelog Application Overview

Our sample internal application creates a small but complete system that can then be easily extended, both model-wise and with additional business logic or custom views. The main focus of the application is to:

- Store the data required to communicate with the ClickUp API.
- Assign users to their tickets.
- Post new time entries to the external API.

To speed up the process of building the application, we relied on some of the out-of-the-box functionality mentioned above. At this stage, we used the following:

- Data model builder (Form): Allows us to define data structures without recompiling the application, with the ability to adjust the data schema on the fly
- Ready-to-use management functionality: With this one, we can forget about developing things like authentication, security, and a standard dashboard view.
- Server-side code editor: Used to develop a dedicated service responsible for the ClickUp API integration; it is coded in JavaScript entirely within the Openkoda UI.
- WebEndpoint builder: Allows us to create a custom form handler that uses the server-side code service to post time-entry data to the ClickUp servers instead of storing it in our internal database

Step 1: Setting Up the Architecture

To implement the functionality described above and store the required data, we designed a simple data model consisting of the following five entities. ClickUpConfig, ClickUpUser, Ticket, and Assignment store the keys and IDs required for connections and messages sent to the ClickUp API. The last one, TimeEntry, takes advantage of a ready-to-use HTML form (a Thymeleaf fragment), saving a lot of development time. The detailed structure of the data model for the Timelog ClickUp integration is as follows:

ClickUpConfig
- apiKey: ClickUp API key
- teamId: ID of the space in ClickUp to create time entries in

ClickUpUser
- userId: Internal ID of a user
- clickUpUserId: ID of a user assigned to a workspace in ClickUp

Ticket
- name: Internal name of the ticket
- clickUpTaskId: ID of a task in ClickUp to create time entries for

Assignment
- userId: Internal ID of a user
- ticketId: Internal ID of a ticket

TimeEntry
- userId: Internal ID of a user
- ticketId: Internal ID of a ticket
- date: Date of a time entry
- durationHours: Time entry duration provided in hours
- durationMinutes: Time entry duration provided in minutes
- description: Short description for the created time entry

We want to end up with five data tiles on the dashboard:

Step 2: Integrating With ClickUp API

We integrated our application with the ClickUp API, specifically with its endpoint for creating time entries in ClickUp. To connect the Timelog app to our ClickUp workspace, we need to provide the API key.
This can be done using either a personal API token or a token generated by creating an App in the ClickUp dashboard. For information on how to retrieve one of these, see the official ClickUp documentation. For our application to be able to create time entries in our ClickUp workspace, we need to provide a few ClickUp IDs:

- teamId: The first ID value in the URL after accessing your workspace
- userId: To check a user's ClickUp ID (Member ID), go to Workspace -> Manage Users. In the Users list, open the user's Settings and select Copy Member ID.
- taskId: The task ID is accessible in three places on the dashboard: the URL, the task modal, and the tasks list view. See the ClickUp Help Center for detailed instructions. You can recognize a task ID by its # prefix; we use the ID without the prefix.

Step 3: Data Model Magic With Openkoda

Openkoda uses the Byte Buddy library to dynamically build entity and repository classes for dynamically registered entities during the runtime of our Spring Boot application. Here is a short snippet of entity class generation in Openkoda (the whole service class is available on their GitHub).

```java
dynamicType = new ByteBuddy()
        .with(SKIP_DEFAULTS)
        .subclass(OpenkodaEntity.class)
        .name(PACKAGE + name)
        .annotateType(entity)
        .annotateType(tableAnnotation)
        .defineConstructor(PUBLIC)
        .intercept(MethodCall
                .invoke(OpenkodaEntity.class.getDeclaredConstructor(Long.class))
                .with((Object) null));
```

Openkoda provides a custom form builder syntax that defines the structure of an entity. This structure is then used to generate both entity and repository classes, as well as HTML representations of CRUD views, such as a paginated table of all records, a settings form, and a simple read-only view. All five entities from the data model described earlier have been registered this way, using only the form builder syntax. The form builder snippet for the Ticket entity is presented below.
```javascript
a => a
  .text("name")
  .text("clickUpTaskId")
```

The definition above results in an entity named Ticket with the set of default fields from OpenkodaEntity plus two custom ones, "name" and "clickUpTaskId". The database table structure for the dynamically generated Ticket entity is as follows:

```
                          Table "public.dynamic_ticket"
      Column      |           Type           | Collation | Nullable |        Default
------------------+--------------------------+-----------+----------+-----------------------
 id               | bigint                   |           | not null |
 created_by       | character varying(255)   |           |          |
 created_by_id    | bigint                   |           |          |
 created_on       | timestamp with time zone |           |          | CURRENT_TIMESTAMP
 index_string     | character varying(16300) |           |          | ''::character varying
 modified_by      | character varying(255)   |           |          |
 modified_by_id   | bigint                   |           |          |
 organization_id  | bigint                   |           |          |
 updated_on       | timestamp with time zone |           |          | CURRENT_TIMESTAMP
 click_up_task_id | character varying(255)   |           |          |
 name             | character varying(255)   |           |          |
```

The last step of a successful entity registration is to refresh the Spring context so that it recognizes the new repository beans and Hibernate acknowledges the entities. This can be done by restarting the application from the Admin Panel (Monitoring section). Our final result is an auto-generated full CRUD for the Ticket entity.

Auto-generated Ticket settings view:

Auto-generated all-Tickets list view:

Step 4: Setting Up Server-Side Code as a Service

We implemented the ClickUp API integration using Openkoda Server-Side Code, keeping the API call logic separate as a service. The exported JS functions can then be used in the logic of custom form view request handlers. We created a JavaScript service that provides the functions responsible for ClickUp API communication. Openkoda uses GraalVM to run the JS code fully on the backend server. Our ClickupAPI server-side code service has only one function (postCreateTimeEntry), which is all our Timelog application requires.
```javascript
export function postCreateTimeEntry(apiKey, teamId, duration, description, date, assignee, taskId) {
  let url = `https://api.clickup.com/api/v2/team/${teamId}/time_entries`;
  let timeEntryReq = {
    duration: duration,
    description: '[Openkoda Timelog] ' + description,
    billable: true,
    start: date,
    assignee: assignee,
    tid: taskId,
  };
  let headers = { Authorization: apiKey };
  return context.services.integrations.restPost(url, timeEntryReq, headers);
}
```

To use such a service later on in WebEndpoints, it is enough to use the standard JS import expression: import * as clickupAPI from 'clickupAPI';.

Step 5: Building the Time Entry Form With Custom GET/POST Handlers

Here, we prepare the essential screen of our demo application: the time entry form, which posts data to the ClickUp API. Everything is done in the Openkoda user interface by providing simple HTML content and a few JS code snippets.

The View

The HTML fragment is as simple as the one below. We used a ready-to-use form Thymeleaf fragment (see the form tag); the rest of the code is the standard structure of a Thymeleaf template.

```html
<!--DEFAULT CONTENT-->
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org"
      xmlns:layout="http://www.ultraq.net.nz/thymeleaf/layout"
      lang="en" layout:decorate="~{${defaultLayout}}">
<body>
<div class="container">
    <h1 layout:fragment="title"/>
    <div layout:fragment="content">
        <form th:replace="~{generic-forms::generic-form(${TimeEntry}, 'TimeEntry', '', '', '', 'Time Entry', #{template.save}, true)}"></form>
    </div>
</div>
</body>
</html>
```

HTTP Handlers

Having the simple HTML code for the view, we need to provide the actual form object required by the generic form fragment (${TimeEntry}). We do it inside a GET endpoint as a first step, and after that, we set the currently logged-in user's ID so there is a default value selected when entering the time entry view.
```javascript
flow
  .thenSet("TimeEntry", a => a.services.data.getForm("TimeEntry"))
  .then(a => a.model.get("TimeEntry").dto.set("userId", a.model.get("userEntityId")))
```

Lastly, the POST endpoint is registered to handle the actual POST request sent from the form view (the HTML code presented above). It implements the scenario where a user opens the time entry form, fills in the data, and sends it to the ClickUp server. The POST endpoint JS code:

1. Receives the form data.
2. Reads additional configuration from the internal database (such as the API key, team ID, or ClickUp user ID).
3. Prepares the data to be sent.
4. Triggers the clickupAPI service to communicate with the remote API.

```javascript
import * as clickupAPI from 'clickupAPI';

flow
  .thenSet("clickUpConfig", a => a.services.data.getRepository("clickupConfig").search(
    (root, query, cb) => {
      let orgId = a.model.get("organizationEntityId") != null ? a.model.get("organizationEntityId") : -1;
      return cb.or(cb.isNull(root.get("organizationId")), cb.equal(root.get("organizationId"), orgId));
    }).get(0)
  )
  .thenSet("clickUpUser", a => a.services.data.getRepository("clickupUser").search(
    (root, query, cb) => {
      let userId = a.model.get("userEntityId") != null ? a.model.get("userEntityId") : -1;
      return cb.equal(root.get("userId"), userId);
    })
  )
  .thenSet("ticket", a => a.form.dto.get("ticketId") != null
    ? a.services.data.getRepository("ticket").findOne(a.form.dto.get("ticketId"))
    : null)
  .then(a => {
    let durationMs =
      (a.form.dto.get("durationHours") != null ? a.form.dto.get("durationHours") * 3600000 : 0)
      + (a.form.dto.get("durationMinutes") != null ? a.form.dto.get("durationMinutes") * 60000 : 0);
    return clickupAPI.postCreateTimeEntry(
      a.model.get("clickUpConfig").apiKey,
      a.model.get("clickUpConfig").teamId,
      durationMs,
      a.form.dto.get("description"),
      a.form.dto.get("date") != null
        ? (new Date(a.services.util.toString(a.form.dto.get("date")))).getTime()
        : Date.now(),
      a.model.get("clickUpUser").length ? a.model.get("clickUpUser").get(0).clickUpUserId : -1,
      a.model.get("ticket") != null ? a.model.get("ticket").clickUpTaskId : '');
  })
```

Step 6: Our Application Is Ready!

This is it! I built a complete application that stores the data of users, their ticket assignments, and all properties required for the ClickUp API connection. It provides a Time Entry form covering ticket selection, date, duration, and description inputs for a single time entry and sends the data from the form straight to the integrated API. Not to forget all of the pre-built functionality available in Openkoda, like authentication, user account management, logs overview, etc. As a result, the total time to create the Timelog application was only a few hours.

What I have built is just a simple app with one main functionality, but there are many ways to extend it, e.g., by adding new structures to the data model, developing more of the ClickUp API integration, or creating more complex screens like the calendar view below. If you follow almost exactly the same scenario I presented in this case, you will be able to build any other simple (or not so simple) business application, saving time on repetitive and boring features and focusing on the core business requirements. I can think of several applications that could be built in the same way: a legal document management system, a real estate application, or a travel agency system, just to name a few.

As an experienced software engineer, I always enjoy implementing new ideas and seeing the results quickly. In this case, that is exactly what I did: I spent minimal time creating a fully functional application tailored to my needs and skipped the monotonous work. The .zip package with all code and configuration files is available on my GitHub.

By Martyna Szczepanska
Bridging JavaScript and Java Packages: An Introduction to Npm2Mvn

Integrating assets from diverse platforms and ecosystems presents a significant challenge in enterprise application development, where projects often span multiple technologies and languages. Seamlessly incorporating web-based assets such as JavaScript, CSS, and other resources is a common yet complex requirement in Java web applications. The diversity of development ecosystems — each with its tools, package managers, and distribution methods — complicates including these assets in a unified development workflow. This fragmentation can lead to inefficiencies, increased development time, and potential for errors as developers navigate the intricacies of integrating disparate systems. Recognizing this challenge, the open-source project Npm2Mvn offers a solution to streamline the inclusion of NPM packages into Java workspaces, thereby bridging the gap between the JavaScript and Java ecosystems. Understanding NPM and Maven Before diving into the intricacies of Npm2Mvn, it's essential to understand the platforms it connects: NPM and Maven. NPM (Node Package Manager) is the default package manager for Node.js, primarily used for managing dependencies of various JavaScript projects. It hosts thousands of packages developers provide worldwide, facilitating the sharing and distribution of code. NPM simplifies adding, updating, and managing libraries and tools in your projects, making it an indispensable tool for JavaScript developers. Maven, on the other hand, is a powerful build automation tool used primarily for Java projects. It goes beyond simple build tasks by managing project dependencies, documentation, SCM (Source Code Management), and releases. Maven utilizes a Project Object Model (POM) file to manage a project's build configuration, dependencies, and other elements, ensuring developers can easily manage and build their Java applications. 
The Genesis of Npm2Mvn Npm2Mvn emerges as a solution to a familiar challenge developers face: incorporating the vast array of JavaScript libraries and frameworks available on NPM into Java projects. While Java and JavaScript operate in markedly different environments, the demand for utilizing web assets (like CSS, JavaScript files, and fonts) within Java applications has grown exponentially. It is particularly relevant for projects that require rich client interfaces or the server-side rendering of front-end components. Many JavaScript projects are distributed exclusively through NPM, so if, like me, you have found yourself copying and pasting assets from an NPM archive into your Java web application workspace, then Npm2Mvn is just the solution you need. Key Features of Npm2Mvn Designed to automate the transformation of NPM packages into Maven-compatible jar files, Npm2Mvn makes NPM packages readily consumable by Java developers. This process involves several key steps: Standard Maven repository presentation: Utilizing another open-source project, uHTTPD, Npm2Mvn presents itself as a standard Maven repository. Automatic package conversion: When a request for a Maven artifact in the group npm is received, Npm2Mvn fetches the package metadata and tarball from NPM. It then enriches the package with additional metadata required for Maven, such as POM files and MANIFEST.MF. Inclusion of additional metadata: Besides standard Maven metadata, Npm2Mvn adds specific metadata for Graal native images, enhancing compatibility and performance for projects leveraging GraalVM. Seamless integration into local Maven cache: The final jar file, enriched with the necessary metadata, is placed in the local Maven cache, just like any other artifact, ensuring that using NPM packages in Java projects is as straightforward as adding a Maven dependency. 
Benefits for Java Developers Npm2Mvn offers several compelling benefits for Java developers: Access to a vast repository of JavaScript libraries: By bridging NPM and Maven, Java developers can easily incorporate thousands of JavaScript libraries and frameworks into their projects. This access significantly expands the resources for enhancing Java applications, especially for UI/UX design, without leaving the familiar Maven ecosystem. Simplified dependency management: Managing dependencies across different ecosystems can be cumbersome. Npm2Mvn streamlines this process, allowing developers to handle NPM packages with the Maven commands they are accustomed to. Enhanced productivity: By automating the conversion of NPM packages to Maven artifacts, Npm2Mvn saves developers considerable time and effort. This efficiency boost enables developers to focus more on building their applications than wrestling with package management intricacies. Real-world applications: Projects like Fontawesome, Xterm, and Bootstrap, staples for frontend development, can seamlessly integrate into Java applications. How To Use Using Npm2Mvn is straightforward. Jadaptive, the project's developers, host a repository here. This repository is open and free to use. You can also download a copy of the server to host in a private build environment. To use this service, add the repository entry to your POM file. XML <repositories> <repository> <id>npm2mvn</id> <url>https://npm2mvn.jadaptive.com</url> </repository> </repositories> Now, declare your NPM packages. For example, I am including the JQuery NPM package here. XML <dependency> <groupId>npm</groupId> <artifactId>jquery</artifactId> <version>3.7.1</version> </dependency> That's all we need to include and version-manage NPM packages on the classpath. 
Consuming the NPM Resources in Your Java Application The resources of the NPM package are placed in the jar under a fixed prefix, allowing multiple versions of multiple NPM packages to be available to the JVM via the classpath or module path. For example, if the NPM package bootstrap@v5.3.1 contains a resource with the path css/bootstrap.css, then the Npm2Mvn package will make that resource available at the resource path /npm2mvn/npm/bootstrap/5.3.1/css/bootstrap.css. Now that you know the path of the resources in your classpath, you can prepare to consume them in your Java web application by implementing a Servlet or other mechanism to serve the resources from the classpath. How you do this depends on your web application platform and any framework you use. In Spring Boot, we would add a resource handler as demonstrated below. Java @Configuration @EnableWebMvc public class MvcConfig implements WebMvcConfigurer { @Override public void addResourceHandlers(ResourceHandlerRegistry registry) { registry .addResourceHandler("/npm2mvn/**") .addResourceLocations("classpath:/npm2mvn/"); } } With this configuration in a Spring Boot application, we can now reference NPM assets directly in HTML files we use in the application. HTML <script type="text/javascript" src="/npm2mvn/npm/jquery/3.7.1/dist/jquery.min.js"></script> But What About NPM Scopes? NPM version 2 supports scopes which, according to their website: ... allows you to create a package with the same name as a package created by another user or organization without conflict. In the examples above, we are not using scopes. If the package you require uses a scope, you must modify your pom.xml dependency and the resource path. Taking the FontAwesome project as an example, to include the @fortawesome/fontawesome-free module in our Maven build, we modify the groupId to include the scope as demonstrated below. 
XML <dependency> <groupId>npm.fortawesome</groupId> <artifactId>fontawesome-free</artifactId> <version>6.5.1</version> </dependency> Similarly, in the resource path, we change the second path value from 'npm' to the same groupId we used above. HTML <link rel="stylesheet" href="/npm2mvn/npm.fortawesome/fontawesome-free/6.5.1/css/all.css"/> You can download a full working Spring Boot example that integrates the Xterm NPM module and add-ons from GitHub. Dependency Generator The website at the hosted version of Npm2Mvn provides a useful utility that developers can use to get the correct syntax for the dependencies needed to build the artifacts. Here we have entered the scope, package, and version to get the correct dependency entry for the Maven build. If the project does not have a scope, simply leave the first field blank. Conclusion Npm2Mvn bridges the JavaScript and Java worlds, enhancing developers' capabilities and project possibilities. By simplifying the integration of NPM packages into Java workspaces, Npm2Mvn promotes a more interconnected and efficient development environment. It empowers developers to leverage the best of both ecosystems in their applications.

By Lee Painter
Cypress vs. React Testing Library

Purpose and Scope of Cypress Cypress automated testing starts its execution on a Node.js server that interacts with the test runner to run the application and test code in the same event loop. Because both automated and application code run on the same platform, the development team and QA retain complete control over the app being tested. Teams can test back-end functionality by running Node code using Cypress' cy.task() command. The CI/CD dashboard, which graphically provides an overall perspective of the process flow, is another useful feature Cypress offers. The simplicity with which full end-to-end tests may be written is one of the benefits of using Cypress. These tests ensure that your application performs as expected throughout. End-to-end tests can also be used to find bugs that can go undetected when components are tested separately. Using Cypress, a strong and useful tool, one may create complete tests for web-based applications. The specific requirements of your project will dictate the scope of your tests, but Cypress can be used to test every area of your application. Whether you want to concentrate on the user interface (UI) or the underlying functionality, Cypress gives you the freedom to create the tests you need. Advantages and Disadvantages of Cypress Advantages of Cypress Excellent documentation is available from Cypress, and there is no configuration needed to set up dependencies and libraries. QAs or software engineering teams can monitor and verify the behavior of server responses, functions, or timers by using the Spies, Stubs, and Clocks features. Support for cross-browser testing. Cypress runs tests in real time and offers the development or QA teams visual feedback so they may make significant changes. Cypress supports BDD and TDD testing styles. Cypress allows for immediate feedback by running the code as the developer types it. While the tests are running, the Cypress framework takes snapshots. 
A quality assurance tester or software engineer curious about the intricacies of how a command was executed can simply hover over it in the Command Log to examine the detailed log entry that appears. Additionally, Cypress has access to the network layer above the application layer, which enables us to control every network request made to and received from our service. This can be quite useful for trying out other scenarios, such as what would happen if our server had an unforeseen failure. Cypress also automatically waits for commands and assertions before continuing. Disadvantages of Cypress You cannot divide your testing across two superdomains with Cypress. Currently, accessing two different superdomains requires writing two distinct tests. There is not much support for iframes. There aren't as many AI-powered features as some competitors, such as testRigor, which automatically finds the most significant user workflows in your application. Cypress only accepts JavaScript code for building test cases. Example of Cypress Tests Usually, end-to-end tests exercise the whole application (both frontend and backend), and your test interacts with the app similarly to how a user would. Cypress is used to create these tests. 
JavaScript import { generate } from 'task-test-utils'; describe('Task Management Application', () => { it('should allow a standard user to manage tasks', () => { // Generate user and task data const user = generate.user(); const task = generate.task(); // Navigate to the application cy.visitApp(); // Register a new user cy.findByText(/sign up/i).click(); cy.findByLabelText(/username/i).type(user.username); cy.findByLabelText(/password/i).type(user.password); cy.findByText(/submit/i).click(); // Add a new task cy.findByLabelText(/add task/i) .type(task.description) .type('{enter}'); // Verify the task is added cy.findByTestId('task-0').should('have.value', task.description); // Mark the task as complete cy.findByLabelText('mark as complete').click(); // Verify the task is marked as complete cy.findByTestId('task-0').should('have.class', 'completed'); // Additional tests can be added as per requirements // ... }); }); Purpose and Scope of React Testing Library The React Testing Library provides a really simple way to test React components. It offers simple utility functions on top of react-dom and react-dom/test-utils in a way that promotes improved testing techniques. RTL offers only "render," which is akin to Enzyme's "mount," as a method of rendering React components. The React Testing Library's principal objective is to give you confidence by testing your components in the context of user interaction. Users don't care about what happens in the background. All that they focus on and interact with are the outcomes. Instead of relying on the components' internal API or evaluating their state, you'll feel more confident when writing tests based on component output. Managers and teams have reportedly been required to deliver 100% code coverage. The problem is that once coverage goes significantly beyond 70%, you get diminishing returns on your tests. Why is this the case? 
When you constantly aim for perfection, you spend time testing things that really don't need to be checked. Some of those checks are simply unnecessary (ESLint and Flow could catch so many of those bugs). Thus, you and your team will move very slowly while maintaining tests like this. The trade-offs between speed and cost/confidence are quite well-balanced by integration testing. It's advised to concentrate the majority (though certainly not all) of your effort there because of this. There is some blurring of the boundaries between integration and unit testing. Nevertheless, cutting back on your use of mocking will go a long way toward encouraging you to write more integration tests. Whenever you mock something, you lose confidence that the subject of your test and the thing you mocked are actually compatible. The use of functional components, React hooks, classes, or state management libraries is irrelevant to the end user. They expect your app to operate in a way that helps them finish their work. In this context, the React Testing Library takes the end user into consideration when testing the application. The React Testing Library places more of an emphasis on testing the components as the user would. One can search for elements in the DOM by looking for texts, labels, etc. With this methodology, you can check that the component's output and behavior are valid rather than accessing the internals of the components. Given that one constantly checks that the component works from the user's perspective, this can increase the level of confidence in the results of our tests. Example of React Testing Library JavaScript import React, { useState } from 'react'; // Component Name: MessageRevealer // Description: A component that conditionally renders a message based on the state of a checkbox. const MessageRevealer = ({messageContent}) => { // State: isMessageVisible (boolean) // Description: Determines whether the message should be visible or not. 
// Initial Value: false (message is hidden by default) const [isMessageVisible, setMessageVisibility] = useState(false); // Function: toggleMessageVisibility // Description: Toggles the visibility of the message based on the checkbox state. // Parameters: event (object) - the event object from the checkbox input change. const toggleMessageVisibility = event => setMessageVisibility(event.target.checked); // JSX Return // Description: Renders a checkbox input and conditionally renders the message based on isMessageVisible. return ( <div> {/* Label for the checkbox input */} <label htmlFor="messageToggle">Display Message</label> {/* Checkbox Input */} {/* When changed, it toggles the message visibility */} <input id="messageToggle" type="checkbox" onChange={toggleMessageVisibility} // On change, toggle visibility checked={isMessageVisible} // Checked state is bound to isMessageVisible /> {/* Conditional Rendering of Message */} {/* If isMessageVisible is true, render messageContent, otherwise render null */} {isMessageVisible ? 
messageContent : null} </div> ); }; JavaScript // Importing necessary utilities from testing-library and jest-dom import '@testing-library/jest-dom/extend-expect'; import React from 'react'; import { render, fireEvent, screen } from '@testing-library/react'; // Importing the component to be tested import MessageRevealer from '../message-revealer'; // Defining a test suite for the MessageRevealer component test('renders the message content when the checkbox is toggled', () => { // Defining a message to be passed to the component const demoMessage = 'Demo Message'; // Rendering the component with demoMessage as the messageContent prop render(<MessageRevealer messageContent={demoMessage} />); // Asserting that the demoMessage is not in the document initially expect(screen.queryByText(demoMessage)).not.toBeInTheDocument(); // Simulating a click event on the checkbox, which is labelled "Display Message" fireEvent.click(screen.getByLabelText(/display message/i)); // Asserting that the demoMessage is visible in the document after the click event expect(screen.getByText(demoMessage)).toBeVisible(); }); Advantages and Disadvantages of React Testing Library Advantages of React Testing Library Some advantages of using React Testing Library for testing your React applications are: Encourages writing tests from the user's perspective: React Testing Library promotes testing your application as a user would interact with it, rather than focusing on implementation details. This approach results in tests that are more reliable and maintainable. Easy to learn and use: React Testing Library is designed to be easy to learn and use, even for developers new to testing. Its API is simple and intuitive, and the framework provides plenty of examples and documentation to help you get started. 
Supports testing accessibility: React Testing Library includes tools that make it easy to test for accessibility in your React components. This is particularly important for web applications, which must be accessible to users with disabilities. Provides a lightweight solution: React Testing Library is a lightweight solution, which means that it doesn't have many dependencies or require a lot of setup. This makes it easy to integrate with your existing testing setup and to run tests quickly. Works with popular testing tools: React Testing Library is designed to work well with other popular testing tools like Jest and Cypress, making it easy to integrate into your existing testing workflow. Improves code quality: By writing tests with React Testing Library, you can catch bugs and issues early on in the development process, which helps to improve the overall quality of your code. Disadvantages of React Testing Library Limited support for testing complex components. Doesn't cover all aspects of testing. Requires a good understanding of React. Can result in slower test performance. Requires maintenance. Cypress vs. React Testing Library: When To Use Which? Cypress and React Testing Library are popular testing frameworks that can test React applications. While they have their strengths and weaknesses, there are certain situations where one may be more suitable. Here are some general guidelines for when to use each framework: Use Cypress When You need end-to-end testing: Cypress is designed for end-to-end testing, which involves testing the entire application from the user's perspective. If you need to test how multiple components interact with each other or how the application behaves in different scenarios, Cypress may be a better choice. You need to test complex scenarios: Cypress can test more complex scenarios, such as interactions with APIs or databases, which may be more difficult to test with React Testing Library. 
You need a more robust testing solution: Cypress provides more advanced features than React Testing Library, such as visual testing, time-travel debugging, and network stubbing. Cypress may be a better choice if you need a more robust testing solution. Use React Testing Library When You want to test the user interface: React Testing Library is designed to test React components' user interface and interactions. If you want to ensure that your components are rendering and behaving correctly, React Testing Library may be a better choice. You want a lightweight testing solution: React Testing Library is a lightweight testing solution that can be easily integrated into your testing workflow. If you want a testing solution that is easy to set up and use, React Testing Library may be a better choice. You want to test for accessibility: React Testing Library includes tools for testing accessibility in your React components. If you want to ensure that your application is accessible to all users, React Testing Library may be a better choice. You want to perform integration testing: Since integration testing is more granular and does not require running the complete application, use React Testing Library (RTL). Utilizing React Testing Library at a lower level of your application can ensure that your components work as intended. With Cypress, you can test your app deployed in a caching-enabled environment behind CDNs, using data from an API. In Cypress, you would also create an end-to-end journey, a happy path through your app that can boost your confidence after deployment. In general, the choice between Cypress and React Testing Library will depend on your specific testing needs and the complexity of your application. It may be beneficial to combine both frameworks to cover different aspects of testing.
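Cypress' cy.task() command, mentioned earlier as the way to run back-end Node code from your tests, is worth a brief sketch, since the "complex scenarios" advantage above often rests on it. The task name and handler body below are hypothetical; only the on('task', ...) registration and cy.task() invocation are Cypress' actual API.

```typescript
// Task handlers are plain Node functions, so they can reach databases,
// the filesystem, or anything else the browser-side test cannot.
export const tasks = {
  // Hypothetical task: reset test data before a spec runs.
  'db:reset': (): null => {
    // ... truncate tables, reseed fixtures ...
    return null; // a Cypress task must return a value (or null), not undefined
  },
};

// cypress.config.ts (sketch) registers the handlers with the test runner:
//
// import { defineConfig } from 'cypress';
// export default defineConfig({
//   e2e: {
//     setupNodeEvents(on) {
//       on('task', tasks);
//     },
//   },
// });

// In a spec file, the task is then invoked from the browser side:
//
// cy.task('db:reset');
```

Because the handlers are ordinary functions, they can also be unit tested on their own, outside of Cypress.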

By Hamid Akhtar
How To Capture Node.js Garbage Collection Traces

Garbage collection (GC) is a fundamental aspect of memory management in Node.js applications. However, inefficient garbage collection can lead to performance issues, causing application slowdowns and potentially impacting user experience. To ensure optimal performance and diagnose memory problems, it’s essential to study garbage collection traces. In this blog post, we’ll explore various methods for capturing garbage collection traces from Node.js applications. Options To Capture Garbage Collection Traces From Node.js Applications There are three options to capture garbage collection traces from Node.js applications: --trace-gc flag v8 module Performance hook Let’s discuss them in this post. 1. --trace-gc Flag The easiest and most straightforward approach is to pass the --trace-gc flag along with your usual invocation command. For example: node --trace-gc my-script.mjs Once the --trace-gc flag is enabled, your Node.js application will start generating garbage collection traces in the console output. These traces provide valuable insights into memory usage, GC events, and potential performance bottlenecks. 
Garbage collection traces would look something like this: [721159:0x61f0210] 1201125 ms: Scavenge 27.7 (28.8) -> 26.8 (29.8) MB, 0.5 / 0.2 ms (average mu = 0.999, current mu = 0.970) allocation failure [721166:0x5889210] 1201338 ms: Scavenge 30.7 (32.1) -> 29.7 (33.1) MB, 0.6 / 0.3 ms (average mu = 0.998, current mu = 0.972) allocation failure [721173:0x54fc210] 1202608 ms: Scavenge 26.8 (28.3) -> 25.8 (29.3) MB, 0.7 / 0.4 ms (average mu = 0.999, current mu = 0.972) allocation failure [721152:0x54ca210] 1202879 ms: Scavenge 30.5 (31.8) -> 29.6 (32.8) MB, 0.6 / 0.2 ms (average mu = 0.999, current mu = 0.978) allocation failure [721166:0x5889210] 1202925 ms: Scavenge 30.6 (32.1) -> 29.7 (33.1) MB, 0.7 / 0.3 ms (average mu = 0.998, current mu = 0.972) task [721159:0x61f0210] 1203105 ms: Scavenge 27.7 (28.8) -> 26.7 (29.8) MB, 0.4 / 0.2 ms (average mu = 0.999, current mu = 0.970) allocation failure [721173:0x54fc210] 1204660 ms: Scavenge 26.8 (28.3) -> 25.8 (29.3) MB, 0.5 / 0.2 ms (average mu = 0.999, current mu = 0.972) allocation failure 2. v8 Module If you don’t want to enable GC traces for the entire lifetime of the application, or if you want to enable them only under certain conditions or in certain parts of the code, then you can use the v8 module, as it provides options to add/remove flags at run-time. Using the v8 module, you can pass the --trace-gc flag and remove it as shown in the code snippet below: import v8 from 'v8'; // enable trace-gc v8.setFlagsFromString('--trace-gc'); // app code // .. // .. // disable trace-gc v8.setFlagsFromString('--notrace-gc'); 3. Performance Hook Node.js has a built-in perf_hooks module that lets you capture performance metrics from the application. You can use the perf_hooks module to capture garbage collection traces. 
Refer to the code snippet below: const { PerformanceObserver } = require('perf_hooks'); // Step 1: Create a PerformanceObserver to monitor GC events const obs = new PerformanceObserver((list) => { const entries = list.getEntries(); for (const entry of entries) { // Printing GC events in the console log console.log(entry); } }); // Step 2: Subscribe to GC events obs.observe({ entryTypes: ['gc'], buffered: true }); // Step 3: Stop the subscription later, once traces are no longer needed obs.disconnect(); If you notice in the code above, we are doing the following: We import the PerformanceObserver class from the perf_hooks module. We create a PerformanceObserver instance to monitor garbage collection events (gc entry type). Whenever garbage collection events occur in the application, we log them to the console using the console.log(entry) statement. We start observing GC events with obs.observe(). Finally, we stop observing GC events with obs.disconnect(); note that this should only happen once you no longer need traces, since disconnecting immediately after subscribing would stop the observer before any GC events are delivered. When the code snippet above is added to your application, in the console you will start to see the GC events reported in the JSON format as below: { kind: 'mark_sweep_compact', startTime: 864.659982532, duration: 7.824, entryType: 'gc', name: 'GC Event' } { kind: 'scavenge', startTime: 874.589382193, duration: 3.245, entryType: 'gc', name: 'GC Event' } Conclusion In this post, we explored three main methods for capturing garbage collection traces in Node.js applications: using the --trace-gc flag, leveraging the v8 module for dynamic tracing, and utilizing the perf_hooks module. Each method offers its own advantages and flexibility in capturing and analyzing GC events. I hope you found it helpful.
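Building on the perf_hooks approach, here is a small sketch (the names are my own) that accumulates GC pause statistics instead of logging each event, which is often more practical for long-running services:

```typescript
import { PerformanceObserver } from 'node:perf_hooks';

// Pure helper: total GC pause time and event count from a list of entries.
// Kept separate from the observer wiring so it is easy to unit test.
export function summarizeGC(
  entries: Array<{ duration: number }>
): { count: number; totalMs: number } {
  return entries.reduce(
    (acc, e) => ({ count: acc.count + 1, totalMs: acc.totalMs + e.duration }),
    { count: 0, totalMs: 0 }
  );
}

const collected: Array<{ duration: number }> = [];

export const obs = new PerformanceObserver((list) => {
  // Buffer entries; summarize on demand rather than printing every event.
  collected.push(...list.getEntries());
});
obs.observe({ entryTypes: ['gc'] });

// Later, e.g. in a periodic health check or on shutdown:
// console.log(summarizeGC(collected));
// obs.disconnect();
```

Because summarizeGC only depends on the duration field of the entries, the same helper works regardless of the GC kind reported in each event.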

By Ram Lakshmanan
Initializing Services in Node.js Application

While working on a user model, I found myself navigating through best practices and diverse strategies for managing a token service, transitioning from straightforward functions to a fully-fledged, independent service equipped with handy methods. I delved into the nuances of securely storing and accessing secret tokens, discerning between what should remain private and what could be public. Additionally, I explored optimal scenarios for deploying the service or function and pondered the necessity of its existence. This article chronicles my journey, illustrating the evolution from basic implementations to a comprehensive, scalable solution through a variety of examples. Services In a Node.js application, services are modular, reusable components responsible for handling specific business logic or functionality, such as user authentication, data access, or third-party API integration. These services abstract away complex operations behind simple interfaces, allowing different parts of the application to interact with these functionalities without knowing the underlying details. By organizing code into services, developers achieve separation of concerns, making the application more scalable, maintainable, and easier to test. Services play a crucial role in structuring the application’s architecture, facilitating a clean separation between the application’s core logic and its interactions with databases, external services, and other application layers. I decided to show an example with JWT Service. Let’s jump to the code. First Implementation In our examples, we are going to use jsonwebtoken as a popular library in the Node.js ecosystem. It will allow us to encode, decode, and verify JWTs easily. This library excels in situations requiring the safe and quick sharing of data between web application users, especially for login and access control. 
To create a token: TypeScript jsonwebtoken.sign(payload, JWT_SECRET) and verify: TypeScript jsonwebtoken.verify(token, JWT_SECRET, (error, decoded) => { if (error) { throw error } return decoded; }); To create and verify tokens, we need a JWT_SECRET, which lives in the environment. TypeScript process.env.JWT_SECRET That means we have to read it before we can call these methods. TypeScript if (!JWT_SECRET) { throw new Error('JWT secret not found in environment variables!'); } So, let's combine it all into one object with methods: TypeScript require('dotenv').config(); import jsonwebtoken from 'jsonwebtoken'; const JWT_SECRET = process.env.JWT_SECRET!; export const jwt = { verify: <Result>(token: string): Promise<Result> => { if (!JWT_SECRET) { throw new Error('JWT secret not found in environment variables!'); } return new Promise((resolve, reject) => { jsonwebtoken.verify(token, JWT_SECRET, (error, decoded) => { if (error) { reject(error); } else { resolve(decoded as Result); } }); }); }, sign: (payload: string | object | Buffer): Promise<string> => { if (!JWT_SECRET) { throw new Error('JWT secret not found in environment variables!'); } return new Promise((resolve, reject) => { try { resolve(jsonwebtoken.sign(payload, JWT_SECRET)); } catch (error) { reject(error); } }); }, }; jwt.ts file jwt Object With Methods This object demonstrates setting up JWT authentication functionality in a Node.js application. Reading env variables is handled by require('dotenv').config(); and with access to process, we can get the JWT_SECRET value. Let's reduce the repetition of checking the secret. TypeScript checkEnv: () => { if (!JWT_SECRET) { throw new Error('JWT_SECRET not found in environment variables!'); } }, Incorporating a dedicated function within the object to check the environment variable for the JWT secret can indeed make the design more modular and maintainable. 
But some repetition remains, because we still have to call it in each method: this.checkEnv(); Additionally, I have to consider the this context, because I used arrow functions. The verify and sign methods have to become function declarations instead of arrow functions to ensure this.checkEnv works as intended. Having this, we can create tokens: TypeScript const token: string = await jwt.sign({ id: user.id, }) or verify them: TypeScript jwt.verify(token) At this moment we can ask: isn't it better to create a service that handles all of this? Token Service By using a service we can improve scalability. I am still checking for the secret within the TokenService to allow dynamic reloading of environment variables (just as an example), but I streamline it by creating a private method dedicated to this check. This reduces repetition and centralizes the logic for handling missing configuration: TypeScript require('dotenv').config(); import jsonwebtoken from 'jsonwebtoken'; export class TokenService { private static jwt_secret = process.env.JWT_SECRET!; private static checkSecret() { if (!TokenService.jwt_secret) { throw new Error('JWT secret not found in environment variables!'); } } public static verify = <Result>(token: string): Promise<Result> => { TokenService.checkSecret(); return new Promise((resolve, reject) => { jsonwebtoken.verify(token, TokenService.jwt_secret, (error, decoded) => { if (error) { reject(error); } else { resolve(decoded as Result); } }); }); }; public static sign = (payload: string | object | Buffer): Promise<string> => { TokenService.checkSecret(); return new Promise((resolve, reject) => { try { resolve(jsonwebtoken.sign(payload, TokenService.jwt_secret)); } catch (error) { reject(error); } }); }; } TokenService.ts File But I have to consider moving the check for the presence of necessary configuration outside of the methods and into the initialization or loading phase of my application, right? 
This ensures that my application configuration is valid before it starts up, avoiding runtime errors due to missing configuration. And at this moment the word proxy comes to mind. Who knows why, but I decided to check it out.

Service With Proxy

First, I need to refactor my TokenService to remove the repetitive checks from each method, assuming that the secret is always present:

```typescript
require('dotenv').config();
import jsonwebtoken from 'jsonwebtoken';

export class TokenService {
  private static jwt_secret = process.env.JWT_SECRET!;

  public static verify<TokenPayload>(token: string): Promise<TokenPayload> {
    return new Promise((resolve, reject) => {
      jsonwebtoken.verify(token, TokenService.jwt_secret, (error, decoded) => {
        if (error) {
          reject(error);
        } else {
          resolve(decoded as TokenPayload);
        }
      });
    });
  }

  public static sign(payload: string | object | Buffer): Promise<string> {
    return new Promise((resolve, reject) => {
      try {
        resolve(jsonwebtoken.sign(payload, TokenService.jwt_secret));
      } catch (error) {
        reject(error);
      }
    });
  }
}
```

Token service without the secret-checking function

Then I created a proxy handler that checks the JWT secret before forwarding calls to the actual service methods (element access is used for jwt_secret, since the field is private):

```typescript
const tokenServiceHandler: ProxyHandler<typeof TokenService> = {
  get(target, propKey, receiver) {
    const originalMethod = Reflect.get(target, propKey, receiver);
    if (typeof originalMethod === 'function') {
      return function (this: unknown, ...args: unknown[]) {
        if (!TokenService['jwt_secret']) {
          throw new Error('Secret not found in environment variables!');
        }
        return originalMethod.apply(this, args);
      };
    }
    return originalMethod;
  },
};
```

Token service handler

Looks fancy. Finally, to use the proxied token service, I have to create an instance of the Proxy class:

```typescript
const proxiedTokenService = new Proxy(TokenService, tokenServiceHandler);
```

Now, instead of calling TokenService.verify or TokenService.sign directly, I can use proxiedTokenService for these operations.
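The trap mechanism can also be shown in isolation. This is a dependency-free sketch (DemoService, greet, and secret are illustrative names, not from the service above) of a get trap that wraps every function-valued property with a precondition before forwarding the call:

```typescript
// Dependency-free sketch of the pattern: a `get` trap that wraps every
// function-valued property with a precondition check before forwarding.
let secret: string | undefined; // stands in for the JWT secret

class DemoService {
  static greet(name: string): string {
    return `hello ${name}`;
  }
}

const handler: ProxyHandler<typeof DemoService> = {
  get(target, propKey, receiver) {
    const original = Reflect.get(target, propKey, receiver);
    if (typeof original === 'function') {
      return (...args: unknown[]) => {
        if (!secret) {
          throw new Error('Secret not found!'); // precondition enforced for every method
        }
        return original.apply(target, args);
      };
    }
    return original;
  },
};

const proxied = new Proxy(DemoService, handler);
```

Calling proxied.greet before the secret is set throws; once it is set, the call is forwarded to the real static method unchanged.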
The proxy ensures that the JWT secret check is performed automatically before any method logic is executed:

```typescript
try {
  const token = await proxiedTokenService.sign({ id: 123 });
  console.log(token);
} catch (error) {
  console.error(error.message);
}

try {
  const payload = await proxiedTokenService.verify('<token>');
  console.log(payload);
} catch (error) {
  console.error(error.message);
}
```

This approach abstracts the repetitive pre-execution checks away into the proxy mechanism, keeping the method implementations clean and focused on their core logic. The proxy handler acts as a middleware layer for my static methods, applying the necessary preconditions transparently.

Constructor

What about using a constructor? There is a significant distinction between initializing once and checking environment variables on each method call; the former approach doesn't account for changes to environment variables after the initial setup:

```typescript
export class TokenService {
  private jwt_secret: string;

  constructor() {
    if (!process.env.JWT_SECRET) {
      throw new Error('JWT secret not found in environment variables!');
    }
    this.jwt_secret = process.env.JWT_SECRET;
  }

  public verify(token: string) {
    // Implementation...
  }

  public sign(payload: string | object | Buffer) {
    // Implementation...
  }
}

const tokenService = new TokenService();
```

Constructor approach

The way the service is used stays the same; the only change lies in the timing of the service's initialization.

Service Initialization

We've reached the stage of initialization, where we can perform the necessary checks before using the service. This is a beneficial practice with extensive scalability options.
```typescript
require('dotenv').config();
import jsonwebtoken from 'jsonwebtoken';

export class TokenService {
  private static jwt_secret: string = process.env.JWT_SECRET!;

  static initialize = () => {
    if (!this.jwt_secret) {
      throw new Error('JWT secret not found in environment variables!');
    }
    this.jwt_secret = process.env.JWT_SECRET!;
  };

  public static verify = <Result>(token: string): Promise<Result> =>
    new Promise((resolve, reject) => {
      jsonwebtoken.verify(token, TokenService.jwt_secret, (error, decoded) => {
        if (error) {
          reject(error);
        } else {
          resolve(decoded as Result);
        }
      });
    });

  public static sign = (payload: string | object | Buffer): Promise<string> =>
    new Promise((resolve, reject) => {
      try {
        resolve(jsonwebtoken.sign(payload, TokenService.jwt_secret));
      } catch (error) {
        reject(error);
      }
    });
}
```

Token service with initialization

Initialization acts as a crucial dependency without which the service cannot function. To use this approach effectively, I need to call TokenService.initialize() early in my application's startup sequence, before any other part of the application attempts to use the TokenService. This ensures that my service is properly configured and ready to use:

```typescript
import { TokenService } from 'src/services/TokenService';

TokenService.initialize();
```

This approach assumes that my environment variables, and any other required setup, do not change while the application is running. But what if my application needs to support dynamic reconfiguration? Then I might need additional mechanisms to refresh or update the service configuration without restarting the application.

Dynamic Reconfiguration

Supporting dynamic reconfiguration, especially for critical components like TokenService that rely on configuration such as JWT_SECRET, requires a strategy that allows the service to update its configuration at runtime without a restart.
For that, we need something like configuration management, which allows us to refresh configuration dynamically from a centralized place. A dynamic configuration refresh mechanism could be a method in the service that can be called to reload its configuration without restarting the application:

```typescript
export class TokenService {
  private static jwt_secret = process.env.JWT_SECRET!;

  public static refreshConfig = () => {
    this.jwt_secret = process.env.JWT_SECRET!;
    if (!this.jwt_secret) {
      throw new Error('JWT secret not found in environment variables!');
    }
  };

  // The verify and sign methods stay the same.
}
```

Token service with config refresh

I also need a way to monitor my configuration sources for changes. This could be as simple as watching a file for changes or as complex as subscribing to events from a configuration service. This is just an example:

```typescript
import fs from 'fs';

fs.watch('config.json', (eventType, filename) => {
  if (filename) {
    console.log('Configuration file changed, reloading configurations.');
    TokenService.refreshConfig();
  }
});
```

If active monitoring is not feasible or reliable, we can schedule periodic checks to refresh the configuration instead. This approach is less responsive but can be sufficient, depending on how frequently the configuration changes.

Cron Job

Another option is to use a cron job within the Node.js application to periodically check and refresh the configuration for services such as TokenService. This is a practical way to ensure the application adapts to configuration changes without needing a restart, and it is especially useful in environments where configuration may change dynamically (e.g., in cloud environments or when using external configuration management services).
For that, we can use the node-cron package to run the periodic check:

```typescript
import cron from 'node-cron';
import { TokenService } from 'src/services/TokenService';

cron.schedule(
  '0 * * * *',
  () => {
    TokenService.refreshConfig();
  },
  {
    scheduled: true,
    timezone: 'America/New_York',
  }
);

console.log('Cron job scheduled to refresh TokenService configuration every hour.');
```

The cron job periodically pulls in the latest configuration. In this setup, cron.schedule defines a task that calls TokenService.refreshConfig every hour ('0 * * * *' is a cron expression that means "at minute 0 of every hour").

Conclusion

Proper initialization ensures the service is configured with essential environment variables, like the JWT secret, safeguarding against runtime errors and security vulnerabilities. By employing best practices for dynamic configuration, such as periodic checks or on-demand reloading, applications can adapt to changes without downtime. Effectively integrating and managing the TokenService enhances the application's security, maintainability, and flexibility in handling user authentication. I trust this exploration has provided you with meaningful insights and enriched your understanding of service configurations.

By Anton Kalik
