The rapid integration of artificial intelligence (AI) into software systems brings unprecedented opportunities and challenges for the software development community. As developers, we're not only responsible for building functional AI systems but also for ensuring they operate safely, ethically, and responsibly. This article delves into the technical details of the NIST AI Risk Management Framework, providing concrete guidance for software developers building and deploying AI solutions.

*Image from the NIST webpage*

The NIST framework lays out four steps for AI developers to adopt to reduce the risk associated with AI.

## 1. Govern: Setting Up the Fundamentals

Governance is the foundation of this framework. Effective governance of AI risk starts with solid technical groundwork. To implement robust governance, developers of AI systems should explore the following approaches:

- **Version control and reproducibility:** Implement rigorous version control for datasets, model architectures, training scripts, and configuration parameters. This ensures reproducibility, enabling tracking of changes, debugging of issues, and auditing of model behavior.
- **Documentation and code review:** Establish clear documentation requirements for all aspects of AI development. Conduct thorough code reviews to identify potential vulnerabilities, enforce coding best practices, and ensure adherence to established standards.
- **Testing and validation frameworks:** Build comprehensive testing frameworks to validate data quality, model performance, and system robustness. Employ unit tests, integration tests, and regression tests to catch errors early in the development cycle (a small example test follows Table 1).

Table 1: Examples of Technical Governance Approaches

| Aspect | Approach | Example |
| --- | --- | --- |
| Version control | Utilize Git for tracking code, data, and model versions. | Document commit messages with specific changes; link to relevant issue trackers. |
| Documentation | Use Sphinx or MkDocs to generate documentation from code comments and Markdown files. | Include API references, tutorials, and explanations of design decisions. |
| Testing | Employ frameworks like Pytest or JUnit for automated testing. | Write tests for data loading, model training, prediction accuracy, and security vulnerabilities. |
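To make the testing row concrete, here is a minimal sketch of a JUnit 5 regression test that guards a model's accuracy. The `FraudModel` wrapper, its `evaluate` method, and the 0.90 threshold are hypothetical stand-ins for your own inference code and acceptance bar.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical wrapper around a trained model; replace with your own inference code.
class FraudModel {
    double evaluate(String validationSetPath) {
        // ... load the validation set, run predictions, return accuracy ...
        return 0.93; // placeholder value
    }
}

class FraudModelRegressionTest {

    // Regression test: fail the build if accuracy drops below the agreed threshold.
    @Test
    void accuracyStaysAboveThreshold() {
        FraudModel model = new FraudModel();
        double accuracy = model.evaluate("data/validation.csv");
        assertTrue(accuracy >= 0.90,
                "Model accuracy regressed below the 0.90 acceptance bar: " + accuracy);
    }
}
```

Wired into CI, a test like this turns "monitor these metrics over time" from a policy statement into an enforced gate.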
## 2. Map: Identifying Technical Risks in AI Systems

Understanding the technical nuances of AI systems is crucial for identifying potential risks. Key areas to explore when mapping AI risks include:

- **Data quality and bias:** Assess the quality and representativeness of training data. Identify potential biases stemming from data collection, labeling, or sampling methodologies. Implement data pre-processing techniques (e.g., outlier detection, data cleaning) to mitigate data quality issues.
- **Model robustness and adversarial attacks:** Evaluate the vulnerability of AI models to adversarial examples, i.e., inputs designed to mislead the model. Implement adversarial training techniques to enhance model robustness and resilience against malicious inputs.
- **Security vulnerabilities:** Analyze the software architecture for security flaws. Implement secure coding practices to prevent common vulnerabilities like SQL injection, cross-site scripting, and authentication bypass. Employ penetration testing and vulnerability scanning tools to identify and address security weaknesses.

Table 2: Examples of Technical Risk Identification

| Risk Category | Description | Example |
| --- | --- | --- |
| Data bias | Training data reflects historical or societal biases. | An AI-powered credit card approval model trained on data with historical bias against certain demographic groups might unfairly deny credit cards to individuals from those groups. |
| Adversarial attacks | Maliciously crafted inputs designed to fool the model. | An image recognition system could be tricked by an adversarial image into misclassifying a positive as a negative result. |
| Data poisoning | Injecting malicious data into the training dataset to compromise model performance. | An attacker could insert corrupted data into a spam detection system's training set, causing it to misclassify spam messages as legitimate. |

## 3. Measure: Evaluating and Measuring Technical Risks

Evaluating the technical severity of risks requires quantitative metrics and rigorous analysis. Metrics and techniques for measuring the performance of AI systems include:

- **Model performance metrics:** Utilize relevant performance metrics to assess model accuracy, precision, recall, and F1 score. Monitor these metrics over time to detect performance degradation and identify potential retraining needs (see the sketch after Table 3).
- **Explainability and interpretability:** Implement techniques like LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations) to understand model decision-making processes. Utilize visualization tools to interpret model behavior and identify potential biases.
- **Security assessment tools:** Employ static code analysis tools to identify security flaws in source code. Use dynamic analysis tools (e.g., fuzzing, penetration testing) to uncover vulnerabilities in running systems.

Table 3: Technical Risk Measurement Techniques

| Technique | Description | Example |
| --- | --- | --- |
| Confusion matrix | Visualizes the performance of a classification model by showing true positives, true negatives, false positives, and false negatives. | Analyzing a confusion matrix can reveal whether a model consistently misclassifies certain categories, indicating potential bias. |
| LIME | Generates local explanations for model predictions by perturbing input features and observing the impact on the output. | Using LIME, you can understand which features were most influential in a specific loan denial decision made by an AI model. |
| Penetration testing | Simulates real-world attacks to identify security vulnerabilities in a system. | A penetration test could uncover SQL injection vulnerabilities in an AI-powered chatbot, enabling attackers to steal user data. |
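As a small illustration of the performance metrics above, this sketch derives accuracy, precision, recall, and F1 from raw confusion-matrix counts; the counts themselves are made-up numbers.

```java
public class ConfusionMatrixMetrics {

    public static void main(String[] args) {
        // Made-up confusion-matrix counts for a binary classifier.
        double tp = 85, fp = 10, fn = 15, tn = 890;

        double accuracy  = (tp + tn) / (tp + tn + fp + fn);
        double precision = tp / (tp + fp);  // of predicted positives, how many were right?
        double recall    = tp / (tp + fn);  // of actual positives, how many were found?
        double f1        = 2 * precision * recall / (precision + recall);

        System.out.printf("accuracy=%.3f precision=%.3f recall=%.3f f1=%.3f%n",
                accuracy, precision, recall, f1);
    }
}
```

Tracking these numbers per model version makes degradation visible long before users notice it.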
## 4. Manage: Implementing Risk Controls

Managing technical risks demands robust controls and mitigation strategies. Strategies to explore include:

- **Data de-biasing techniques:** Implement techniques like re-weighting, data augmentation, or adversarial de-biasing to address biases in training data. Where possible, conduct fairness audits using appropriate metrics to evaluate the fairness of model outcomes.
- **Secure software development practices:** Adhere to secure coding principles to minimize security vulnerabilities. Use strong authentication mechanisms, encrypt sensitive data, and implement access control measures to safeguard systems and data.
- **Model monitoring and anomaly detection:** Establish continuous monitoring systems to track model performance and detect anomalies. Implement techniques like statistical process control or machine learning-based anomaly detection to identify deviations from expected behavior.

Table 4: Technical Risk Mitigation Strategies

| Risk | Mitigation Strategy | Example |
| --- | --- | --- |
| Data bias | Data augmentation: generate synthetic data to increase the representation of underrepresented groups. | Augment a facial recognition dataset with synthetic images of individuals from diverse ethnic backgrounds to reduce bias. |
| Adversarial attacks | Adversarial training: train the model on adversarial examples to improve its robustness against such attacks. | Use adversarial training to improve the resilience of an image classification model against attacks that manipulate image pixels. |
| Data poisoning | Data sanitization: implement rigorous data validation and cleaning processes to detect and remove malicious data. | Employ anomaly detection algorithms to identify and remove outliers or malicious data points injected into a training dataset. |

## Conclusion

As AI developers, we play a pivotal role in shaping the future of AI. By integrating the NIST AI Risk Management Framework into our development processes, we can build AI systems that are not only technically sound but also ethically responsible, socially beneficial, and worthy of public trust. This framework empowers us to address the technical complexities of AI risk, allowing us to create innovative solutions that benefit individuals, organizations, and society as a whole.
Dynamic query building is a critical aspect of modern application development, especially in scenarios where the search criteria are not known at compile time. In this publication, let's take a deep dive into the world of dynamic query building in Spring Boot applications using JPA criteria queries. We'll explore a flexible and reusable framework that allows developers to construct complex queries effortlessly.

## Explanation of Components

### Criteria Class

The Criteria class serves as the foundation for our framework. It implements Specification<T> and provides a standardized way to build dynamic queries. Through its toPredicate method, the Criteria class constructs predicates based on the specified criteria.

```java
package com.core.jpa;

import java.util.ArrayList;
import java.util.List;

import org.springframework.data.jpa.domain.Specification;

import jakarta.persistence.criteria.CriteriaBuilder;
import jakarta.persistence.criteria.CriteriaQuery;
import jakarta.persistence.criteria.Predicate;
import jakarta.persistence.criteria.Root;

public class Criteria<T> implements Specification<T> {

    private static final long serialVersionUID = 1L;

    private transient List<Criterion> criterions = new ArrayList<>();

    @Override
    public Predicate toPredicate(Root<T> root, CriteriaQuery<?> query, CriteriaBuilder builder) {
        if (!criterions.isEmpty()) {
            List<Predicate> predicates = new ArrayList<>();
            for (Criterion c : criterions) {
                predicates.add(c.toPredicate(root, query, builder));
            }
            // AND together every registered criterion
            if (!predicates.isEmpty()) {
                return builder.and(predicates.toArray(new Predicate[0]));
            }
        }
        return builder.conjunction();
    }

    public void add(Criterion criterion) {
        if (criterion != null) {
            criterions.add(criterion);
        }
    }
}
```

### Criterion Interface

The Criterion interface defines the contract for building individual predicates. It declares the toPredicate method, which is implemented by various classes to create specific predicates such as equals, not equals, like, etc.

```java
import jakarta.persistence.criteria.CriteriaBuilder;
import jakarta.persistence.criteria.CriteriaQuery;
import jakarta.persistence.criteria.Predicate;
import jakarta.persistence.criteria.Root;

public interface Criterion {

    enum Operator {
        EQ, IGNORECASEEQ, NE, LIKE, GT, LT, GTE, LTE, AND, OR, ISNULL
    }

    Predicate toPredicate(Root<?> root, CriteriaQuery<?> query, CriteriaBuilder builder);
}
```

### LogicalExpression Class

The LogicalExpression class facilitates the combination of multiple criteria using logical operators such as AND and OR. By implementing the toPredicate method, this class allows developers to create complex query conditions by chaining together simple criteria.

```java
import java.util.ArrayList;
import java.util.List;

import jakarta.persistence.criteria.CriteriaBuilder;
import jakarta.persistence.criteria.CriteriaQuery;
import jakarta.persistence.criteria.Predicate;
import jakarta.persistence.criteria.Root;

public class LogicalExpression implements Criterion {

    private Criterion[] criterion;
    private Operator operator;

    public LogicalExpression(Criterion[] criterions, Operator operator) {
        this.criterion = criterions;
        this.operator = operator;
    }

    @Override
    public Predicate toPredicate(Root<?> root, CriteriaQuery<?> query, CriteriaBuilder builder) {
        List<Predicate> predicates = new ArrayList<>();
        for (int i = 0; i < this.criterion.length; i++) {
            predicates.add(this.criterion[i].toPredicate(root, query, builder));
        }
        if (operator == Criterion.Operator.OR) {
            return builder.or(predicates.toArray(new Predicate[0]));
        }
        // Default to AND so that Restrictions.and(...) yields a usable predicate
        // instead of returning null.
        return builder.and(predicates.toArray(new Predicate[0]));
    }
}
```

### Restrictions Class

The Restrictions class provides a set of static methods for creating instances of SimpleExpression and LogicalExpression. These methods offer convenient ways to build simple and complex criteria, making it easier for developers to construct dynamic queries.
```java
import java.util.Collection;

import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;

import com.core.jpa.Criterion.Operator;

public class Restrictions {

    private Restrictions() {
    }

    public static SimpleExpression eq(String fieldName, Object value, boolean ignoreNull) {
        if (ignoreNull && ObjectUtils.isEmpty(value))
            return null;
        return new SimpleExpression(fieldName, value, Operator.EQ);
    }

    public static SimpleExpression ne(String fieldName, Object value, boolean ignoreNull) {
        if (ignoreNull && ObjectUtils.isEmpty(value))
            return null;
        return new SimpleExpression(fieldName, value, Operator.NE);
    }

    public static SimpleExpression like(String fieldName, String value, boolean ignoreNull) {
        if (ignoreNull && ObjectUtils.isEmpty(value))
            return null;
        return new SimpleExpression(fieldName, value.toUpperCase(), Operator.LIKE);
    }

    public static SimpleExpression gt(String fieldName, Object value, boolean ignoreNull) {
        if (ignoreNull && ObjectUtils.isEmpty(value))
            return null;
        return new SimpleExpression(fieldName, value, Operator.GT);
    }

    public static SimpleExpression lt(String fieldName, Object value, boolean ignoreNull) {
        if (ignoreNull && ObjectUtils.isEmpty(value))
            return null;
        return new SimpleExpression(fieldName, value, Operator.LT);
    }

    public static SimpleExpression gte(String fieldName, Object value, boolean ignoreNull) {
        if (ignoreNull && ObjectUtils.isEmpty(value))
            return null;
        return new SimpleExpression(fieldName, value, Operator.GTE);
    }

    public static SimpleExpression lte(String fieldName, Object value, boolean ignoreNull) {
        if (ignoreNull && ObjectUtils.isEmpty(value))
            return null;
        return new SimpleExpression(fieldName, value, Operator.LTE);
    }

    public static SimpleExpression isNull(String fieldName, boolean ignoreNull) {
        if (ignoreNull)
            return null;
        return new SimpleExpression(fieldName, null, Operator.ISNULL);
    }

    public static LogicalExpression and(Criterion... criterions) {
        return new LogicalExpression(criterions, Operator.AND);
    }

    public static LogicalExpression or(Criterion... criterions) {
        return new LogicalExpression(criterions, Operator.OR);
    }

    public static <E> LogicalExpression in(String fieldName, Collection<E> value, boolean ignoreNull) {
        if (ignoreNull && CollectionUtils.isEmpty(value))
            return null;
        // Expand the collection into a chain of OR-ed equality checks;
        // String values are compared case-insensitively.
        SimpleExpression[] ses = new SimpleExpression[value.size()];
        int i = 0;
        for (Object obj : value) {
            if (obj instanceof String) {
                ses[i] = new SimpleExpression(fieldName, String.valueOf(obj), Operator.IGNORECASEEQ);
            } else {
                ses[i] = new SimpleExpression(fieldName, obj, Operator.EQ);
            }
            i++;
        }
        return new LogicalExpression(ses, Operator.OR);
    }

    public static Long convertToLong(Object o) {
        String stringToConvert = String.valueOf(o);
        if (!"null".equals(stringToConvert)) {
            return Long.parseLong(stringToConvert);
        } else {
            return Long.valueOf(0);
        }
    }
}
```
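The usage example later in this article exercises eq, like, or, gt, and isNull, but not the in(...) helper. Here is a brief, hedged sketch of it; the Order entity, its status and total fields, and orderRepository are hypothetical, and the repository is assumed to extend JpaSpecificationExecutor<Order> as shown in the next section.

```java
// Hypothetical entity and repository; "status" and "total" are illustrative fields.
Criteria<Order> criteria = new Criteria<>();
// Case-insensitive membership test: status IN ("NEW", "PAID")
criteria.add(Restrictions.in("status", List.of("NEW", "PAID"), true));
// total >= 100
criteria.add(Restrictions.gte("total", 100, true));

List<Order> orders = orderRepository.findAll(criteria);
```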
### SimpleExpression Class

The SimpleExpression class represents simple expressions with various operators such as equals, not equals, like, greater than, less than, etc. By implementing the toPredicate method, this class translates simple expressions into JPA criteria predicates, allowing for precise query construction.

```java
import org.apache.commons.lang3.StringUtils;

import jakarta.persistence.criteria.CriteriaBuilder;
import jakarta.persistence.criteria.CriteriaQuery;
import jakarta.persistence.criteria.Path;
import jakarta.persistence.criteria.Predicate;
import jakarta.persistence.criteria.Root;

public class SimpleExpression implements Criterion {

    private String fieldName;
    private Object value;
    private Operator operator;

    protected SimpleExpression(String fieldName, Object value, Operator operator) {
        this.fieldName = fieldName;
        this.value = value;
        this.operator = operator;
    }

    @Override
    @SuppressWarnings({ "rawtypes", "unchecked" })
    public Predicate toPredicate(Root<?> root, CriteriaQuery<?> query, CriteriaBuilder builder) {
        Path expression = null;
        if (fieldName.contains(".")) {
            // Support nested properties such as "address.city"
            String[] names = StringUtils.split(fieldName, ".");
            if (names != null && names.length > 0) {
                expression = root.get(names[0]);
                for (int i = 1; i < names.length; i++) {
                    expression = expression.get(names[i]);
                }
            }
        } else {
            expression = root.get(fieldName);
        }
        switch (operator) {
        case EQ:
            return builder.equal(expression, value);
        case IGNORECASEEQ:
            return builder.equal(builder.upper(expression), value.toString().toUpperCase());
        case NE:
            return builder.notEqual(expression, value);
        case LIKE:
            // Prefix match, case-insensitive: effectively "starts with"
            return builder.like(builder.upper(expression), value.toString().toUpperCase() + "%");
        case LT:
            return builder.lessThan(expression, (Comparable) value);
        case GT:
            return builder.greaterThan(expression, (Comparable) value);
        case LTE:
            return builder.lessThanOrEqualTo(expression, (Comparable) value);
        case GTE:
            return builder.greaterThanOrEqualTo(expression, (Comparable) value);
        case ISNULL:
            return builder.isNull(expression);
        default:
            return null;
        }
    }
}
```

## Usage Example

Suppose we have a User entity and a corresponding UserRepository interface defined in our Spring Boot application:

```java
@Entity
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
    private int age;
    private double salary;

    // Getters and setters
}

// JpaSpecificationExecutor is required for findAll(Specification);
// JpaRepository alone does not provide it.
public interface UserRepository extends JpaRepository<User, Long>, JpaSpecificationExecutor<User> {
}
```

With these entities in place, let's demonstrate how to use our dynamic query-building framework to retrieve a list of users based on certain search criteria:

```java
Criteria<User> criteria = new Criteria<>();
criteria.add(Restrictions.eq("age", 25, true));
criteria.add(Restrictions.like("name", "John", true));
criteria.add(Restrictions.or(
        Restrictions.gt("salary", 50000, true),
        Restrictions.isNull("salary", false)
));

List<User> users = userRepository.findAll(criteria);
```

In this example, we construct a dynamic query using the Criteria class and the various Restrictions provided by our framework. We specify criteria such as age equal to 25, name starting with "John", and salary greater than 50,000 or null. Finally, we use the UserRepository to execute the query and retrieve the matching users.

## Conclusion

Dynamic query building with JPA criteria queries in Spring Boot applications empowers developers to create sophisticated queries tailored to their specific needs. By leveraging the framework outlined in this publication, developers can streamline the process of constructing dynamic queries and enhance the flexibility and efficiency of their applications.

## Additional Resources

- Spring Data JPA Documentation
I love playing with new toys. For development, that translates to trying out new tools and frameworks. I'm always on the hunt for ways to speed up and streamline my app development workflow. And so, recently, I came across Remix, a JavaScript-based web framework that prioritizes fast page loads and is focused on running at the edge. Since I already have extensive experience with Node.js and React, I wanted to give Remix a try. I decided to go through the Remix tutorial to see just how quickly I could get through it while still understanding what I was doing. Then, I wanted to see how easy it would be to deploy the final app to the cloud. I used Heroku for that. So, let's dive in together. Here we go!

## Introducing Remix

This is what Remix says about itself:

> Remix is a full stack web framework that lets you focus on the user interface and work back through web standards to deliver a fast, slick, and resilient user experience.

Most important for me, though, is how Remix aims to simplify the developer experience. I really enjoyed reading their take on why Remix isn't just another framework but is — in addition to being a robust web dev framework — also a way to gain transferable knowledge that will serve you even long after you've stopped using Remix.

Remix seems to be great for developers who have React experience but — like me — want something that helps them spin up an app faster, prettier, and easier. I found that Remix works to remove some of that slog, providing conveniences to help streamline the development experience. The Remix tutorial was very comprehensive, covering lots of different features. And I got through it in less than 30 minutes.

## Introducing Heroku

At the end of the day, Remix was going to give me a full-fledged SPA. And that's awesome. But I also wanted a way to deploy my application and run the code quickly and simply. Lately, I've been using Heroku more and more for this kind of thing. It's fast, and it's reliable.

## Remix Tutorial Highlights

I encourage you to go through the tutorial yourself. I'm not going to rehash it all here, but I will provide some highlights of features.

### Built on React Router

Remix is built on top of React Router, which is an easy-to-use routing library that integrates seamlessly into your React applications. React Router supports nested routes, so you can render the layout for child routes inside parent layouts. This is one of the things I love. Done this way, routing just makes sense. It's easy and intuitive to implement.

**Note:** In the Contact Route UI section of the tutorial, the code snippet for app/routes/contacts.$contactID.tsx (at line 10) references a URL for an avatar. It turns out that the URL for the avatar wasn't returning any data. I needed to change it to https://placekitten.com/200/200 (removing the /g in the original path) instead.

### Client-Side Routing and Fast Data Loading

Remix also features client-side routing through React Router. This means clicks that are meant to fetch data and render UI components won't need to request and reload the entire document. Navigating between views is snappy, giving you a smooth experience with fast transitions. On top of this, Remix loads data in parallel with fetch requests, so all of the heavy lifting happens in the background. The end user doesn't feel it or notice it. Data just loads into components super fast. Remix does this through its loader API, which lets you define a function in each route that provides data to the route upon render.
### Submitting Forms Without Navigation

This is a challenge that I've dealt with on many occasions. Sometimes, you want to submit a form to make some sort of data update on the backend, but you don't want to trigger a navigation in your browser. The Remix tutorial describes the situation this way:

> We aren't creating or deleting a new record, and we don't want to change pages. We simply want to change the data on the page we're looking at.

For this, Remix provides the useFetcher hook to work in conjunction with a route action. Forms and data manipulation actions on the pages felt smooth and fast.

*(Screen recording: the final result of the tutorial app.)*

They said that going through the tutorial would take about 30 minutes, and they were right. And I learned a lot along the way.

## Deploying to Heroku Was Blazing Fast

Ahh, but then it came time to deploy. Running the tutorial app on my local machine was great. But how easy would it be to get this app — or any app I build with Remix — deployed to the cloud?

The one thing I needed to add to my GitHub repo with all the completed code was a Procfile, which tells Heroku how to spin up my app. My Procfile is one line, and it looks like this:

```shell
web: npm run dev
```

I logged into Heroku from the CLI. Next, I needed to create a Heroku app and deploy my code. How long would that take? And… it took 42 seconds. I challenge you to beat my time. Just like that, my Remix app was up and running!

## Production Deployment

Remix provides the Remix App Server (@remix-run/serve) to serve its applications. To deploy your application to production, just make sure you've added @remix-run/serve to your project dependencies. Heroku will automatically execute npm run build for you. So, the only other step is to change your Procfile to the following:

```shell
web: npm run start
```

Then, push your updated code to Heroku. Your production deployment will be off and running!

## Conclusion

I'm always looking out for newer, faster, and better ways to accomplish the tasks that I need to tackle regularly. I often find myself trying to spin up simple apps with clean and fast navigation, forms, and data handling. Usually, I'm building quick prototypes for prospective clients or some helpful mini-app for a friend. For deployment and running my code on the web, I keep going back to my tried-and-true, Heroku. But in stumbling upon Remix, I think I've found my go-to for easy single-page app development in the coming days. I know that by going through the tutorial, I've really only scratched the surface of what Remix has to offer. So I'm looking forward to playing with it a lot more. Happy coding!
In the evolving landscape of front-end development, technologies like HTMX are redefining the way developers approach building modern web applications. HTMX ranked second in the prestigious 2023 JavaScript Rising Stars "Front-end Frameworks" category, just behind the ubiquitous React, and earned a spot in the GitHub Accelerator program. Its popularity continues to grow, with over 20k stars on GitHub, appealing to developers seeking lightweight and efficient solutions for modern web development challenges.

In this article, we explore key features of HTMX, its advantages, and its use cases, while also drawing comparisons with React. By examining how HTMX differs from React and understanding the unique strengths and weaknesses of each, developers can make informed decisions when selecting the appropriate toolset for their projects. Whether prioritizing simplicity, rapid prototyping, or robust component-based architectures, HTMX and React offer distinct approaches.

## What Is HTMX?

HTMX is a lightweight, dependency-free library that enables handling AJAX requests, CSS Transitions, WebSockets, and Server-Sent Events directly within HTML code. By extending HTML with custom attributes, HTMX facilitates AJAX requests without the need for JavaScript code. The core principle behind HTMX is simplicity, empowering developers to leverage the capabilities of the web while staying grounded in familiar HTML structures.

Recognized for its compact size (14k min. gzipped) and dependency-free nature, HTMX empowers developers to create advanced user interfaces effortlessly using the power of hypertext (markup). It simplifies the creation of interactive web applications by shifting dynamic behavior to the server side, resulting in cleaner, more maintainable code. With its ability to update content dynamically without full-page reloads, HTMX is lauded for its cost-efficiency and enhanced user experience, making it a compelling choice for modern web development projects.

## Comparing React and HTMX

React and HTMX serve different purposes and take different approaches to front-end development. React focuses on a component-based paradigm and uses a virtual DOM for efficient updates; HTMX, on the other hand, uses HTML attributes to trigger AJAX requests and manipulate the DOM. HTMX is a great choice for applications with simple interactions (chatrooms, dashboards, lists, and tables), while React is suitable for large-scale applications (SPAs and pages with rich, interactive UX).

HTMX can be embedded in any existing HTML page and integrates with backend technologies that can return raw HTML content, such as Node.js, Django, or Flask; React, by contrast, needs additional configuration in front-end projects not built with JavaScript. HTMX can swap content on the page based on the server's response, as dictated by the hx-swap attribute, while React handles content swapping through state management and component re-rendering. Moreover, HTMX provides mechanisms to handle errors and ensure a smooth user experience, whereas React requires developers to implement error handling within components or through global error boundaries. HTMX works with plain CSS, letting developers style requests in flight via the hx-indicator attribute, while React offers CSS-in-JS support alongside traditional CSS styling methods.

HTMX offers several advantages, including a straightforward HTML-based syntax that enables developers to achieve AJAX requests and DOM updates with minimal effort, facilitating faster page loads and reducing latency.
With a lightweight footprint, HTMX shifts much of the dynamic behavior to server-side logic; its simplicity and minimal overhead make it an excellent choice for rapid prototyping and iterative development. Moreover, HTMX enhances the user experience by enabling content updates without full page reloads, ensuring smooth transitions and a seamless browsing experience. It provides support for real-time updates and efficient communication with the server using technologies like Server-Sent Events, AJAX, and WebSockets. However, HTMX requires backend UI endpoints that return raw HTML, which may lead to coupling between the front end and the back end. Furthermore, its limited domain-specific language (DSL) can make development less convenient, and debugging can be challenging due to the lack of advanced tools.

In contrast, React offers its own set of advantages, including the structuring of UI with reusable components written in JSX and robust state management capabilities. As the most widely used front-end web library, React enjoys extensive community support, and its rich ecosystem, with a vast library of third-party components and tools, accelerates development. In terms of performance, React leverages its virtual DOM to efficiently update the UI, making it suitable for large-scale applications with frequent updates. On the other hand, React may require substantial amounts of JavaScript code for rendering, data management, and event handling, potentially resulting in larger bundle sizes and increased load times, and it can be challenging to integrate into non-JavaScript-based projects.

## Conclusion

In summary, HTMX prioritizes simplicity, rapid implementation, and efficient data interchange with minimal JavaScript overhead, making it ideal for smaller projects and rapid prototyping. React, with its component-based architecture, virtual DOM, and rich ecosystem, excels in building SPAs and large-scale projects but may require more upfront development time and effort. While both HTMX and React excel in different aspects of front-end development, developers should carefully consider their project requirements and development goals when choosing between them.
I've always liked GUIs, both desktop-based and browser-based, before you needed five years of training on the latter. That's the reason I loved, and still love, Vaadin: you can develop web UIs without writing a single line of HTML, JavaScript, or CSS. I'm still interested in the subject; a couple of years ago, I analyzed the state of JVM desktop frameworks. I also like the Rust programming language a lot. Tauri is a Rust-based framework for building desktop applications. Here's my view.

## Overview

> Build an optimized, secure, and frontend-independent application for multi-platform deployment. — Tauri website

A Tauri app is composed of two modules: the client-side module in standard web technologies (HTML, JavaScript, and CSS) and the backend module in Rust. Tauri runs the UI in a dedicated WebView instance, and users interact with it as usual. Tauri offers a binding between the client-side JavaScript and the backend Rust via a specific JS module, i.e., window.__TAURI__.tauri. It also offers other modules for interacting with the local system, such as the filesystem, OS, clipboard, window management, etc. Binding is based on strings.

Here's the client-side code:

```javascript
const { invoke } = window.__TAURI__.tauri;

// Look up the input and output elements (ids as in the Tauri starter template)
const greetInputEl = document.querySelector("#greet-input");
const greetMsgEl = document.querySelector("#greet-msg");

// Invoke the Tauri command named "greet" and display its result
greetMsgEl.textContent = await invoke("greet", { name: greetInputEl.value });
```

Here's the corresponding Rust code:

```rust
// Define a Tauri command named "greet"
#[tauri::command]
fn greet(name: &str) -> String {
    format!("Hello, {}! You've been greeted from Rust!", name)
}
```

In the following sections, I'll list Tauri's good, meh, and bad points. Remember that this is my subjective opinion, based on my previous experiences.

## The Good

### Getting Started

Fortunately, it's becoming increasingly rare, but some technologies forget that before you're an expert, you're a newbie. The first section of any site should be a quick explanation of the technology, and the second a getting-started guide. Tauri succeeds in this; I got my first Tauri app running in a matter of minutes by following the Quick Start guide.

### Documentation

Tauri's documentation is comprehensive, extensive (as far as my musings browsed it), and well-structured.

### Great Feedback Loop

I've experienced exciting technologies where the feedback loop, the time it takes to see the results of a change, makes the technology unusable. GWT, I'm looking at you. Short feedback loops contribute to a great developer experience. In this regard, Tauri scores points. One can launch the app with a simple cargo tauri dev command. If the front end changes, Tauri reloads it. If any metadata changes, e.g., anything stored in tauri.conf.json, Tauri restarts the app. The only downside is that both behaviors lose the UI state.

### Complete Lifecycle Management

Tauri doesn't only help you develop your app; it also provides the tools to debug, test, build, and distribute it.

## The Meh

At first, I wanted to create my usual showcase for desktop applications, a file renamer app. However, I soon hit an issue when I wanted to select a directory using the file browser button. First, Tauri doesn't let you use the regular JavaScript file-related APIs; instead, it provides a more limited API. Worse, you need to explicitly configure which file system paths are available at build time, and they are part of an enumeration. I understand that security is a growing concern in modern software. Yet, I fail to understand this limitation in a desktop app, where every other app can access any directory.
## The Bad

However, Tauri's biggest problem is its design, more precisely, its separation between the front end and the back end. What I love in Vaadin is its management of all things frontend, leaving you to learn the framework only. It allows your backend developers to build web apps without dealing with HTML, CSS, and JavaScript. Tauri, though a desktop framework, made precisely the opposite choice: your developers will need to know frontend technologies. Worse, the separation reproduces the request-response model that browser technologies imposed on UIs. Remember: early desktop apps used the Observer model, which better fits user interactions. We designed apps around the request-response model only after we ported them to the web. Using this model in a desktop app is a regression, in my opinion.

## Conclusion

Tauri has many things to like, mainly everything that revolves around the developer experience. If you or your organization uses and likes web technologies, try Tauri. However, it's a no-go for me: to create a simple desktop app, I don't want to learn how to center a div, or about the flexbox layout, etc.

## To Go Further

- Tauri
In any microservice, managing database interactions with precision is crucial for maintaining application performance and reliability. Weird database connection issues often surface during performance testing. Recently, a critical issue emerged within the repository layer of a Spring microservice application, where improper exception handling led to unexpected failures and service disruptions during performance testing. This article delves into the specifics of the issue and highlights the pivotal role of the @Transactional annotation, which remedied it.

Spring microservice applications rely heavily on stable and efficient database interactions, often managed through the Java Persistence API (JPA). Properly managing database connections, particularly preventing connection leaks, is critical to ensuring these interactions do not negatively impact application performance.

## Issue Background

During a recent round of performance testing, a critical issue emerged within one of our essential microservices, which was designated for sending client communications. This service began to experience repeated gateway time-out errors. The underlying problem was rooted in our database operations at the repository layer. An investigation into these time-out errors revealed that a stored procedure was consistently failing. The failure was triggered by an invalid parameter passed to the procedure, which raised a business exception from the stored procedure. The repository layer did not handle this exception properly; it bubbled up. Below is the source code for the stored procedure call:

```java
public long createInboxMessage(String notifCode, String acctId, String userId, String s3KeyName,
        List<Notif> notifList, String attributes, String notifTitle, String notifSubject,
        String notifPreviewText, String contentType, boolean doNotDelete, boolean isLetter,
        String groupId) throws EDeliveryException {
    try {
        StoredProcedureQuery query = entityManager.createStoredProcedureQuery("p_create_notification");
        DbUtility.setParameter(query, "v_notif_code", notifCode);
        DbUtility.setParameter(query, "v_user_uuid", userId);
        DbUtility.setNullParameter(query, "v_user_id", Integer.class);
        DbUtility.setParameter(query, "v_acct_id", acctId);
        DbUtility.setParameter(query, "v_message_url", s3KeyName);
        DbUtility.setParameter(query, "v_ecomm_attributes", attributes);
        DbUtility.setParameter(query, "v_notif_title", notifTitle);
        DbUtility.setParameter(query, "v_notif_subject", notifSubject);
        DbUtility.setParameter(query, "v_notif_preview_text", notifPreviewText);
        DbUtility.setParameter(query, "v_content_type", contentType);
        DbUtility.setParameter(query, "v_do_not_delete", doNotDelete);
        DbUtility.setParameter(query, "v_hard_copy_comm", isLetter);
        DbUtility.setParameter(query, "v_group_id", groupId);
        DbUtility.setOutParameter(query, "v_notif_id", BigInteger.class);
        query.execute();
        BigInteger notifId = (BigInteger) query.getOutputParameterValue("v_notif_id");
        return notifId.longValue();
    } catch (PersistenceException ex) {
        logger.error("DbRepository::createInboxMessage - Error creating notification", ex);
        throw new EDeliveryException(ex.getMessage(), ex);
    }
}
```

## Issue Analysis

As illustrated in our scenario, when the stored procedure encountered an error, the resulting exception would propagate upward from the repository layer to the service layer and finally to the controller. This propagation was problematic, causing our API to respond with non-200 HTTP status codes — typically 500 or 400.
Following several such incidents, the service container reached a point where it could no longer handle incoming requests, ultimately resulting in a 502 Gateway Timeout error. This critical state was reflected in our monitoring systems, with Kibana logs indicating the issue:

```
HikariPool-1 - Connection is not available, request timed out after 30000ms.
```

The root cause was improper exception handling: exceptions bubbled up through the system layers without being properly managed. This prevented the release of database connections back into the connection pool, leading to the depletion of available connections. Consequently, after exhausting all connections, the container was unable to process new requests, resulting in the error reported in the Kibana logs and a non-200 HTTP error.

## Resolution

To resolve this issue, we could handle the exception gracefully and not let it bubble up further, letting JPA and the Spring context release the connection back to the pool. Another alternative is to use the @Transactional annotation on the method. Below is the same method with the annotation:

```java
@Transactional
public long createInboxMessage(String notifCode, String acctId, String userId, String s3KeyName,
        List<Notif> notifList, String attributes, String notifTitle, String notifSubject,
        String notifPreviewText, String contentType, boolean doNotDelete, boolean isLetter,
        String groupId) throws EDeliveryException {
    .........
}
```

The implementation below demonstrates an approach to exception handling that prevents exceptions from propagating further up the stack by catching and logging them within the method itself:

```java
public long createInboxMessage(String notifCode, String acctId, String userId, String s3KeyName,
        List<Notif> notifList, String attributes, String notifTitle, String notifSubject,
        String notifPreviewText, String contentType, boolean doNotDelete, boolean isLetter,
        String loanGroupId) {
    try {
        .......
        query.execute();
        BigInteger notifId = (BigInteger) query.getOutputParameterValue("v_notif_id");
        return notifId.longValue();
    } catch (PersistenceException ex) {
        logger.error("DbRepository::createInboxMessage - Error creating notification", ex);
    }
    return -1;
}
```

## With @Transactional

The @Transactional annotation in the Spring Framework manages transaction boundaries. It begins a transaction when the annotated method starts and commits or rolls it back when the method completes. When an exception occurs, @Transactional ensures that the transaction is rolled back, which helps release database connections back to the connection pool appropriately.

## Without @Transactional

If a repository method that calls a stored procedure is not annotated with @Transactional, Spring does not manage the transaction boundaries for that method. The transaction handling must be implemented manually if the stored procedure throws an exception. If not properly managed, this can result in the database connection not being closed and not being returned to the pool, leading to a connection leak.

## Best Practices

- Always use @Transactional when the method's operations should be executed within a transaction scope. This is especially important for operations involving stored procedures that can modify the database state.
- Ensure exception handling within the method includes proper transaction rollback and closing of any database connections, mainly when not using @Transactional (a sketch of manual transaction handling follows below).
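For the case without @Transactional, where transaction handling must be implemented manually, here is a minimal, hedged sketch using Spring's TransactionTemplate. The service class, its wiring, and the placeholder repository call are hypothetical; the point is only the TransactionTemplate usage, which commits on success and rolls back on any thrown exception, releasing the connection back to the pool either way.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class NotificationService {

    private final TransactionTemplate transactionTemplate;

    public NotificationService(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public long createInboxMessageSafely(/* parameters elided */) {
        // execute(...) begins a transaction, commits on normal return,
        // and rolls back on any thrown exception, so the pooled connection
        // is always released.
        return transactionTemplate.execute(status -> {
            // ... call the repository / stored procedure here ...
            return -1L; // placeholder return value
        });
    }
}
```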
## Conclusion

Effective transaction management is pivotal in maintaining the health and performance of Spring microservice applications using JPA. By employing the @Transactional annotation, we can safeguard against connection leaks and ensure that database interactions do not degrade application performance or stability. Adhering to these guidelines can enhance the reliability and efficiency of our Spring microservices, providing stable and responsive services to consuming applications and end users.
This article is part of a series called "Mastering Object-Oriented Design Patterns." The series consists of four articles and aims to provide profound guidance on object-oriented design patterns. The articles address where design patterns come from, the problems they solve, and the advantages of their use. In addition, the series provides full explanations of the common design patterns. Every article starts with real-life analogies, discusses the pros and cons of each pattern, and provides a Java example implementation. Search for the title "Mastering Object-Oriented Design Patterns" to explore the whole series and master object-oriented design patterns.

Once upon a time, there was a new notion called "design patterns" in software engineering. This concept has revolutionized how developers approach complex software design. Design patterns are verified solutions to frequently encountered problems. But where did this idea originate, and how did it come to contribute so significantly to object-oriented programming?

## Origin of Design Patterns

Design patterns first appeared in architecture, not in software. Christopher Alexander, an architect and design theorist, introduced the idea in his influential work "A Pattern Language: Towns, Buildings, Construction." Alexander sought to develop a pattern language to solve spatial and communal problems in cities. These patterns covered many details, from window heights to the organization of green zones within neighborhoods. In doing so, he set the ground for a design approach focused on reusable solutions to recurring problems.

Captivated by Alexander's concept, a group of four software engineers (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides), also known as the Gang of Four (GoF), recognized the potential of applying it to software development. In 1994, they published "Design Patterns: Elements of Reusable Object-Oriented Software," which translated the pattern language of architecture into the world of object-oriented programming (OOP). This seminal publication presented twenty-three design patterns targeted at typical design issues. It soon became a best-seller and a vital tool in software engineering instruction.

## Introduction to Design Patterns

### What Are Design Patterns?

Design patterns are not recipes but recommendations and tips for solving typical design problems. They are a pool of bright ideas and experience from the software development community. These patterns help developers build flexible, low-maintenance, and reusable code. Design patterns provide a common language and methodology for solving design problems, simplifying collaboration among developers and speeding up the development process.

Picture making software as assembling a puzzle in which you keep being handed the same pieces; design patterns are your map showing how those pieces fit together every time. They can be understood as a set of coding-challenge cookbooks: rather than giving you ready-made code snippets, they present ways to solve particular problems in your projects. The purpose of design patterns is to reduce coding complexity, help you solve problems faster, and keep your code as flexible as possible for the future.

### Design Patterns vs. Algorithms

Both algorithms and design patterns provide solutions, but an algorithm is a sequence of steps to reach a goal, just like a cooking recipe.
A design pattern, on the other hand, is more of a template or blueprint. It provides the layout and major components of the solution but does not specify the building details, leaving you free to decide how the solution is implemented in your project.

### Inside a Design Pattern

A design pattern typically includes:

- **Intent:** What the pattern does and what it solves.
- **Motivation:** The reason for the pattern and the way it can help.
- **Structure of classes:** A schematic indicating how its parts communicate.
- **Code example:** Commonly made available in popular programming languages to facilitate comprehension.

Some descriptions will also address when to use the pattern, how to apply it, and its interaction with other patterns, leaving you with a complete toolset for smarter coding.

### Why Use Design Patterns?

Design patterns in coding are a kind of secret toolset. They make solving common problems easier, and here's why embracing design patterns can be a game-changer:

- **Proven and ready-to-use solutions:** Imagine owning a treasure chest of brilliant hacks already worked out by professional coders. That's what design patterns are: clever, immediately applicable, professional-quality solutions that allow you to solve problems quickly and correctly.
- **Simplifying complexity:** Any great software is minimalistic in a sense. Design patterns assist you in splitting large and daunting problems into small, manageable chunks, making your code neater and your life simpler.
- **Big-picture focus:** Design patterns allow you to spend less time on code structure and more time on doing cool stuff. This lets you concentrate on producing great features rather than struggling with the fundamentals.
- **Common language:** Design patterns provide developers with a common language, so when you say, "Let's use a Singleton here," everyone gets it. This leads to more efficient work and less confusion (a minimal Singleton sketch follows this section).
- **Reusability and maintainability:** Design patterns encourage code reuse via inheritance and interfaces, which keeps classes adaptable and systems easy to maintain. This shortens development cycles and keeps systems fortified over time.
- **Improved scalability and flexibility:** Patterns such as MVC allow for a more defined separation of the different parts of your code, making your system more flexible and able to grow with little adjustment.
- **Boosted readability and understandability:** Properly implemented design patterns increase the readability and understandability of your code, making it easier for other people to understand and contribute without too much explanation.

In a nutshell, design patterns are all about making coding more comfortable, efficient, and even entertaining. They enable you to work on extension rather than invention, which allows you to improve the software without reinventing the wheel.
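To ground the "common language" point above, here is a minimal sketch of the Singleton pattern in Java. It shows one classic thread-safe variant (double-checked locking); the Configuration class name is arbitrary, and other variants (enum singletons, eager initialization) are equally valid.

```java
// A classic thread-safe Singleton: one shared instance, created lazily.
public final class Configuration {

    private static volatile Configuration instance;

    private Configuration() {
        // Private constructor prevents outside instantiation.
    }

    public static Configuration getInstance() {
        if (instance == null) {                      // first check, no locking
            synchronized (Configuration.class) {
                if (instance == null) {              // second check, with lock
                    instance = new Configuration();
                }
            }
        }
        return instance;
    }
}
```

Saying "Configuration is a Singleton" conveys all of this structure in one word, which is exactly the shared-vocabulary benefit described above.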
## Navigating the Tricky Side of Design Patterns

Design patterns are secret ingredients that make writing code easier and more practical. But they are not ideal. Here are a couple of things to be aware of:

- **Not suitable for every programming language:** A design pattern may sometimes be unnecessary in a specific language. If the language has a simple feature that can do the job, a complex pattern is redundant; it is like employing a sophisticated instrument when a simple one is sufficient.
- **Being too rigid with patterns:** Although design patterns are derived from best practices, strict adherence to them may cause undesirable behavior. It's similar to sticking to a recipe so rigidly that you never adjust it to your taste. At times, you need to modify a pattern to suit the particular requirements of your project.
- **Overusing patterns:** It is easy to lose control and believe that every problem can be addressed with a design pattern. Yet not all problems need a pattern. It is akin to using a hammer for all tasks when, at times, a screwdriver is sufficient.
- **Adding unnecessary complexity:** Design patterns can also introduce complexity to your code. If not handled with care, they can complicate your project.

### How To Avoid the Pitfalls

Despite these troubles, design patterns are still quite helpful. The key is to use them wisely:

- **Choose the appropriate tool for the task:** Not all problems need a design pattern. Sometimes, simpler is better.
- **Adapt and customize:** Never be afraid to adjust a pattern to make it suit you better.
- **Keep it simple:** Do not make your code more complicated by using patterns that are not required.

In summary, design patterns are similar to spices in cooking: applied correctly, they can improve your dish (or project). Yet it's necessary to employ them in moderation and not let them overpower the food.

## Types of Design Patterns

Design patterns are beneficial methods applied in software design. They facilitate code organization and management during the development and maintenance of applications. Regard them as clever construction techniques and improvements for your software projects. Let's quickly check out the three main types:

### Creational Patterns: Building Blocks

Creational patterns are equivalent to picking up the right LEGO blocks to begin your model building. Their attention is directed at simplifying the process of creating objects or groups of objects. This way, you can build up the software flexibly and efficiently, as if picking out the LEGO pieces that fit your design (a small creational sketch follows this section).

### Structural Patterns: Putting It All Together

Structural patterns are all about how you assemble your LEGO bricks. They help you arrange the pieces (or objects) into larger structures, with everything neat and well-arranged. It is akin to following a LEGO manual to guarantee your spaceship or castle will be sturdy and tidy.

### Behavioral Patterns: Making It Work

Behavioral patterns are about making your LEGO creation do extraordinary things. For instance, think about making the wings of your LEGO spaceship move. In software, these patterns enable various program components to interact and cooperate, ensuring everything functions as intended.

Design patterns can be as simple as idioms that apply only in one programming language or as complicated as architectural patterns that shape an entire application. They are your tools in the toolkit, useful in a small function and throughout the software's structure. Comprehending these patterns is like learning the tricks of constructing the most incredible LEGO sets: they make you a software genius, and all your coding will seem relaxed and fun!
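As a tiny, hedged illustration of the creational category, here is a classic Factory Method sketch in Java; the logistics and transport names are arbitrary examples, not taken from the article.

```java
// Factory Method: subclasses decide which concrete product to create.
interface Transport {
    void deliver();
}

class Truck implements Transport {
    public void deliver() { System.out.println("Delivering by road"); }
}

class Ship implements Transport {
    public void deliver() { System.out.println("Delivering by sea"); }
}

abstract class Logistics {
    // The factory method: each subclass returns its own Transport.
    abstract Transport createTransport();

    void planDelivery() {
        Transport transport = createTransport(); // creation deferred to subclasses
        transport.deliver();
    }
}

class RoadLogistics extends Logistics {
    Transport createTransport() { return new Truck(); }
}

class SeaLogistics extends Logistics {
    Transport createTransport() { return new Ship(); }
}

public class FactoryDemo {
    public static void main(String[] args) {
        new RoadLogistics().planDelivery(); // Delivering by road
        new SeaLogistics().planDelivery();  // Delivering by sea
    }
}
```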
## Conclusion

Our first module is over, and it has been a fantastic trip into the principles behind design patterns and how they are leveraged in software engineering. Design patterns are not merely coding shortcuts but crystallized wisdom that provides reusable solutions for typical design issues. They simplify the object-oriented programming process and make it faster, producing cleaner code. On the other hand, they are not a silver bullet: we have pointed out that it is essential to know when and how to use them appropriately.

In closing this chapter, we invite you to browse the other parts of the "Mastering Object-Oriented Design Patterns" series. Each part reinforces your comprehension and skill, making you more confident when applying design patterns to your projects. Whether you want to develop your architectural skills, speed up your development process, or improve the quality of your code, this series is here to help you.

## References

- Design Patterns: Elements of Reusable Object-Oriented Software
- Head First Design Patterns
Deciding on a full-stack technology is daunting; there is a large number of frameworks to assess. I'll share my contrarian viewpoint: choose Django. The Django Python framework is old, but it gets the job done quickly and cheaply. Django has an "everything included and opinionated" philosophy, which makes it very fast to get started with. As your project scales, you can separate Django into individual components that serve as a SQL database manager, an object-relational mapper, or an API server.

In 2024, we stopped using Firebase and MongoDB on the backend and moved to Django. While many projects can benefit from more modern frameworks, choosing newer technology can also lead to higher costs as teams struggle to find developers with the required skills and later find the development budget expanding as unforeseen problems arise. With Django, the development path has been smoothly paved by the tens of thousands of developers before you. For the types of small projects we build for business workflows, Django is not only faster for bringing ideas to prototype; it is also cheaper to build and maintain. Here are the top five reasons why Django should be considered in 2024.

## 1. SQL as a Service

Django uses an SQL database. Like Django, SQL is considered old and not as glamorous as newer NoSQL databases like MongoDB. One way to view Django is as an SQL database with pre-built code that makes the database easier to use. If Django fails to meet your future requirements for a specific function, you can still use the SQL database alone or use portions of Django to make the database easier to access. In our case, part of our Django deployments consists of a hosted PostgreSQL database as a service. To start, we simply add the connection information to the Django app.

This architecture also allows us to use Django as a temporary prototyping tool to set up a working interface and logic around the data. As the project evolves, we can bypass Django and connect directly to the database with another application such as Flutter or React, or set up a stripped-down intermediary server to bridge the database and front end.

The maturity of SQL is also an advantage when hiring talent. Many universities and technical schools include SQL in the mainstream curriculum. Hiring talent from schools is easier, as the students have recent academic experience and are eager to gain more real-world experience. Most business software projects can be completed without advanced SQL database administration skills.

## 2. API Service

The ability to use SQL or the Django REST framework to expose API endpoints allows us to quickly build multiple interfaces to the database using different technologies. Although Django templates are easy to learn and powerful, they aren't as interactive as more reactive frameworks. Fortunately, Django can serve both the Django page templates and a REST API for the same data. This means that you can test out different front-end architectures while reusing most of the core logic of the backend.

The benefit of starting with Django instead of an API server is that it is easier to get started and prototype a concept. Django is super easy to install and comes with everything needed for a web application: `pip install django` will get you going. It's very fast to build a functional prototype and deploy it. Django comes with authentication, a database, a development web server, and template syntax.
These included pieces allow complex sites to be built quickly, with different security groups and a built-in admin panel for staff. 3. Serverless Architecture Django makes it easy to deploy to a serverless architecture, eliminating virtual server setup and maintenance. Starting in 2023, we moved our Django projects to a serverless architecture. Previously, we used Ubuntu Linux in virtual servers, running PostgreSQL, NGINX, and Django in the same scalable virtual server. After moving to a set of cloud services for database, storage, CPU, and network, starting a new project took less than a day. During software development, our code is pushed to GitHub on a branch that we can preview as a web application in a browser. After a peer or manager reviews the pull request, it's merged into the main branch, which triggers automatic deployment of the code to the production service that runs the application. The GitHub deployment script and settings for Django were provided by the application service; we simply copied them and deployed. As our application service can't be used to store media and data, we use a different service to store media such as images. Since Django has been around for a long time, there is going to be a solution for almost any connectivity problem you encounter. For example, the AWS SDK for Python, Boto3, provides easy access to S3, which is how we handle storage. 4. Python Python remains one of the most popular languages. It is widely taught at all levels of education in the US, from elementary school through college. This vast talent pool makes recruitment easier, and the integration of Python into structured academic curricula leads to better programming practices. Even with no prior Python experience, people familiar with another language can become productive quickly. With no compilation step, Python promotes a very fast development workflow. The lack of compilation does come with a performance penalty, but for most projects, hardware is cheaper than labor. With a scalable cloud-based architecture, the performance of Django is rarely an issue for projects with few simultaneous connections. Although Python is dynamically typed, you can use the typing module to take advantage of type hints. Perhaps more important than the language features is the toolchain. As Python does not require compilation, the toolchain is easier to set up and maintain. Lightweight tools such as pyenv and venv can keep the version of Python and its packages consistent across the development team. As long as the PATH is set up correctly on developer workstations, development and deployment are usually free of drama. Our team uses a mix of Windows 10, Windows 11, macOS, and Ubuntu 22.04 for development. Although not required, everyone uses VS Code with a range of Python extensions. 5. Community After more than 20 years, almost all Django problems have been solved by other people. The huge size and long history of the community also help with staff changes. Most information on Django can be found with a search, either on a Q&A site or through an AI tool. For comparison, when I use other, newer frameworks, I find that I need to get my information by asking people directly, often on systems such as Discord or Slack. For Django, I usually don't need to wait for a person to answer my question, as it has usually been asked and answered by the tens of thousands of Django developers before me.
Although there are a number of free and inexpensive courses on both Django and Python, most people simply start coding and search for answers. While the Python language and Django are both under active development, most problems in small projects do not involve edge cases of the language or framework. Django Growth in the Future I'm not going to predict the future; young people are going to create it. In my company, we have an active undergraduate intern program. Of course, the interns use and like Django. However, that's part of their job. What surprises me is that they use Python and Django in their own projects outside of work. I usually expect people under 24 to use JavaScript and possibly TypeScript for both the back end and the front end of a full-stack project. Yet these young programmers, who are the future, often use Python on the back end with a combination of the Django admin panel, Django template syntax, and something like React on the front end. Ultimately, a more modern framework may work better for your project. If you like Python, then Flask or FastAPI may suit you better than Django. GraphQL with Django may be better than the tried-and-true REST protocol. However, before you dismiss Django as old and monolithic, take another look at it from a componentized perspective. It may be the fastest and cheapest way to bring your creative ideas to life. After 20 years, Django still works great.
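As a footnote to the API service point above, here is a minimal sketch of serving both a template page and a REST API for the same data, assuming the djangorestframework package is installed; the Note model and route names are hypothetical. Python
# A hypothetical app exposing one model two ways: a server-rendered page
# (Django template syntax) and a JSON API (Django REST framework).
from django.db import models
from django.views.generic import ListView
from rest_framework import routers, serializers, viewsets

class Note(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()

# Server-rendered page: renders notes/list.html with the queryset.
class NoteListView(ListView):
    model = Note
    template_name = "notes/list.html"

# REST API over the same model.
class NoteSerializer(serializers.ModelSerializer):
    class Meta:
        model = Note
        fields = ["id", "title", "body"]

class NoteViewSet(viewsets.ModelViewSet):
    queryset = Note.objects.all()
    serializer_class = NoteSerializer

# urls.py would include router.urls next to the template view's route,
# so a React or Flutter front end can consume /notes/ as JSON later.
router = routers.DefaultRouter()
router.register(r"notes", NoteViewSet)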
Flyway is a popular open-source tool for managing database migrations. It makes it easy to manage and version control the database schema for your application. Flyway supports almost all popular databases, including Oracle, SQL Server, DB2, MySQL, Amazon RDS, Aurora MySQL, MariaDB, PostgreSQL, and more. For the full list of supported databases, check the official documentation. How Flyway Migrations Work Any change to the database is called a migration. Flyway supports two types of migrations: versioned and repeatable. Versioned migrations are the most common type; they are applied exactly once, in the order they appear. Versioned migrations are used for creating, altering, and dropping tables, indexes, or foreign keys. Versioned migration files follow the naming convention [Prefix][Separator][Migration Description][Suffix], for example, V1__add_user_table.sql and V2__alter_user_table.sql. Repeatable migrations, on the other hand, are (re-)applied every time they change. Repeatable migrations are useful for managing views, stored procedures, or bulk reference data updates where the latest version should replace the previous one without considering versioning. Repeatable migrations are always applied last, after all pending versioned migrations have been executed. Repeatable migration files use naming conventions such as R__add_new_table.sql. Migrations can be written in either SQL or Java (a Java example is sketched at the end of this article). When we start the application against an empty database, Flyway first creates a schema history table (flyway_schema_history). This table is used to track the state of the database. After the flyway_schema_history table is created, Flyway scans the classpath for migration files. The migrations are then sorted by version number and applied in order. As each migration is applied, the schema history table is updated accordingly. Integrating Flyway in Spring Boot In this tutorial, we will create a Spring Boot application that manages MySQL 8 database migrations using Flyway. This example uses Java 17, Spring Boot 3.2.4, and MySQL 8.0.26. For the database operations, we will use Spring Data JPA. Install Flyway Dependencies First, add the following dependencies to your pom.xml or build.gradle file. The spring-boot-starter-data-jpa dependency is used for Spring Data Java Persistence API (JPA) with Hibernate. The mysql-connector-j is the official JDBC driver for MySQL databases; it allows your Java application to connect to a MySQL database for operations such as creating, reading, updating, and deleting records. The flyway-core dependency is essential for integrating Flyway into your project, enabling migrations and version control for your database schema. The flyway-mysql dependency adds Flyway support for MySQL databases, providing MySQL-specific functionality and optimizations for Flyway operations; it's necessary when your application uses Flyway to manage migrations on a MySQL database.
pom.xml XML
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>com.mysql</groupId>
        <artifactId>mysql-connector-j</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.flywaydb</groupId>
        <artifactId>flyway-core</artifactId>
    </dependency>
    <dependency>
        <groupId>org.flywaydb</groupId>
        <artifactId>flyway-mysql</artifactId>
    </dependency>
    <!-- Other dependencies -->
</dependencies>
Configure the Database Connection Now let us provide the database connection properties in the application.properties file. Properties
# DB properties
spring.datasource.url=jdbc:mysql://localhost:3306/flyway_demo
spring.datasource.username=root
spring.datasource.password=Passw0rd
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
# JPA
spring.jpa.show-sql=true
Create Database Changelog Files Let us now create a few database migration files inside the resources/db/migrations directory. V1__add_movies_table.sql SQL
CREATE TABLE movie (
    id bigint NOT NULL AUTO_INCREMENT,
    title varchar(255) DEFAULT NULL,
    headline varchar(255) DEFAULT NULL,
    language varchar(255) DEFAULT NULL,
    region varchar(255) DEFAULT NULL,
    thumbnail varchar(255) DEFAULT NULL,
    rating enum('G','PG','PG13','R','NC17') DEFAULT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;
V2__add_actor_table.sql SQL
CREATE TABLE actor (
    id bigint NOT NULL AUTO_INCREMENT,
    first_name varchar(255) DEFAULT NULL,
    last_name varchar(255) DEFAULT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;
V3__add_movie_actor_relations.sql SQL
CREATE TABLE movie_actors (
    actors_id bigint NOT NULL,
    movie_id bigint NOT NULL,
    PRIMARY KEY (actors_id, movie_id),
    KEY fk_ref_movie (movie_id),
    CONSTRAINT fk_ref_movie FOREIGN KEY (movie_id) REFERENCES movie (id),
    CONSTRAINT fk_ref_actor FOREIGN KEY (actors_id) REFERENCES actor (id)
) ENGINE=InnoDB;
R__create_or_replace_movie_view.sql SQL
CREATE OR REPLACE VIEW movie_view AS SELECT id, title FROM movie;
V4__insert_test_data.sql SQL
INSERT INTO movie (title, headline, language, region, thumbnail, rating)
VALUES
('Inception', 'A thief who steals corporate secrets through the use of dream-sharing technology.', 'English', 'USA', 'inception.jpg', 'PG13'),
('The Godfather', 'The aging patriarch of an organized crime dynasty transfers control of his clandestine empire to his reluctant son.', 'English', 'USA', 'godfather.jpg', 'R'),
('Parasite', 'A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. But their easy life gets complicated when their deception is threatened with exposure.', 'Korean', 'South Korea', 'parasite.jpg', 'R'),
('Amélie', 'Amélie is an innocent and naive girl in Paris with her own sense of justice. She decides to help those around her and, along the way, discovers love.', 'French', 'France', 'amelie.jpg', 'R');
-- Inserting data into the 'actor' table
INSERT INTO actor (first_name, last_name)
VALUES
('Leonardo', 'DiCaprio'),
('Al', 'Pacino'),
('Song', 'Kang-ho'),
('Audrey', 'Tautou');
-- Leonardo DiCaprio in Inception
INSERT INTO movie_actors (actors_id, movie_id) VALUES (1, 1);
-- Al Pacino in The Godfather
INSERT INTO movie_actors (actors_id, movie_id) VALUES (2, 2);
-- Song Kang-ho in Parasite
INSERT INTO movie_actors (actors_id, movie_id) VALUES (3, 3);
-- Audrey Tautou in Amélie
INSERT INTO movie_actors (actors_id, movie_id) VALUES (4, 4);
These tables are mapped to the following entity classes.
Movie.java Java
@Entity
@Data
public class Movie {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String title;
    private String headline;
    private String thumbnail;
    private String language;
    private String region;
    @Enumerated(EnumType.STRING)
    private ContentRating rating;
    @ManyToMany
    Set<Actor> actors;
}

public enum ContentRating { G, PG, PG13, R, NC17 }
Actor.java Java
@Entity
@Data
public class Actor {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id;
    String firstName;
    String lastName;
}
Configure Flyway We can control the migration process using the following properties in the application.properties file: application.properties
spring.flyway.enabled=true
spring.flyway.locations=classpath:db/migrations
spring.flyway.baseline-on-migrate=true
spring.flyway.validate-on-migrate=true
Property Use spring.flyway.enabled=true Enables or disables Flyway's migration functionality for your application. spring.flyway.validate-on-migrate=true When this property is set to true, Flyway validates the applied migrations against the migration scripts every time it runs a migration. This ensures that the migrations applied to the database match the ones available in the project. If validation fails, Flyway prevents the migration from running, which helps catch potential problems early. spring.flyway.baseline-on-migrate=true Used when you have an existing database that wasn't managed by Flyway and you want to start using Flyway to manage it. Setting this to true allows Flyway to baseline an existing database, marking it as a baseline and managing subsequent migrations. spring.flyway.locations Specifies the locations of migration scripts within your project. Run the Migrations When you start your Spring Boot application, Flyway automatically checks the db/migrations directory for any new migrations that have not yet been applied to the database and applies them in version order. ./mvnw spring-boot:run Reverse/Undo Migrations in Flyway Flyway allows you to revert migrations that were applied to the database. However, this feature requires a Flyway Teams (commercial) license. If you're using the community/free version of Flyway, the workaround is to create a new migration changelog file that undoes the changes made by the previous migration and apply it. For example: V5__delete_movie_actors_table.sql SQL
DROP TABLE movie_actors;
Now run the application to apply the V5 migration changelog to your database. Using the Flyway Maven Plugin Flyway provides a Maven plugin to manage migrations from the command line. It provides 7 goals. Goal Description flyway:baseline Baselines an existing database, excluding all migrations up to and including baselineVersion. flyway:clean Drops all database objects (tables, views, procedures, triggers, ...) in the configured schemas. The schemas are cleaned in the order specified by the schemas property. flyway:info Retrieves complete information about the migrations, including applied, pending, and current migrations, with details and status. flyway:migrate Triggers the migration of the configured database to the latest version. flyway:repair Repairs the Flyway schema history table. This removes any failed migrations on databases without DDL transactions. flyway:undo Undoes the most recently applied versioned migration (Flyway Teams only). flyway:validate Validates applied migrations against the ones resolved on the classpath.
This detects accidental changes that may prevent the schemas from being recreated exactly. To integrate the Flyway Maven plugin into your Maven project, add the flyway-maven-plugin to your pom.xml file. XML
<properties>
    <database.url>jdbc:mysql://localhost:3306/flyway_demo</database.url>
    <database.username>YOUR_DB_USER</database.username>
    <database.password>YOUR_DB_PASSWORD</database.password>
</properties>
<build>
    <plugins>
        <plugin>
            <groupId>org.flywaydb</groupId>
            <artifactId>flyway-maven-plugin</artifactId>
            <version>10.10.0</version>
            <configuration>
                <url>${database.url}</url>
                <user>${database.username}</user>
                <password>${database.password}</password>
            </configuration>
        </plugin>
        <!-- other plugins -->
    </plugins>
</build>
Now you can use the Maven goals. ./mvnw flyway:migrate Maven also allows you to define properties in the project's POM and pass their values from the command line. ./mvnw -Ddatabase.username=root -Ddatabase.password=Passw0rd flyway:migrate
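As mentioned earlier, migrations can also be written in Java instead of SQL, which helps when a change needs logic that is awkward to express in plain SQL. The following is a minimal sketch using Flyway's BaseJavaMigration API; the class name and the UPDATE statement are hypothetical examples. Java
// src/main/java/db/migration/V6__Normalize_movie_titles.java
// A hypothetical Java-based versioned migration; the class name follows the
// same V[version]__[description] convention as SQL migration files.
package db.migration;

import java.sql.Statement;
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

public class V6__Normalize_movie_titles extends BaseJavaMigration {
    @Override
    public void migrate(Context context) throws Exception {
        // Flyway supplies a JDBC connection bound to the migration transaction.
        try (Statement statement = context.getConnection().createStatement()) {
            statement.execute("UPDATE movie SET title = TRIM(title)");
        }
    }
}
Flyway discovers such classes on the classpath (by default in the db/migration package) and orders them together with the SQL-based migrations.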
Reactive programming has become increasingly popular in modern software development, especially for building scalable and resilient applications. Kotlin, with its expressive syntax and powerful features, has gained traction among developers for building reactive systems. In this article, we’ll delve into reactive programming using Kotlin Coroutines with Spring Boot, comparing it with WebFlux, a more complex alternative for reactive programming in the Spring ecosystem. Understanding Reactive Programming Reactive programming is a paradigm that deals with asynchronous data streams and the propagation of changes. It focuses on processing streams of data and reacting to changes as they occur. Reactive systems are inherently responsive, resilient, and scalable, making them well-suited for modern applications that need to handle high concurrency and real-time data. Kotlin Coroutines Kotlin Coroutines provide a way to write asynchronous, non-blocking code in a sequential manner, making asynchronous programming easier to understand and maintain. Coroutines allow developers to write asynchronous code in a more imperative style, resembling synchronous code, which can lead to cleaner and more readable code. Kotlin Coroutines vs. WebFlux Spring Boot is a popular framework for building Java and Kotlin-based applications. It provides a powerful and flexible programming model for developing reactive applications. Spring Boot’s support for reactive programming comes in the form of Spring WebFlux, which is built on top of Project Reactor, a reactive library for the JVM. Both Kotlin Coroutines and WebFlux offer solutions for building reactive applications, but they differ in their programming models and APIs. 1. Programming Model Kotlin Coroutines: Kotlin Coroutines use suspend functions and coroutine builders like launch and async to define asynchronous code. Coroutines provide a sequential, imperative style of writing asynchronous code, making it easier to understand and reason about. WebFlux: WebFlux uses a reactive programming model based on the Reactive Streams specification. It provides a set of APIs for working with asynchronous data streams, including Flux and Mono, which represent streams of multiple and single values, respectively. 2. Error Handling Kotlin Coroutines: Error handling in Kotlin Coroutines is done using standard try-catch blocks, making it similar to handling exceptions in synchronous code. WebFlux: WebFlux provides built-in support for error handling through operators like onErrorResume and onErrorReturn, allowing developers to handle errors in a reactive manner. 3. Integration With Spring Boot Kotlin Coroutines: Kotlin Coroutines can be seamlessly integrated with Spring Boot applications using the spring-boot-starter-web dependency and the kotlinx-coroutines-spring library. WebFlux: Spring Boot provides built-in support for WebFlux, allowing developers to easily create reactive RESTful APIs and integrate with other Spring components. Show Me the Code The Power of the Reactive Approach Over the Imperative Approach The provided code snippets illustrate the implementation of a straightforward scenario using both the imperative and reactive paradigms. The scenario involves two stages, each taking 1 second to complete. In the imperative approach, the service responds in 2 seconds because it executes the two stages sequentially. Conversely, in the reactive approach, the service responds in 1 second by executing the two stages in parallel.
However, even in this simple scenario, the reactive solution exhibits some complexity, which can escalate significantly in real-world business scenarios. Here’s the Kotlin code for the base service: Kotlin
@Service
class HelloService {

    fun getGreetWord(): Mono<String> =
        Mono.fromCallable {
            Thread.sleep(1000)
            "Hello"
        }

    fun formatName(name: String): Mono<String> =
        Mono.fromCallable {
            Thread.sleep(1000)
            name.replaceFirstChar { it.uppercase() }
        }
}
Imperative Solution Kotlin
fun greet(name: String): String {
    val greet = helloService.getGreetWord().block()
    val formattedName = helloService.formatName(name).block()
    return "$greet $formattedName"
}
Reactive Solution Kotlin
fun greet(name: String): Mono<String> {
    val greet = helloService.getGreetWord().subscribeOn(Schedulers.boundedElastic())
    val formattedName = helloService.formatName(name).subscribeOn(Schedulers.boundedElastic())
    return greet
        .zipWith(formattedName)
        .map { "${it.t1} ${it.t2}" }
}
In the imperative solution, the greet function awaits the completion of the getGreetWord and formatName methods sequentially before returning the concatenated result. In the reactive solution, the greet function uses reactive programming constructs to execute the tasks concurrently, using the zipWith operator to combine the results once both stages are complete. Simplifying Reactivity With Kotlin Coroutines To simplify the complexity inherent in reactive programming, Kotlin’s coroutines provide an elegant solution. Below is a Kotlin coroutine example implementing the same scenario discussed earlier: Kotlin
@Service
class CoroutineHelloService {

    suspend fun getGreetWord(): String {
        delay(1000)
        return "Hello"
    }

    suspend fun formatName(name: String): String {
        delay(1000)
        return name.replaceFirstChar { it.uppercase() }
    }

    fun greet(name: String) = runBlocking {
        val greet = async { getGreetWord() }
        val formattedName = async { formatName(name) }
        "${greet.await()} ${formattedName.await()}"
    }
}
In this snippet, we leverage Kotlin coroutines to simplify the reactive complexity. The CoroutineHelloService class defines the suspend functions getGreetWord and formatName, which simulate asynchronous operations using delay. The greet function reads like imperative code: within a runBlocking coroutine builder, it starts both suspend functions concurrently with async and awaits their results, combining them into a single greeting string. Conclusion In this exploration, we compared reactive programming using Kotlin Coroutines with Spring Boot to WebFlux. Kotlin Coroutines offer a simpler, more sequential approach, while WebFlux, based on Reactive Streams, provides a comprehensive set of APIs with a steeper learning curve. The code examples demonstrated how reactive solutions can outperform imperative ones by leveraging parallel execution, and Kotlin Coroutines emerged as a concise alternative, seamlessly integrated with Spring Boot, that simplifies reactive programming complexities. In summary, Kotlin Coroutines excel in simplicity and integration, making them a compelling choice for developers aiming to streamline reactive programming in Spring Boot applications.