Integration refers to the process of combining software parts (or subsystems) into one system. An integration framework is a lightweight utility that provides libraries and standardized methods to coordinate messaging among different technologies. As software connects the world in increasingly complex ways, integration makes it all possible by facilitating app-to-app communication. Learn more about this necessity for modern software development by keeping a pulse on industry topics such as integrated development environments, API best practices, service-oriented architecture, enterprise service buses, communication architectures, integration testing, and more.
Strengthening Cloud Environments Through Python and SQL Integration
Developing Minimal APIs Quickly With Open Source ASP.NET Core
In this article, I would like to describe an approach to writing tests with a clear division into separate stages, each performing its specific role. This facilitates the creation of tests that are easier to read, understand, and maintain. The discussion will focus on using the Arrange-Act-Assert methodology for integration testing in the Spring Framework, with mocking of the HTTP requests to external resources that are made while the tested code executes. The tests under consideration are written using the Spock Framework in the Groovy language. MockRestServiceServer will be used as the mocking mechanism. There will also be a few words about WireMock. Problem Description When studying how to write integration tests for Spring, I often referred to materials on the topic. Examples for MockRestServiceServer mostly described an approach with the declaration of expectations as follows: Expected URI Number of requests to the expected URI Expectations for the structure and content of the request body Response to the request The code looked something like this: Java @Test public void testWeatherRequest() { mockServer.expect(once(), requestTo("https://external-weather-api.com/forecast")) .andExpect(method(HttpMethod.POST)) .andExpect(jsonPath("$.field1", equalTo("value1"))) .andExpect(jsonPath("$.field2", equalTo("value2"))) .andExpect(jsonPath("$.field3", equalTo("value3"))) .andRespond(withSuccess('{"result": "42"}', MediaType.APPLICATION_JSON)); weatherService.getForecast("London") mockServer.verify() assert .. assert .. } When applying this approach, I encountered a number of difficulties: Ambiguity in determining the cause of an AssertionError from the log text - the log text is the same for different scenarios: The HTTP call code is missing/not executed according to business logic. The HTTP call code is executed with an error. The HTTP call code is executed correctly, but there is an error in the mock description. Difficulty in determining the scope of the tested states due to their dispersion throughout the test code. Formally, the result verification is carried out at the end of the test (mockServer.verify()), but the verification assertions regarding the composition and structure of the request are described at the beginning of the test (as part of creating the mock). At the same time, verification assertions not related to the mock are placed at the end of the test. Important clarification: Using RequestMatcher to isolate mocks among many requests seems like the right solution. Proposed Solution A clear division of the test code into separate stages, according to the Arrange-Act-Assert pattern. Arrange-Act-Assert Arrange-Act-Assert is a widely used pattern in writing tests, especially in unit testing. Let's take a closer look at each of these steps: Arrange (Preparation) At this stage, you set up the test environment. This includes initializing objects, creating mocks, setting up necessary data, etc. The goal of this step is to prepare everything needed for the execution of the action being tested. Act (Execution) Here you perform the action you want to test. This could be a method call or a series of actions leading to a certain state or result to be tested. Assert (Result Verification) At the final stage, you check the results of the action. This includes assertions about the state of objects, returned values, changes in the database, messages sent, etc. 
The goal of this step is to ensure that the tested action has produced the expected result. Demonstration Scenarios The business logic of the service for which the tests will be provided can be described as follows: Gherkin given: The weather service provides information that the weather in city A equals B when: We request weather data from the service for city A then: We receive B Sequence Diagram Example Implementation for MockRestServiceServer Before Proposed Changes Tests for the above scenario will be described using MockRestServiceServer. Difficulty in Determining the Scope of Tested States Due to Their Dispersion Throughout the Test Code Groovy def "Forecast for provided city London is 42"() { setup: // (1) mockServer.expect(once(), requestTo("https://external-weather-api.com/forecast")) // (2) .andExpect(method(HttpMethod.POST)) .andExpect(jsonPath('$.city', Matchers.equalTo("London"))) // (3) .andRespond(withSuccess('{"result": "42"}', MediaType.APPLICATION_JSON)); // (4) when: // (5) def forecast = weatherService.getForecast("London") then: // (6) forecast == "42" // (7) mockServer.verify() // (8) } Setup stage: describing the mock Indicating that exactly one call is expected to https://external-weather-api.com Specifying expected request parameters Describing the response to return Execution stage, where the main call to get the weather for the specified city occurs Verification stage: Here, mockServer.verify() is also called to check the request (see item 3). Verification assertion regarding the returned value Calling to verify the mock's state Here we can observe the problem described earlier as "difficulty in determining the scope of tested states due to their dispersion throughout the test code" - some of the verification assertions are in the then block, some in the setup block. Ambiguity in Determining the Causes of AssertionError To demonstrate the problem, let's model different error scenarios in the code. Below are the situations and corresponding error logs. Scenario 1 - Passed an unknown city name: def forecast = weatherService.getForecast("Unknown") Java java.lang.AssertionError: No further requests expected: HTTP POST https://external-weather-api.com 0 request(s) executed. at org.springframework.test.web.client.AbstractRequestExpectationManager.createUnexpectedRequestError(AbstractRequestExpectationManager.java:193) Scenario 2: Incorrect URI declaration for the mock; for example, mockServer.expect(once(), requestTo("https://foo.com")) Java java.lang.AssertionError: No further requests expected: HTTP POST https://external-weather-api.com 0 request(s) executed. at org.springframework.test.web.client.AbstractRequestExpectationManager.createUnexpectedRequestError(AbstractRequestExpectationManager.java:193) Scenario 3: No HTTP calls in the code Java java.lang.AssertionError: Further request(s) expected leaving 1 unsatisfied expectation(s). 0 request(s) executed. The main observation: All errors are similar, and the stack trace is more or less the same. 
Example Implementation for MockRestServiceServer With Proposed Changes Ease of Determining the Scope of Tested States Thanks to Their Consolidation in the Test Code Groovy def "Forecast for provided city London is 42"() { setup: // (1) def requestCaptor = new RequestCaptor() mockServer.expect(manyTimes(), requestTo("https://external-weather-api.com")) // (2) .andExpect(method(HttpMethod.POST)) .andExpect(requestCaptor) // (3) .andRespond(withSuccess('{"result": "42"}', MediaType.APPLICATION_JSON)); // (4) when: // (5) def forecast = weatherService.getForecast("London") then: // (6) forecast == "42" requestCaptor.times == 1 // (7) requestCaptor.entity.city == "London" // (8) requestCaptor.headers.get("Content-Type") == ["application/json"] } #3: Data capture object #7: Verification assertion regarding the number of calls to the URI #8: Verification assertion regarding the composition of the request to the URI In this implementation, we can see that all the verification assertions are in the then block. Clarity in Identifying the Causes of AssertionError To demonstrate the difference, let's model the same error scenarios in the code. Below are the situations and corresponding error logs. Scenario 1: An unknown city name was provided def forecast = weatherService.getForecast("Unknown") Groovy requestCaptor.entity.city == "London" | | | | | | | false | | | 5 differences (28% similarity) | | | (Unk)n(-)o(w)n | | | (Lo-)n(d)o(-)n | | Unknown | [city:Unknown] <pw.avvero.spring.sandbox.weather.RequestCaptor@6f77917c times=1 bodyString={"city":"Unknown"} entity=[city:Unknown] headers=[Accept:[application/json, application/*+json], Content-Type:[application/json], Content-Length:[18]]> Scenario 2: Incorrect URI declaration for the mock; for example, mockServer.expect(once(), requestTo("https://foo.com")) Groovy java.lang.AssertionError: No further requests expected: HTTP POST https://external-weather-api.com 0 request(s) executed. Scenario 3: No HTTP calls in the code Groovy Condition not satisfied: requestCaptor.times == 1 | | | | 0 false <pw.avvero.spring.sandbox.weather.RequestCaptor@474a63d9 times=0 bodyString=null entity=null headers=[:]> Using WireMock WireMock provides the ability to describe verifiable expressions in the Assert block. Groovy def "Forecast for provided city London is 42"() { setup: // (1) wireMockServer.stubFor(post(urlEqualTo("/forecast")) // (2) .willReturn(aResponse() // (4) .withBody('{"result": "42"}') .withStatus(200) .withHeader("Content-Type", "application/json"))) when: // (5) def forecast = weatherService.getForecast("London") then: // (6) forecast == "42" wireMockServer.verify(postRequestedFor(urlEqualTo("/forecast")) .withRequestBody(matchingJsonPath('$.city', equalTo("London")))) // (7) } The approach described above can also be used here by introducing the WiredRequestCaptor class. Groovy def "Forecast for provided city London is 42"() { setup: StubMapping forecastMapping = wireMockServer.stubFor(post(urlEqualTo("/forecast")) .willReturn(aResponse() .withBody('{"result": "42"}') .withStatus(200) .withHeader("Content-Type", "application/json"))) def requestCaptor = new WiredRequestCaptor(wireMockServer, forecastMapping) when: def forecast = weatherService.getForecast("London") then: forecast == "42" requestCaptor.times == 1 requestCaptor.body.city == "London" } This allows us to simplify expressions and enhance the idiomaticity of the code, making the tests more readable and easier to maintain. 
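A note on the capturing class itself: the article relies on a RequestCaptor (and, for WireMock, a WiredRequestCaptor) from the author's demo project without showing its source. Purely as a hedged illustration of the idea, a minimal RequestCaptor could be a Spring RequestMatcher that records each request instead of asserting on it, so that all checks can move to the then block; the field names below mirror the test output shown earlier, while the implementation details are assumptions rather than the author's actual code.
Java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.http.HttpHeaders;
import org.springframework.http.client.ClientHttpRequest;
import org.springframework.mock.http.client.MockClientHttpRequest;
import org.springframework.test.web.client.RequestMatcher;

import java.io.IOException;
import java.util.Map;

// Hypothetical sketch: captures request data for later assertions instead of failing inside the mock.
public class RequestCaptor implements RequestMatcher {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public int times = 0;              // how many times the mocked URI was called
    public String bodyString;          // raw request body
    public Map<String, Object> entity; // request body parsed as JSON
    public HttpHeaders headers;        // request headers

    @Override
    public void match(ClientHttpRequest request) throws IOException, AssertionError {
        times++;
        headers = request.getHeaders();
        // MockRestServiceServer hands over a MockClientHttpRequest, whose body is readable as a string.
        bodyString = ((MockClientHttpRequest) request).getBodyAsString();
        if (bodyString != null && !bodyString.isEmpty()) {
            entity = MAPPER.readValue(bodyString, Map.class);
        }
    }
}
Because the matcher never throws, a missing call, a wrong body, or a wrong header each surface as a distinct, readable assertion failure in the then block.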
Conclusion Throughout this article, I have dissected the stages of testing HTTP requests in Spring, using the Arrange-Act-Assert methodology and mocking tools such as MockRestServiceServer and WireMock. The primary goal was to demonstrate how clearly dividing the test into separate stages significantly enhances readability, understanding, and maintainability. I highlighted the problems associated with the ambiguity of error determination and the difficulty of defining the scope of tested states and presented ways to solve them through a more structured approach to test writing. This approach is particularly important in complex integration tests, where every aspect is critical to ensuring the accuracy and reliability of the system. Furthermore, I showed how the use of tools like RequestCaptor and WiredRequestCaptor simplifies the test-writing process and improves their idiomaticity and readability, thereby facilitating easier support and modification. In conclusion, I want to emphasize that the choice of testing approach and corresponding tools should be based on specific tasks and the context of development. The approach to testing HTTP requests in Spring presented in this article is intended to assist developers facing similar challenges. The link to the project repository with demonstration tests can be found here. Thank you for your attention to the article, and good luck in your pursuit of writing effective and reliable tests!
In a previous article, we used Ollama with LangChain and SingleStore. LangChain provided an efficient and compact solution for integrating Ollama with SingleStore. However, what if we were to remove LangChain? In this article, we’ll demonstrate an example of using Ollama with SingleStore without relying on LangChain. We’ll see that while we can achieve the same results described in the previous article, the amount of code increases, requiring us to manage more of the plumbing that LangChain normally handles. The notebook file used in this article is available on GitHub. Introduction From the previous article, we’ll follow the same steps to set up our test environment as described in these sections: Introduction Use a Virtual Machine or venv. Create a SingleStoreDB Cloud account Use Ollama Demo Group as the Workspace Group Name and ollama-demo as the Workspace Name. Make a note of the password and host name. Temporarily allow access from anywhere by configuring the firewall under Ollama Demo Group > Firewall. Create a Database CREATE DATABASE IF NOT EXISTS ollama_demo; Install Jupyter pip install notebook Install Ollama curl -fsSL https://ollama.com/install.sh | sh Environment Variable export SINGLESTOREDB_URL="admin:<password>@<host>:3306/ollama_demo" Replace <password> and <host> with the values for your environment. Launch Jupyter jupyter notebook Fill out the Notebook First, some packages: Shell !pip install ollama numpy pandas sqlalchemy-singlestoredb --quiet --no-warn-script-location Next, we’ll import some libraries: Python import ollama import os import numpy as np import pandas as pd from sqlalchemy import create_engine, text We’ll create embeddings using all-minilm (45 MB at the time of writing): Python ollama.pull("all-minilm") Example output: Plain Text {'status': 'success'} For our LLM we’ll use llama2 (3.8 GB at the time of writing): Python ollama.pull("llama2") Example output: Plain Text {'status': 'success'} Next, we’ll use the example text from the Ollama website: Python documents = [ "Llamas are members of the camelid family meaning they're pretty closely related to vicuñas and camels", "Llamas were first domesticated and used as pack animals 4,000 to 5,000 years ago in the Peruvian highlands", "Llamas can grow as much as 6 feet tall though the average llama between 5 feet 6 inches and 5 feet 9 inches tall", "Llamas weigh between 280 and 450 pounds and can carry 25 to 30 percent of their body weight", "Llamas are vegetarians and have very efficient digestive systems", "Llamas live to be about 20 years old, though some only live for 15 years and others live to be 30 years old" ] df_data = [] for doc in documents: response = ollama.embeddings( model = "all-minilm", prompt = doc ) embedding = response["embedding"] embedding_array = np.array(embedding).astype(np.float32) df_data.append({"content": doc, "vector": embedding_array}) df = pd.DataFrame(df_data) dimensions = len(df.at[0, "vector"]) We’ll create the embeddings with all-minilm and iterate through each document to build up the content for a Pandas DataFrame. Additionally, we’ll convert the embeddings to a 32-bit format as this is SingleStore’s default for the VECTOR data type. Lastly, we’ll determine the number of embedding dimensions for the first document in the Pandas DataFrame. 
Next, we’ll create a connection to our SingleStore instance: Python connection_url = "singlestoredb://" + os.environ.get("SINGLESTOREDB_URL") db_connection = create_engine(connection_url) Now we’ll create a table with the vector column using the dimensions we previously determined: Python query = text(""" CREATE TABLE IF NOT EXISTS pandas_docs ( id BIGINT AUTO_INCREMENT NOT NULL, content LONGTEXT, vector VECTOR(:dimensions) NOT NULL, PRIMARY KEY(id) ); """) with db_connection.connect() as conn: conn.execute(query, {"dimensions": dimensions}) We’ll now write the Pandas DataFrame to the table: Python df.to_sql( "pandas_docs", con = db_connection, if_exists = "append", index = False, chunksize = 1000 ) Example output: Plain Text 6 We'll now create an index to match the one we created in the previous article: Python query = text(""" ALTER TABLE pandas_docs ADD VECTOR INDEX (vector) INDEX_OPTIONS '{ "metric_type": "EUCLIDEAN_DISTANCE" }'; """) with db_connection.connect() as conn: conn.execute(query) We’ll now ask a question, as follows: Python prompt = "What animals are llamas related to?" response = ollama.embeddings( prompt = prompt, model = "all-minilm" ) embedding = response["embedding"] embedding_array = np.array(embedding).astype(np.float32) query = text(""" SELECT content FROM pandas_docs ORDER BY vector <-> :embedding_array ASC LIMIT 1; """) with db_connection.connect() as conn: results = conn.execute(query, {"embedding_array": embedding_array}) row = results.fetchone() data = row[0] print(data) We’ll convert the prompt to embeddings, ensure that the embeddings are converted to a 32-bit format, and then execute the SQL query which uses the infix notation <-> for Euclidean Distance. Example output: Plain Text Llamas are members of the camelid family meaning they're pretty closely related to vicuñas and camels Next, we’ll use the LLM, as follows: Python output = ollama.generate( model = "llama2", prompt = f"Using this data: {data}. Respond to this prompt: {prompt}" ) print(output["response"]) Example output: Plain Text Llamas are members of the camelid family, which means they are closely related to other animals such as: 1. Vicuñas: Vicuñas are small, wild camelids that are native to South America. They are known for their soft, woolly coats and are considered an endangered species due to habitat loss and poaching. 2. Camels: Camels are large, even-toed ungulates that are native to Africa and the Middle East. They are known for their distinctive humps on their backs, which store water and food for long periods of time. Both llamas and vicuñas are classified as members of the family Camelidae, while camels are classified as belonging to the family Dromedaryae. Despite their differences in size and habitat, all three species share many similarities in terms of their physical characteristics and behavior. Summary In this article, we've replicated the steps we followed in the previous article and achieved similar results. However, we had to write a series of SQL statements and manage several steps that LangChain would have handled for us. Additionally, there may be more time and cost involved in maintaining the code base long-term compared to the LangChain solution. Using LangChain instead of writing custom code for database access provides several advantages, such as efficiency, scalability, and reliability. LangChain offers a library of prebuilt modules for database interaction, reducing development time and effort. 
Developers can use these modules to quickly implement various database operations without starting from scratch. LangChain abstracts many of the complexities involved in database management, allowing developers to focus on high-level tasks rather than low-level implementation details. This improves productivity and time-to-market for database-driven applications. LangChain has a large, active, and growing community of developers, is available on GitHub, and provides extensive documentation and examples. In summary, LangChain offers developers a powerful, efficient, and reliable platform for building database-driven applications, enabling them to focus on business problems using higher-level abstractions rather than reinventing the wheel with custom code. Comparing the example in this article with the example we used in the previous article, we can see the benefits.
Hi Muleys! In this post, we will be learning about basic and useful functions of DataWeave 2.0 with quick examples. The list of functions used in this article is selected from the huge set of functions available in DataWeave (developer's choice): Join (join) Left Join (leftJoin) Outer Join (outerJoin) Nested join with map operator Update as Function Update as Operator Max By (maxBy) Min By (minBy) Filtering an array (filter) Map an array (map) DistinctBy an array (distinctBy) GroupBy an array (groupBy) Reduce an array (reduce) Flatten an array (flatten) We may or may not have used these DataWeave functions in our daily integrations. Let's see the examples below for each function. 1. Join The join function behaves similarly to a SQL database JOIN. The join function combines elements of two arrays by matching two ID criteria for the same index in both arrays. The left and right arrays must be arrays of objects. Importing the core::Arrays module is required. Ignores unmatched objects DataWeave Code Plain Text %dw 2.0 output application/json import * from dw::core::Arrays var a = [{"Name": "Ram","Billing City": "BLR","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 1000},{"Name": "Max","Billing City": "NY","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 2000}] var b = [{"Name": "Shyam","Billing City": "HYD","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 3000}] --- //join(a, b, (emp) -> emp."Billing Country", (loc)-> loc."Billing Country") //join(a,b, (a)-> a."Billing Country", (b)-> b."Billing Country") join(a,b, (x)-> x."Billing Country", (y)-> y."Billing Country") Output: JSON [ { "l": { "Name": "Ram", "Billing City": "BLR", "Billing Country": "India", "Message": "Hello world!", "Type": "Account", "Allowance": 1000 }, "r": { "Name": "Shyam", "Billing City": "HYD", "Billing Country": "India", "Message": "Hello world!", "Type": "Account", "Allowance": 3000 } } ] 2. Left Join All the joined objects are returned. Importing the core::Arrays module is required. Any unmatched left elements are also added. DataWeave Code Plain Text %dw 2.0 output application/json import * from dw::core::Arrays var a = [{"Name": "Ram","Billing City": "BLR","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 1000},{"Name": "Max","Billing City": "NY","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 2000}] var b = [{"Name": "Shyam","Billing City": "HYD","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 3000}] --- leftJoin(a,b, (x)-> x."Billing Country", (y)-> y."Billing Country") Output: JSON [ { "l": { "Name": "Ram", "Billing City": "BLR", "Billing Country": "India", "Message": "Hello world!", "Type": "Account", "Allowance": 1000 }, "r": { "Name": "Shyam", "Billing City": "HYD", "Billing Country": "India", "Message": "Hello world!", "Type": "Account", "Allowance": 3000 } }, { "l": { "Name": "Max", "Billing City": "NY", "Billing Country": "USA", "Message": "Hello world!!", "Type": "Account", "Allowance": 2000 } } ] 3. Outer Join All the joined objects are returned. Importing the core::Arrays module is required. Any unmatched left or right elements are also added. 
DataWeave Code Plain Text %dw 2.0 output application/json import * from dw::core::Arrays var a = [{"Name": "Ram","Billing City": "BLR","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 1000},{"Name": "Max","Billing City": "NY","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 2000}] var b = [{"Name": "Shyam","Billing City": "HYD","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 3000}] --- outerJoin(a,b, (x)-> x."Billing Country", (y)-> y."Billing Country") Output: JSON [ { "l": { "Name": "Ram", "Billing City": "BLR", "Billing Country": "India", "Message": "Hello world!", "Type": "Account", "Allowance": 1000 }, "r": { "Name": "Shyam", "Billing City": "HYD", "Billing Country": "India", "Message": "Hello world!", "Type": "Account", "Allowance": 3000 } }, { "l": { "Name": "Max", "Billing City": "NY", "Billing Country": "USA", "Message": "Hello world!!", "Type": "Account", "Allowance": 2000 } } ] 4. Nested Join With Map Operator Use the map function to iterate over each joined object. Importing the core::Arrays function is required. DataWeave Code Plain Text %dw 2.0 output application/json import * from dw::core::Arrays var a = [{"Name": "Ram","Billing City":"BLR","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 1000},{"Name": "Max","Billing City": "NY","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 2000}] var b = [{"Name": "Shyam","Billing City":"HYD","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 3000}] --- (join(a,b, (x)-> x."Billing Country", (y)-> y."Billing Country")) map { "info": $.l ++ $.r - "Billing Country" } Output: JSON [ { "info": { "Name": "Ram", "Billing City": "BLR", "Billing Country": "India", "Message": "Hello world!", "Type": "Account", "Allowance": 1000, "Name": "Shyam", "Billing City": "HYD", "Message": "Hello world!", "Type": "Account", "Allowance": 3000 } } ] 5. Update as Function For Fieldname This update function updates a field in an object with the specified string value. The function returns a new object with the specified field and value. Introduced in DataWeave version 2.2.2 Importing the util::Values function is required. DataWeave Code Plain Text %dw 2.0 output application/json import * from dw::core::Arrays import * from dw::util::Values var a = [{"Name": "Ram","Billing City": "BLR","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 1000},{"Name": "Max","Billing City": "NY","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 2000}] var b = [{"Name": "Shyam","Billing City": "HYD","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 3000}] fun mapping() = (if (isEmpty(a)) b else a) var c = {"element": mapping()} --- c update "element" with "abc" //string Output: JSON { "element": "abc" } For Index Updates an array index with the specified value This update function returns a new array that changes the value of the specified index. Introduced in DataWeave version 2.2.2 Importing the util::Values function is required. 
DataWeave Code Plain Text %dw 2.0 output application/json import * from dw::core::Arrays import * from dw::util::Values var a = [{"Name": "Ram","Billing City": "BLR","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 1000},{"Name": "Max","Billing City": "NY","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 2000}] var b = [{"Name": "Shyam","Billing City": "HYD","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 3000}] fun mapping() = (if (isEmpty(a)) b else a) var c = {"element": mapping()} var d = [1, true, 2, 3, false] --- d update 2 with 5 //index Output: JSON [ 1, true, 5, 3, false ] 6. Update as Operator This update operator updates a specific field value with a new value. This feature adds an easy way to update single values in nested data structures without requiring an understanding of functional recursion. No extra dw libraries are required. DataWeave Code Plain Text %dw 2.0 output application/json var a = [{"Name": "Ram","Billing City": "BLR","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 1000},{"Name": "Max","Billing City": "NY","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 2000}] var b = [{"Name": "Shyam","Billing City": "HYD","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 3000}] fun mapping() = (if (isEmpty(a)) b else a) var c = {"element": mapping()} var d = [1, true, 2, 3, false] --- c update { case element at .element -> if (element == a) "Max" else "Mule" } Output: JSON { "element": "Max" } 7. Max By Iterates over an array and returns the highest value of comparable elements from it. The items must be of the same type. maxBy throws an error if they are not, and the function returns null if the array is empty. DataWeave Code Plain Text %dw 2.0 output application/json var a = [{"Name": "Ram","Billing City": "BLR","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 1000},{"Name": "Max","Billing City": "NY","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 2000}] var b = [{"Name": "Shyam","Billing City": "HYD","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 3000}] fun mapping() = (if (isEmpty(a)) b else a) var c = {"element": mapping()} --- a maxBy $.Allowance Output: JSON { "Name": "Max", "Billing City": "NY", "Billing Country": "USA", "Message": "Hello world!!", "Type": "Account", "Allowance": 2000 } 8. Min By Iterates over an array to return the lowest value of comparable elements from it. The items need to be of the same type. minBy throws an error if they are not, and it returns null when the array is empty. 
DataWeave Code Plain Text %dw 2.0 output application/json var a = [{"Name": "Ram","Billing City": "BLR","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 1000},{"Name": "Max","Billing City": "NY","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 2000}] var b = [{"Name": "Shyam","Billing City": "HYD","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 3000}] fun mapping() = (if (isEmpty(a)) b else a) var c = {"element": mapping()} --- a minBy $.Allowance Output: JSON { "Name": "Ram", "Billing City": "BLR", "Billing Country": "India", "Message": "Hello world!", "Type": "Account", "Allowance": 1000 } Input payload (common for all functions below) JSON [{"Name": "Ram","Billing City": "BLR","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 1000}, {"Name": "Max","Billing City": "NY","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 2000}, {"Name": "Shyam","Billing City": "HYD","Billing Country": "India","Message": "Hello world!","Type": "Account","Allowance": 3000}, {"Name": "John","Billing City": "FL","Billing Country": "USA","Message": "Hello world!!","Type": "Account","Allowance": 4000}] 9. Filtering an Array (filter) To filter the data based on the condition. DataWeave Code Plain Text %dw 2.0 output application/json --- payload filter ((item, index) -> item."Billing Country" == "India" ) Output: JSON [ { "Name": "Ram", "Billing City": "BLR", "Billing Country": "India", "Message": "Hello world!", "Type": "Account" }, { "Name": "Shyam", "Billing City": "HYD", "Billing Country": "India", "Message": "Hello world!", "Type": "Account" } ] 10. Map an Array (map) Transforming every item in an array DataWeave Code Plain Text %dw 2.0 output application/json --- payload map ((item, index) -> {"Cities": if (item."Billing Country" == "USA") "USA" else "Others"} ) Output: JSON [ { "Cities": "Others" }, { "Cities": "USA" }, { "Cities": "Others" }, { "Cities": "USA" } ] 11. DistinctBy an Array (distinctBy) Remove duplicate items from an Array. DataWeave Code Plain Text %dw 2.0 output application/json --- payload distinctBy ((item, index) -> item."Billing Country" ) Output: JSON [ { "Name": "Ram", "Billing City": "BLR", "Billing Country": "India", "Message": "Hello world!", "Type": "Account" }, { "Name": "Max", "Billing City": "NY", "Billing Country": "USA", "Message": "Hello world!!", "Type": "Account" } ] 12. GroupBy an Array (groupBy) Grouping together items in an array based on some value DataWeave Code Plain Text %dw 2.0 output application/json --- payload groupBy ((item, index) -> item."Billing Country" ) Output: JSON { "India": [ { "Name": "Ram", "Billing City": "BLR", "Billing Country": "India", "Message": "Hello world!", "Type": "Account" }, { "Name": "Shyam", "Billing City": "HYD", "Billing Country": "India", "Message": "Hello world!", "Type": "Account" } ], "USA": [ { "Name": "Max", "Billing City": "NY", "Billing Country": "USA", "Message": "Hello world!!", "Type": "Account" }, { "Name": "John", "Billing City": "FL", "Billing Country": "USA", "Message": "Hello world!!", "Type": "Account" } ] } 13. Reduce an Array (reduce) It can be used to transform an array to any other type. DataWeave Code Plain Text %dw 2.0 output application/json --- payload."Allowance" reduce ((item, accumulator) -> (item + accumulator)) Output: Plain Text 10000 14. 
Flatten an Array (flatten) DataWeave Code Plain Text %dw 2.0 output application/json --- flatten (payload.Name) Output: JSON [ "Ram", "Max", "Shyam", "John" ] Conclusion As MuleSoft developers, we use DataWeave code almost daily in our integrations. The Array functions and examples mentioned above can help us achieve our desired outputs easily. Happy learning!
In this blog, I would like to bring out the differences between Anypoint API Experience Hub (AEH), Anypoint API Community Manager (ACM), and Anypoint Exchange and also, let you know how they are related. I will take you through the key differences and commonalities of all three platforms. Introduction MuleSoft offers several platforms designed to enhance the management, discovery, and utilization of APIs, each serving different aspects of API strategy and community engagement. Understanding the differences between Anypoint API Experience Hub (AEH), Anypoint API Community Manager (ACM), and Anypoint Exchange can help you determine which platform or combination of platforms will best suit your organization’s needs. Anypoint API Experience Hub (AEH) API Experience Hub (AEH) is designed to improve the API consumption experience by providing a comprehensive portal where developers can discover, learn about, and test APIs. It serves as a central repository and collaborative space for all stakeholders involved in the API lifecycle. It focuses on providing a customized portal (curated experience) that improves how developers interact with APIs, facilitating easier integration and adoption. Key Features Build personalized developer portals with clicks, not code: Use out-of-the-box templates to easily design and publish developer portals from a single place in the Anypoint Platform. Track consumption and quality metrics: Manage API investments by observing community engagement with APIs to continuously optimize and improve performance. Empower developers and partners to collaborate: Provide easy and self-service access to articles, FAQs, and the collective wisdom of the community to support consumers at every step. Scale API portals and create digital marketplaces: Use advanced portal builder capabilities, powered by Salesforce Experience Cloud, to customize and scale portals with engagement, community, and forum elements. Build digital marketplaces to monetize and get more value from API investments. Anypoint API Community Manager (ACM) API Community Manager (ACM) is aimed at turning the API portal into a community-focused tool where API users and developers can engage, get support, and exchange ideas. It emphasizes building a strong community around an organization’s APIs. With ACM, you can build and operate communities around your APIs for developers and partners, both inside and outside of your organization, who write applications that use your APIs. Key Features Out-of-the-box themes: Set up new digital experiences complete with API product documentation, news, events, blogs, forums, onboarding, and support in no time, with a prebuilt template and themes you can customize down to the pixel. Personalized portals: Tailor experiences by serving specific API products, news, blogs, events based on geography, referring domains, and more. Reconfigure and reuse elements to create new experiences for target partners or events, such as hackathons. Let users select their preferred language using the language picker. Interactive documentation: Make your consumers successful quickly by providing a searchable API product catalog and interactive documentation — complete with videos, code snippets, tutorials, and an auto-generated mocking service. Get the API console and associated API resources automatically from Anypoint Exchange. Forums and cases: Create a space for API consumers to engage with peers, developer evangelists, and API product teams through developer forums, chat, and support case management. 
Enable users to publish answers to a public knowledge base, or communicate privately. Engagement analytics: Measure and analyze API program metrics, track ecosystem engagement with content, and identify evangelists who actively engage peers through forums — all with preconfigured and customizable dashboards. Anypoint Exchange Anypoint Exchange is a curated catalog of reusable assets. APIs, API groups, API spec fragments, custom assets, examples, GraphQL APIs, and integration assets such as connectors, policies, RPA assets, rulesets, and templates are some of the types that are supported in Exchange. You can catalog (publish), share, discover, learn about, and reuse assets within your organization of developers to facilitate collaboration, boost productivity, and promote standards. You can create API developer portals, view and test APIs, simulate sending data to APIs by using the mocking service, create assets, and use API Notebooks to describe and test API functions. Key Features Accelerate your project delivery: Don’t start development from scratch. Accelerate delivery by leveraging 100+ OOTB APIs, examples, best practices, accelerators and much more within Anypoint Exchange and our broader ecosystem. Build upon previous projects by reusing your own assets auto-populated into Anypoint Exchange. Build a consolidated source of truth for your APIs: Catalog APIs built by any team or anywhere in the enterprise — Anypoint Platform or otherwise — into Anypoint Exchange using developer-friendly tools. Improve collaboration across development teams: Before implementing an API, share it with your API consumers for validation using a mocking service. Drive developers to discover assets, microservices, or governance policies and test new functionalities with ease. Manage your assets more effectively: Automatically generate documentation and map dependencies across assets in Exchange. Create custom roles, permissions, and team structures aligned to your organization to provide varying degrees of access to view, contribute, or administer assets. Key Differences Core Focus AEH is focused more on the API discovery, testing, and documentation aspect, aiming to enhance the technical interaction with APIs. ACM focuses on building and managing a community around the APIs, emphasizing engagement and support. Anypoint Exchange is aimed at asset sharing and discovery across a broad spectrum of integration assets. Platform Base AEH is a MuleSoft-specific solution tailored to API Management. ACM leverages Salesforce Experience Cloud, making it ideal for organizations already invested in the Salesforce ecosystem. Anypoint Exchange is one of the core components of MuleSoft’s Anypoint Platform. Primary Users Both AEH and ACM target API developers and consumers with a focus on improving their direct interactions and engagement. Anypoint Exchange serves a wider audience including developers, integration specialists, and business analysts looking for any type of reusable asset — APIs, connectors, templates, examples, and other types of artifacts. User Engagement AEH is primarily a hub for API documentation, discovery, and collaboration tools. It helps in creating basic community features. ACM is about creating a dynamic community for APIs which includes forums, tickets, and personalized interactions. It helps in creating advanced community features. 
Anypoint Exchange, being a key component of Anypoint Platform, serves as a central hub where organizations can discover, share, and manage APIs, connectors, templates, and other integration assets. Integration Needs AEH is not directly connected with Salesforce, but it is connected through Salesforce APIs to create an API portal within the Anypoint Platform, which is a one-time process. AEH focuses more on integrating with API management processes to provide a streamlined developer experience. ACM integrates deeply with Salesforce, benefiting from its CRM and analytics capabilities to enhance user management and engagement analytics. ACM uses a data bridge to communicate with the Anypoint Platform. Within the Anypoint Platform, there is a connected app that allows ACM to access data such as API specs, client applications, subscriptions, etc. Anypoint Exchange does not need integration with any external systems to create developer portals. Exchange itself provides a repository for code and APIs and helps your organization access them easily, model them, and reuse them. Knowledge of Salesforce (For Developer Portals) With AEH, you don’t need to learn the nitty-gritty of Salesforce. AEH reduces the burden of Salesforce knowledge to get started and see value immediately by providing a simple portal creation experience. In addition, one can manage portal users and API Products through an Anypoint account directly. ACM, which is built on Salesforce Experience Cloud, generally requires a higher level of Salesforce knowledge. Since ACM leverages many features of Salesforce, understanding Salesforce’s platform capabilities, particularly those related to the Experience Cloud and community management, is crucial. With Anypoint Exchange, no Salesforce knowledge is required. Launch Time With ACM, launching an API Portal with small to medium complexity requires a couple of days. One of the reasons for this is that you need to juggle between two platforms — Anypoint Platform and Salesforce Experience Cloud. With AEH, launch time is reduced considerably because there is a single platform with easier management. Launching an API Portal with AEH is simply a 4-step process. With Anypoint Exchange, once your API is ready with the documentation, publishing it to Exchange is relatively quick. Licensing AEH requires two types of licenses: Salesforce: This license is required to use the Salesforce Experience Builder. External identity: This license is based on the number of member requests to access the portal. The current package has a minimum of 100 API access requests which require a minimum of 2000 External Identity licenses. This is required for the portal consumers to access the API portal. A standard ACM license consists of a Salesforce Customer Community Plus Login License Unlimited Edition. It is based on the number of member logins per month. Licensing for Anypoint Exchange, as part of Anypoint Platform, is generally structured around the broader licensing models that MuleSoft employs for its entire suite of products. Commonalities API Management Focus All three platforms are fundamentally designed to enhance API management. They provide tools and features that help organizations create, manage, and share APIs efficiently. Enhanced API Discovery All three platforms support enhanced API discovery. Documentation and Interactive Testing Documentation and interactive testing are integral features across all three platforms. 
AEH and Anypoint Exchange provide capabilities for API providers to publish comprehensive documentation along with interactive examples where consumers can test APIs directly in the browser. ACM integrates these features into its community portals, enhancing the user experience and facilitating easier adoption and feedback from API consumers. Integration With Anypoint Platform All three platforms are tightly integrated with Anypoint Platform, ensuring a seamless experience for users from API design through management, testing, and consumption. AEH and Anypoint Exchange are directly part of the Anypoint Platform ecosystem. ACM, built on Salesforce, leverages Anypoint Platform for backend API management functionalities. Collaboration and User Engagement Promoting collaboration among API developers and consumers is a common theme in all three platforms. Analytics and Monitoring Monitoring API usage and performance is supported across all three platforms, allowing organizations to gather insights into API performance, consumption patterns, and user engagement. Conclusion This is just a small attempt to clear out the ambiguities around three platforms — Anypoint API Experience Hub, Anypoint API Community Manager, and Anypoint Exchange, offered by MuleSoft for Management, Discovery, and Utilization of APIs. Hope you all find this article helpful/useful in whatever way. Thank you for reading!! Please do not forget to like, share, and feel free to share your thoughts/comments in the comments section.
My ideas for blog posts inevitably start to dry up after over two years at Apache APISIX. Hence, I did some triage on the APISIX repo. I stumbled upon this one question: We have a requirement to use a plugin, where we need to route the traffic on percentage basis. I'll give an example for better understanding. We have an URL where ca is country (canada) and fr is french language. Now the traffic needs to routed 10% to and the remaining 90% to . And whenever we're routing the traffic to we need to set a cookie. So for next call, if the cookie is there, it should directly go to else it should go via a 10:90 traffic split. What is the best possible way to achieve this ?? - Help request: Setting cookie based on a condition The use case is interesting, and I decided to tackle it. I'll rephrase the requirements first: If no cookie is set, randomly forward the request to one of the upstreams. If a cookie has been set, forward the request to the correct upstream. For easier testing: I change the odds from 10:90 to 50:50. I use the root instead of a host plus a path. Finally, I assume that the upstream sets the cookie. Newcomers to Apache APISIX understand the matching algorithm very quickly: if a request matches a route's host, method, and path, forward it to the upstream set. YAML routes: - id: 1 uri: /hello host: foo.com methods: - GET - PUT - POST upstream_id: 1 Shell curl --resolve foo.com:127.0.0.1 http://foo.com/hello #1 curl -X POST --resolve foo.com:127.0.0.1 http://foo.com/hello #2 curl -X PUT --resolve foo.com:127.0.0.1 http://foo.com/hello #2 curl --resolve bar.com:127.0.0.1 http://bar.com/hello #3 curl --resolve foo.com:127.0.0.1 http://foo.com/hello/john #4 Matches host, method as curl defaults to GET, and path Matches host, method, and path Doesn't match host Doesn't match path as the configured path doesn't hold a * character path is the only required parameter; neither host nor methods are. host defaults to any host and methods to any method. Beyond these three main widespread matching parameters, others are available, e.g., remote_addrs or vars. Let's focus on the latter. The documentation on the Route API is pretty concise: Matches based on the specified variables consistent with variables in Nginx. Takes the form [[var, operator, val], [var, operator, val], ...]]. Note that this is case sensitive when matching a cookie name. See lua-resty-expr for more details. - Route API One can only understand vars in the Router Radix Tree documentation. The Router Radix Tree powers the Apache APISIX's matching engine. Nginx provides a variety of built-in variables that can be used to filter routes based on certain criteria. Here is an example of how to filter routes by Nginx built-in variables: - How to filter route by Nginx built-in variable $ curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d ' { "uri": "/index.html", "vars": [ ["http_host", "==", "iresty.com"], ["cookie_device_id", "==", "a66f0cdc4ba2df8c096f74c9110163a9"], ["arg_name", "==", "json"], ["arg_age", ">", "18"], ["arg_address", "~~", "China.*"] ], "upstream": { "type": "roundrobin", "nodes": { "127.0.0.1:1980": 1 } } }' This route will require the request header host equal iresty.com, request cookie key _device_id equal a66f0cdc4ba2df8c096f74c9110163a9, etc. You can learn more at radixtree-new. Among all Nginx variables, we can find $cookie_xxx. 
Hence, we can come up with the following configuration: YAML routes: - name: Check for French cookie uri: / vars: [[ "cookie_site", "==", "fr" ]] #1 upstream_id: 1 - name: Check for English cookie uri: / vars: [[ "cookie_site", "==", "en" ]] #2 upstream_id: 2 Match if a cookie named site has value fr Match if a cookie named site has value en We need to configure the final route, the one used when no cookie is set. We use the traffic-split plugin to assign a route randomly. The traffic-split Plugin can be used to dynamically direct portions of traffic to various Upstream services. This is done by configuring match, which are custom rules for splitting traffic, and weighted_upstreams which is a set of Upstreams to direct traffic to. When a request is matched based on the match attribute configuration, it will be directed to the Upstreams based on their configured weights. You can also omit using the match attribute and direct all traffic based on weighted_upstreams. - traffic-split The third route is the following: YAML - name: Let the fate decide uri: / upstream_id: 1 #1 plugins: traffic-split: rules: - weighted_upstreams: - weight: 50 #1 - upstream_id: 2 #2 weight: 50 #2 The weight of the upstream 1 is 50. The upstream 2 weight is also 50 out of the total weight sum. It's a half-half chance of APISIX forwarding it to either upstream. At this point, we need to solve one remaining issue: the order in which APISIX will evaluate the routes. When routes' paths are disjoint, the order plays no role; when they are overlapping, it does. For example, if APISIX evaluates the last route first, it will forward the request to a random upstream, even though a cookie might have been set. We need to force the evaluation of the first two routes first. For that, APISIX offers the priority parameter; its value is 0 by default. It evaluates routes matching by order of decreasing priority. We need to override it to evaluate the random route last. YAML - name: Let the fate decide uri: / upstream_id: 1 priority: -1 #... You can try the setup in a browser or with curl. With curl, we can set the "first" request like this: curl -v localhost:9080 If the upstream sets the cookie correctly, you should see the following line among the different response headers: Set-Cookie: site=fr Since curl doesn't store cookies by default, the value should change across several calls. If we set the cookie, the value stays constant: curl -v --cookie 'site=en' localhost:9080 #1 The cookie name is case-sensitive; beware The browser keeps the cookie, so it's even simpler. Just go to and refresh several times: the content is the same as well. The content will change if you change the cookie to another possible value and request again. The complete source code for this post can be found on GitHub. To Go Further router-radixtree Route Admin API
Cucumber is a tool that supports Behavior-Driven Development (BDD). In this blog, you will learn how to pass arguments to step definitions when using Cucumber and Spring Boot. Enjoy! Introduction In a previous post, Cucumber was introduced as a tool that supports Behavior-Driven Development (BDD). Some of the features were explained, but not how to pass arguments to step definitions. In this blog, you will learn how you can do so. The application under test is a Spring Boot application. You will also learn how you can integrate the Cucumber tests with Spring. The sources used in this blog are available on GitHub. Do check out the following references for extra information: Cucumber Expressions Cucumber Configuration: Type Registry Prerequisites The prerequisites for this blog are: Basic Java knowledge - Java 21 is used Basic Maven knowledge Basic Spring Boot knowledge Basic comprehension of BDD Basic knowledge of Cucumber (see the previous blog for an introduction) Application Under Test The application under test is a basic Spring Boot application. It consists of a Controller and a Service. The Controller serves a customer endpoint that implements an OpenAPI specification. The Service is a basic implementation, storing customers in a HashMap. A customer only has a first name and a last name, just to keep things simple. The API offers the following functionality: Creating a customer Retrieving the customer based on the customer ID Retrieving all customers Deleting all customers Spring Integration In order to enable the Spring integration, you add the following dependency to the pom: XML <dependency> <groupId>io.cucumber</groupId> <artifactId>cucumber-spring</artifactId> <version>7.14.0</version> <scope>test</scope> </dependency> The Spring Boot application must be in a running state; therefore, you need to run the Cucumber tests with the @SpringBootTest annotation. This will start the application, and you will be able to run Cucumber tests against it. In order to do so, you create a class CucumberSpringConfiguration. Add the @CucumberContextConfiguration annotation so that the Spring integration is enabled. The Spring Boot application starts on a random port; therefore, you store the port in a system property so that you will be able to use it when you need to call the API. Java @CucumberContextConfiguration @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT) public class CucumberSpringConfiguration { @LocalServerPort private int port; @PostConstruct public void setup() { System.setProperty("port", String.valueOf(port)); } } The Cucumber step definitions will extend this class. Tests can be run via Maven: Shell $ mvn clean verify Test: Add Customer Using Arguments The Add Customer test will add a customer to the customer list and will verify that the customer is added to the list. The feature file is the following. Do note that the first name (John) and last name (Doe) are within quotes. This way, Cucumber is able to recognize a string argument. Plain Text Scenario: Add customer Given an empty customer list When customer 'John' 'Doe' is added Then the customer 'John' 'Doe' is added to the customer list The corresponding step definitions are the following. When: The first name and last name placeholders are represented with {string} and they are mapped as arguments to the method. This way, the arguments are accessible to the step definition. Then: In a similar way, the arguments are passed to the step definition. 
Java public class StepDefinitions extends CucumberSpringConfiguration { final int port = Integer.parseInt(System.getProperty("port")); final RestClient restClient = RestClient.create(); @Given("an empty customer list") public void an_empty_customer_list() { ResponseEntity<Void> response = restClient.delete() .uri("http://localhost:"+ port + "/customer") .retrieve() .toBodilessEntity(); } @When("customer {string} {string} is added") public void customer_firstname_lastname_is_added(String firstName, String lastName) { Customer customer = new Customer(firstName, lastName); ResponseEntity<Void> response = restClient.post() .uri("http://localhost:"+ port + "/customer") .contentType(APPLICATION_JSON) .body(customer) .retrieve() .toBodilessEntity(); assertThat(response.getStatusCode().is2xxSuccessful()).isTrue(); } @Then("the customer {string} {string} is added to the customer list") public void the_customer_first_name_last_name_is_added_to_the_customer_list(String firstName, String lastName) { List<Customer> customers = restClient.get() .uri("http://localhost:"+ port + "/customer") .retrieve() .body(new ParameterizedTypeReference<>() {}); assertThat(customers).contains(new Customer(firstName, lastName)); } ... } Note that the arguments are used to create a Customer object which is defined in the step definitions class. This class contains the fields, getters, setters, equals, and hashCode implementations. Java public static class Customer { private String firstName; private String lastName; ... } Test: Add Customers Using Arguments When you want to add several customers, you can chain the same step definition by means of an And using different arguments. The feature file is the following, the step definitions remain the same. Plain Text Scenario: Add customers Given an empty customer list When customer 'John' 'Doe' is added And customer 'David' 'Beckham' is added Then the customer 'John' 'Doe' is added to the customer list And the customer 'David' 'Beckham' is added to the customer list Test: Add Customer Using DataTable The previous tests all started with an empty customer list. The next test will add some data to the customer list as a starting point. You can, of course, use the step definition customer firstName lastName is added and invoke it multiple times, but you can also use a DataTable. The DataTable must be the last argument in a step definition. The feature file is the following and the DataTable is used in the Given-clause. Plain Text Scenario: Add customer to existing customers Given the following customers: | John | Doe | | David | Beckham | When customer 'Bruce' 'Springsteen' is added Then the customer 'Bruce' 'Springsteen' is added to the customer list In the implementation of the step definition, you now see that the arguments are passed as a DataTable. It is a table containing strings, so you need to parse the table yourself. Java @Given("the following customers:") public void the_following_customers(io.cucumber.datatable.DataTable dataTable) { for (List<String> customer : dataTable.asLists()) { customer_firstname_lastname_is_added(customer.get(0), customer.get(1)); } } Test: Add Customer Using Parameter Type In the previous test, you needed to parse the DataTable yourself. Wouldn’t it be great if the DataTable could be mapped immediately to a Customer object? This is possible if you define a parameter type for it. You create a parameter type customerEntry and annotate it with @DataTableType. You use the string arguments of a DataTable to create a Customer object. 
You do so in a class ParameterTypes, which is considered a best practice. Java public class ParameterTypes { @DataTableType public StepDefinitions.Customer customerEntry(Map<String, String> entry) { return new StepDefinitions.Customer( entry.get("firstName"), entry.get("lastName")); } } The feature file is identical to the previous one; only the step text has changed in order to keep the step definition unique. Plain Text Scenario: Add customer to existing customers with parameter type Given the following customers with parameter type: | John | Doe | | David | Beckham | When customer 'Bruce' 'Springsteen' is added Then the customer 'Bruce' 'Springsteen' is added to the customer list In the implementation of the step definition, you notice that the argument is not a DataTable anymore, but a list of Customer. Java @Given("the following customers with parameter type:") public void the_following_customers_with_parameter_type(List<Customer> customers) { for (Customer customer : customers) { customer_firstname_lastname_is_added(customer.getFirstName(), customer.getLastName()); } } Conclusion In this blog, you learned how to integrate Cucumber with a Spring Boot application and several ways to pass arguments to your step definitions - a powerful feature of Cucumber!
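Beyond DataTables, the Cucumber Expressions referenced earlier also let you define a custom parameter type for inline arguments, so a step can receive a Customer directly instead of two separate strings. The sketch below is not from the original sources: the regular expression and the step wording (kept deliberately different from the existing "customer {string} {string} is added" step to avoid ambiguous matches) are assumptions for illustration.

Java
// In the existing ParameterTypes class, alongside the @DataTableType method shown above.
// @ParameterType comes from io.cucumber.java.ParameterType; the method name ("customer")
// becomes the name used inside Cucumber expressions, i.e. {customer}.
// The two capture groups are passed to the method as separate strings.
@ParameterType("'([^']*)' '([^']*)'")
public StepDefinitions.Customer customer(String firstName, String lastName) {
    return new StepDefinitions.Customer(firstName, lastName);
}

A step definition can then accept the converted object directly, for example for a feature step such as "When the customer record 'Bruce' 'Springsteen' is registered":

Java
// In StepDefinitions: the wording differs from the existing {string} {string} step,
// so the two expressions can never both match the same step text.
@When("the customer record {customer} is registered")
public void the_customer_record_is_registered(Customer customer) {
    // The Customer arrives fully constructed; no manual string handling is needed.
    customer_firstname_lastname_is_added(customer.getFirstName(), customer.getLastName());
}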
Through my years of building services, the RESTful API has been my primary go-to. However, even though REST has its merits, that doesn’t mean it’s the best approach for every use case. Over the years, I’ve learned that, occasionally, there might be better alternatives for certain scenarios. Sticking with REST just because I’m passionate about it — when it’s not the right fit — only results in tech debt and a strained relationship with the product owner. One of the biggest pain points with the RESTful approach is the need to make multiple requests to retrieve all the necessary information for a business decision. As an example, let’s assume I want a 360-view of a customer. I would need to make the following requests: GET /customers/{some_token} provides the base customer information GET /addresses/{some_token} supplies a required address GET /contacts/{some_token} returns the contact information GET /credit/{some_token} returns key financial information While I understand the underlying goal of REST is to keep responses laser-focused for each resource, this scenario makes for more work on the consumer side. Just to populate a user interface that helps an organization make decisions related to future business with the customer, the consumer must make multiple calls. In this article, I’ll show why GraphQL is the preferred approach over a RESTful API here, demonstrating how to deploy Apollo Server (and Apollo Explorer) to get up and running quickly with GraphQL. I plan to build my solution with Node.js and deploy it to Heroku. When To Use GraphQL Over REST? There are several common use cases when GraphQL is a better approach than REST: When you need flexibility in how you retrieve data: You can fetch complex data from various resources but all in a single request. (I will dive down this path in this article.) When the frontend team needs to evolve the UI frequently: Rapidly changing data requirements won’t require the backend to adjust endpoints and cause blockers. When you want to minimize over-fetching and under-fetching: Sometimes REST requires you to hit multiple endpoints to gather all the data you need (under-fetching), or hitting a single endpoint returns way more data than you actually need (over-fetching). When you’re working with complex systems and microservices: Sometimes multiple sources just need to hit a single API layer for their data. GraphQL can provide that flexibility through a single API call. When you need real-time data pushed to you: GraphQL features subscriptions, which provide real-time updates. This is useful in the case of chat apps or live data feeds. (I will cover this benefit in more detail in a follow-up article.) What Is Apollo Server? Since my skills with GraphQL aren’t polished, I decided to go with Apollo Server for this article. Apollo Server is a GraphQL server that works with any GraphQL schema. The goal is to simplify the process of building a GraphQL API. The underlying design integrates well with frameworks such as Express or Koa. I will explore the ability to leverage subscriptions (via the graphql-ws library) for real-time data in my next article. Where Apollo Server really shines is the Apollo Explorer, a built-in web interface that developers can use to explore and test their GraphQL APIs. Apollo Explorer will be a perfect fit for me, as it allows for the easy construction of queries and the ability to view the API schema in a graphical format.
My Customer 360 Use Case For this example, let’s assume we need the following schema to provide a 360-view of the customer: TypeScript type Customer { token: String name: String sic_code: String } type Address { token: String customer_token: String address_line1: String address_line2: String city: String state: String postal_code: String } type Contact { token: String customer_token: String first_name: String last_name: String email: String phone: String } type Credit { token: String customer_token: String credit_limit: Float balance: Float credit_score: Int } I plan to focus on the following GraphQL queries: TypeScript type Query { addresses: [Address] address(customer_token: String): Address contacts: [Contact] contact(customer_token: String): Contact customers: [Customer] customer(token: String): Customer credits: [Credit] credit(customer_token: String): Credit } Consumers will provide the token for the Customer they wish to view. We expect to also retrieve the appropriate Address, Contact, and Credit objects. The goal is to retrieve all of this information with a single API call rather than four different API calls. Getting Started With Apollo Server I started by creating a new folder called graphql-server-customer on my local workstation. Then, using the Get Started section of the Apollo Server documentation, I followed steps one and two using a TypeScript approach. Next, I defined my schema and also included some static data for testing. Ordinarily, we would connect to a database, but static data will work fine for this demo. Below is my updated index.ts file: TypeScript import { ApolloServer } from '@apollo/server'; import { startStandaloneServer } from '@apollo/server/standalone'; const typeDefs = `#graphql type Customer { token: String name: String sic_code: String } type Address { token: String customer_token: String address_line1: String address_line2: String city: String state: String postal_code: String } type Contact { token: String customer_token: String first_name: String last_name: String email: String phone: String } type Credit { token: String customer_token: String credit_limit: Float balance: Float credit_score: Int } type Query { addresses: [Address] address(customer_token: String): Address contacts: [Contact] contact(customer_token: String): Contact customers: [Customer] customer(token: String): Customer credits: [Credit] credit(customer_token: String): Credit } `; const resolvers = { Query: { addresses: () => addresses, address: (parent, args, context) => { const customer_token = args.customer_token; return addresses.find(address => address.customer_token === customer_token); }, contacts: () => contacts, contact: (parent, args, context) => { const customer_token = args.customer_token; return contacts.find(contact => contact.customer_token === customer_token); }, customers: () => customers, customer: (parent, args, context) => { const token = args.token; return customers.find(customer => customer.token === token); }, credits: () => credits, credit: (parent, args, context) => { const customer_token = args.customer_token; return credits.find(credit => credit.customer_token === customer_token); } }, }; const server = new ApolloServer({ typeDefs, resolvers, }); const { url } = await startStandaloneServer(server, { listen: { port: 4000 }, }); console.log(`Apollo Server ready at: ${url}`); const customers = [ { token: 'customer-token-1', name: 'Acme Inc.', sic_code: '1234' }, { token: 'customer-token-2', name: 'Widget Co.', sic_code: '5678' } ]; const addresses = [ {
token: 'address-token-1', customer_token: 'customer-token-1', address_line1: '123 Main St.', address_line2: '', city: 'Anytown', state: 'CA', postal_code: '12345' }, { token: 'address-token-22', customer_token: 'customer-token-2', address_line1: '456 Elm St.', address_line2: '', city: 'Othertown', state: 'NY', postal_code: '67890' } ]; const contacts = [ { token: 'contact-token-1', customer_token: 'customer-token-1', first_name: 'John', last_name: 'Doe', email: 'jdoe@example.com', phone: '123-456-7890' } ]; const credits = [ { token: 'credit-token-1', customer_token: 'customer-token-1', credit_limit: 10000.00, balance: 2500.00, credit_score: 750 } ]; With everything configured as expected, we run the following command to start the server: Shell $ npm start With the Apollo server running on port 4000, I used the http://localhost:4000/ URL to access Apollo Explorer. Then I set up the following example query: TypeScript query ExampleQuery { addresses { token } contacts { token } customers { token } } This is how it looks in Apollo Explorer: Pushing the Example Query button, I validated that the response payload aligned with the static data I provided in the index.ts: JSON { "data": { "addresses": [ { "token": "address-token-1" }, { "token": "address-token-22" } ], "contacts": [ { "token": "contact-token-1" } ], "customers": [ { "token": "customer-token-1" }, { "token": "customer-token-2" } ] } } Before going any further in addressing my Customer 360 use case, I wanted to run this service in the cloud. Deploying Apollo Server to Heroku Since this article is all about doing something new, I wanted to see how hard it would be to deploy my Apollo server to Heroku. I knew I had to address the port number differences between running locally and running somewhere in the cloud. I updated my code for starting the server as shown below: TypeScript const { url } = await startStandaloneServer(server, { listen: { port: Number.parseInt(process.env.PORT) || 4000 }, }); With this update, we’ll use port 4000 unless there is a PORT value specified in an environment variable. Using Gitlab, I created a new project for these files and logged into my Heroku account using the Heroku command-line interface (CLI): Shell $ heroku login You can create a new app in Heroku with either their CLI or the Heroku dashboard web UI. For this article, we’ll use the CLI: Shell $ heroku create jvc-graphql-server-customer The CLI command returned the following response: Shell Creating jvc-graphql-server-customer... done https://jvc-graphql-server-customer-b62b17a2c949.herokuapp.com/ | https://git.heroku.com/jvc-graphql-server-customer.git The command also added the repository used by Heroku as a remote automatically: Shell $ git remote heroku origin By default, Apollo Server disables Apollo Explorer in production environments. For my demo, I want to leave it running on Heroku. To do this, I need to set the NODE_ENV environment variable to development. I can set that with the following CLI command: Shell $ heroku config:set NODE_ENV=development The CLI command returned the following response: Shell Setting NODE_ENV and restarting jvc-graphql-server-customer... done, v3 NODE_ENV: development Now we’re in a position to deploy our code to Heroku: Shell $ git commit --allow-empty -m 'Deploy to Heroku' $ git push heroku A quick view of the Heroku Dashboard shows my Apollo Server running without any issues: If you’re new to Heroku, this guide will show you how to create a new account and install the Heroku CLI. 
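As an aside, setting NODE_ENV=development is not the only way to keep the explorer available on Heroku. Apollo Server 4 also lets you opt in explicitly with a landing page plugin and the introspection flag. The sketch below (not used in this article) shows that alternative, reusing the typeDefs and resolvers from index.ts:

TypeScript
import { ApolloServer } from '@apollo/server';
import { ApolloServerPluginLandingPageLocalDefault } from '@apollo/server/plugin/landingPage/default';

// Keep introspection and the embedded Explorer enabled regardless of NODE_ENV.
const server = new ApolloServer({
  typeDefs,
  resolvers,
  introspection: true,
  plugins: [ApolloServerPluginLandingPageLocalDefault({ embed: true })],
});

Either approach works for a demo; for a real production API you would more likely leave the default behavior in place and disable the explorer.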
Acceptance Criteria Met: My Customer 360 Example With GraphQL, I can meet the acceptance criteria for my Customer 360 use case with the following query: TypeScript query CustomerData($token: String) { customer(token: $token) { name sic_code token }, address(customer_token: $token) { token customer_token address_line1 address_line2 city state postal_code }, contact(customer_token: $token) { token, customer_token, first_name, last_name, email, phone }, credit(customer_token: $token) { token, customer_token, credit_limit, balance, credit_score } } All I need to do is pass in a single Customer token variable with a value of customer-token-1: JSON { "token": "customer-token-1" } We can retrieve all of the data using a single GraphQL API call: JSON { "data": { "customer": { "name": "Acme Inc.", "sic_code": "1234", "token": "customer-token-1" }, "address": { "token": "address-token-1", "customer_token": "customer-token-1", "address_line1": "123 Main St.", "address_line2": "", "city": "Anytown", "state": "CA", "postal_code": "12345" }, "contact": { "token": "contact-token-1", "customer_token": "customer-token-1", "first_name": "John", "last_name": "Doe", "email": "jdoe@example.com", "phone": "123-456-7890" }, "credit": { "token": "credit-token-1", "customer_token": "customer-token-1", "credit_limit": 10000, "balance": 2500, "credit_score": 750 } } } Below is a screenshot from Apollo Explorer running from my Heroku app: Conclusion I recall earlier in my career when Java and C# were competing against each other for developer adoption. Advocates on each side of the debate were ready to prove that their chosen tech was the best choice … even when it wasn’t. In this example, we could have met my Customer 360 use case in multiple ways. Using a proven RESTful API would have worked, but it would have required multiple API calls to retrieve all of the necessary data. Using Apollo Server and GraphQL allowed me to meet my goals with a single API call. I also love how easy it is to deploy my GraphQL server to Heroku with just a few commands in my terminal. This allows me to focus on implementation—offloading the burdens of infrastructure and running my code to a trusted third-party provider. Most importantly, this falls right in line with my personal mission statement: “Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else.” – J. Vester If you are interested in the source code for this article, it is available on GitLab. But wait… there’s more! In my follow-up post, we will build out our GraphQL server further, to implement authentication and real-time data retrieval with subscriptions. Have a really great day!
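P.S. If you want to exercise the deployed endpoint outside of Apollo Explorer, any HTTP client that can POST JSON will do. Here is a minimal sketch using Node 18+ and its built-in fetch; the URL is the Heroku app created above, and the trimmed-down query is an assumption - in practice you would send the full CustomerData query shown earlier.

TypeScript
const endpoint = 'https://jvc-graphql-server-customer-b62b17a2c949.herokuapp.com/';

const query = `
  query CustomerData($token: String) {
    customer(token: $token) { name sic_code token }
    credit(customer_token: $token) { credit_limit balance credit_score }
  }`;

async function fetchCustomer360(token: string) {
  // GraphQL over HTTP: a single POST carrying the query and its variables.
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables: { token } }),
  });
  const { data, errors } = await response.json();
  if (errors) {
    throw new Error(`GraphQL errors: ${JSON.stringify(errors)}`);
  }
  return data;
}

fetchCustomer360('customer-token-1').then(console.log).catch(console.error);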
Is it possible to build a time-tracking app in just a few hours? It is, and in this article, I’ll show you how! I’m a senior backend Java developer with 8 years of experience in building web applications, and I will show you how satisfying and revolutionary it can be to save a lot of time when building my next one. The approach I use is as follows: I want to create a time-tracking application (I called it Timelog) that integrates with the ClickUp API. It offers a simple capability that will be very useful here: creating time entries remotely. In order to save time, I will use some out-of-the-box functionalities that the Openkoda platform offers. These features are designed with developers in mind. Using them, I can skip building standard features that are used in every web application (over and over again). Instead, I can focus on the core business logic. I will use the following pre-built features for my application needs: Login/password authentication User and organization management Different user roles and privileges Email sender Logs overview Server-side code editor Web endpoints creator CRUDs generator Let’s get started! Timelog Application Overview Our sample internal application is a small but complete system that can then be easily extended both model-wise and with additional business logic or custom views. The main focus of the application is to: Store the data required to communicate with the ClickUp API. Assign users to their tickets. Post new time entries to the external API. To speed up the process of building the application, we relied on some of the out-of-the-box functionalities mentioned above. At this stage, we used the following ones: Data model builder (Form) - Allows us to define data structures without the need to recompile the application, with the ability to adjust the data schema on the fly Ready-to-use management functionalities - With this one, we can forget about developing things like authentication, security, and standard dashboard view. Server-side code editor - Used to develop a dedicated service responsible for ClickUp API integration, it is coded in JavaScript all within the Openkoda UI. WebEndpoint builder - Allows us to create a custom form handler that uses a server-side code service to post time tracking entry data to the ClickUp servers instead of storing it in our internal database Step 1: Setting Up the Architecture To implement the functionality described above and to store the required data, we designed a simple data model, consisting of the following five entities. ClickUpConfig, ClickUpUser, Ticket, and Assignment are designed to store the keys and IDs required for connections and messages sent to the ClickUp API. The last one, TimeEntry, is intended to take advantage of a ready-to-use HTML form (Thymeleaf fragment), saving a lot of time on its development. The following shows the detailed structure of a prepared data model for the Timelog ClickUp integration.
ClickUpConfig apiKey - ClickUp API key teamId - ID of space in ClickUp to create time entry in ClickUpUser userId - Internal ID of a User clickUpUserId - ID of a user assigned to a workspace in ClickUp Ticket name - Internal name of the ticket clickUpTaskId - ID of a ticket in ClickUp to create time entries Assignment userId - Internal ID of a User ticketId - Internal ID of a Ticket TimeEntry userId - Internal ID of a User ticketId - Internal ID of a ticket date - Date of a time entry durationHours - Time entry duration provided in hours durationMinutes - Time entry duration provided in minutes description - Short description for created time entry We want to end up with five data tiles on the dashboard: Step 2: Integrating With ClickUp API We integrated our application with the ClickUp API specifically using its endpoint to create time entries in ClickUp. To connect the Timelog app with our ClickUp workspace, it is required to provide the API Key. This can be done using either a personal API token or a token generated by creating an App in the ClickUp dashboard. For information on how to retrieve one of these, see the official ClickUp documentation. In order for our application to be able to create time entries in our ClickUp workspace, we need to provide some ClickUp IDs: teamId: This is the first ID value in the URL after accessing your workspace. userId: To check the user’s ClickUp ID (Member ID), go to Workspace -> Manage Users. On the Users list, select the user’s Settings and then Copy Member ID. taskId: Task ID is accessible in three places on the dashboard: URL, task modal, and tasks list view. See the ClickUp Help Center for detailed instructions. You can recognize the task ID by its # prefix - we use the ID without the prefix. Step 3: Data Model Magic With Openkoda Openkoda uses the Byte Buddy library to dynamically build entity and repository classes for dynamically registered entities during the runtime of our Spring Boot application. Here is a short snippet of entity class generation in Openkoda (the whole service class is available on their GitHub). Java dynamicType = new ByteBuddy() .with(SKIP_DEFAULTS) .subclass(OpenkodaEntity.class) .name(PACKAGE + name) .annotateType(entity) .annotateType(tableAnnotation) .defineConstructor(PUBLIC) .intercept(MethodCall .invoke(OpenkodaEntity.class.getDeclaredConstructor(Long.class)) .with((Object) null)); Openkoda provides a custom form builder syntax that defines the structure of an entity. This structure is then used to generate both entity and repository classes, as well as HTML representations of CRUD views such as a paginated table with all records, a settings form, and a simple read-only view. All of the five entities from the data model described earlier have been registered in the same way, only by using the form builder syntax. The form builder snippet for the Ticket entity is presented below. JavaScript a => a .text("name") .text("clickUpTaskId") The definition above results in having the entity named Ticket with a set of default fields for OpenkodaEntity and two custom ones named "name" and "clickUpTaskId".
The database table structure for the dynamically generated Ticket entity is as follows: Markdown
Table "public.dynamic_ticket"
      Column      |           Type           | Collation | Nullable |        Default
------------------+--------------------------+-----------+----------+-----------------------
 id               | bigint                   |           | not null |
 created_by       | character varying(255)   |           |          |
 created_by_id    | bigint                   |           |          |
 created_on       | timestamp with time zone |           |          | CURRENT_TIMESTAMP
 index_string     | character varying(16300) |           |          | ''::character varying
 modified_by      | character varying(255)   |           |          |
 modified_by_id   | bigint                   |           |          |
 organization_id  | bigint                   |           |          |
 updated_on       | timestamp with time zone |           |          | CURRENT_TIMESTAMP
 click_up_task_id | character varying(255)   |           |          |
 name             | character varying(255)   |           |          |
The last step of a successful entity registration is to refresh the Spring context so that it recognizes the new repository beans and Hibernate acknowledges the new entities. It can be done by restarting the application from the Admin Panel (section Monitoring). Our final result is an auto-generated full CRUD for the Ticket entity. Auto-generated Ticket settings view: Auto-generated all Tickets list view: Step 4: Setting Up Server-Side Code as a Service We implemented the ClickUp API integration using Openkoda's server-side code feature, keeping the API call logic separate as a service. The exported JS functions can later be used in the logic of custom form view request handlers. We created a JavaScript service that provides the functions responsible for ClickUp API communication. Openkoda uses GraalVM to run any JS code fully on the backend server. Our ClickupAPI server-side code service has only one function (postCreateTimeEntry), which is all that is needed to meet our Timelog application requirements. JavaScript export function postCreateTimeEntry(apiKey, teamId, duration, description, date, assignee, taskId) { let url = `https://api.clickup.com/api/v2/team/${teamId}/time_entries`; let timeEntryReq = { duration: duration, description: '[Openkoda Timelog] ' + description, billable: true, start: date, assignee: assignee, tid: taskId, }; let headers = {Authorization: apiKey}; return context.services.integrations.restPost(url, timeEntryReq, headers); } To use this service later on in WebEndpoints, it is enough to use the standard JS import expression import * as clickupAPI from 'clickupAPI';. Step 5: Building Time Entry Form With Custom GET/POST Handlers Here, we prepare the essential screen for our demo application: the time entry form which posts data to the ClickUp API. All is done in the Openkoda user interface by providing simple HTML content and some JS code snippets. The View The HTML fragment is as simple as the one posted below. We used a ready-to-use form Thymeleaf fragment (see form tag) and the rest of the code is a standard structure of a Thymeleaf template. HTML <!--DEFAULT CONTENT--> <!DOCTYPE html> <html xmlns:th="http://www.thymeleaf.org" xmlns:layout="http://www.ultraq.net.nz/thymeleaf/layout" lang="en" layout:decorate="~{${defaultLayout}}"> <body> <div class="container"> <h1 layout:fragment="title"/> <div layout:fragment="content"> <form th:replace="~{generic-forms::generic-form(${TimeEntry}, 'TimeEntry', '', '', '', 'Time Entry', #{template.save}, true)}"></form> </div> </div> </body> </html> HTTP Handlers Once we have the simple HTML code for the view, we need to provide the actual form object required by the generic form fragment (${TimeEntry}).
We do it inside a GET endpoint as a first step, and after that, we set the currently logged-in user ID so there’s a default value selected when entering the time entry view. JavaScript flow .thenSet("TimeEntry", a => a.services.data.getForm("TimeEntry")) .then(a => a.model.get("TimeEntry").dto.set("userId", a.model.get("userEntityId"))) Lastly, the POST endpoint is registered to handle the actual POST request sent from the form view (HTML code presented above). It implements the scenario where a user enters the time entry form, provides the data, and then sends the data to the ClickUp server. The following POST endpoint JS code: Receives the form data. Reads the additional configurations from the internal database (like API key, team ID, or ClickUp user ID). Prepares the data to be sent. Triggers the clickupAPI service to communicate with the remote API. JavaScript import * as clickupAPI from 'clickupAPI'; flow .thenSet("clickUpConfig", a => a.services.data.getRepository("clickupConfig").search( (root, query, cb) => { let orgId = a.model.get("organizationEntityId") != null ? a.model.get("organizationEntityId") : -1; return cb.or(cb.isNull(root.get("organizationId")), cb.equal(root.get("organizationId"), orgId)); }).get(0) ) .thenSet("clickUpUser", a => a.services.data.getRepository("clickupUser").search( (root, query, cb) => { let userId = a.model.get("userEntityId") != null ? a.model.get("userEntityId") : -1; return cb.equal(root.get("userId"), userId); }) ) .thenSet("ticket", a => a.form.dto.get("ticketId") != null ? a.services.data.getRepository("ticket").findOne(a.form.dto.get("ticketId")) : null) .then(a => { let durationMs = (a.form.dto.get("durationHours") != null ? a.form.dto.get("durationHours") * 3600000 : 0) + (a.form.dto.get("durationMinutes") != null ? a.form.dto.get("durationMinutes") * 60000 : 0); return clickupAPI.postCreateTimeEntry( a.model.get("clickUpConfig").apiKey, a.model.get("clickUpConfig").teamId, durationMs, a.form.dto.get("description"), a.form.dto.get("date") != null ? (new Date(a.services.util.toString(a.form.dto.get("date")))).getTime() : Date.now(), a.model.get("clickUpUser").length ? a.model.get("clickUpUser").get(0).clickUpUserId : -1, a.model.get("ticket") != null ? a.model.get("ticket").clickUpTaskId : '') }) Step 6: Our Application Is Ready! This is it! I built a complete application that is capable of storing user data, their ticket assignments, and the properties required for the ClickUp API connection. It provides a Time Entry form that covers ticket selection, date, duration, and description inputs of a single time entry and sends the data from the form straight to the integrated API. And let's not forget all of the pre-built functionalities available in Openkoda, like authentication, user account management, logs overview, etc. As a result, the total time to create the Timelog application was only a few hours. What I have built is just a simple app with one main functionality. But there are many ways to extend it, e.g., by adding new structures to the data model, by developing more of the ClickUp API integration, or by creating more complex screens like the calendar view below. If you follow almost exactly the same scenario as I presented in this case, you will be able to build any other simple (or not) business application, saving time on repetitive and boring features and focusing on the core business requirements.
I can think of several applications that could be built in the same way, such as a legal document management system, a real estate application, or a travel agency system, to name just a few. As an experienced software engineer, I always enjoy implementing new ideas and seeing the results quickly. In this case, that is exactly what I did: I spent a minimal amount of time creating a fully functional application tailored to my needs and skipped the monotonous work. The .zip package with all code and configuration files is available on my GitHub.
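If you want to verify the ClickUp credentials and IDs before wiring up the form, you can call the same time entries endpoint that the server-side service wraps directly from a terminal. The sketch below is not part of the original tutorial; the team ID, API key, member ID, task ID, and timestamp are placeholders you would replace with your own values.

Shell
$ curl -X POST "https://api.clickup.com/api/v2/team/$TEAM_ID/time_entries" \
    -H "Authorization: $CLICKUP_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
          "duration": 3600000,
          "description": "[Openkoda Timelog] manual smoke test",
          "billable": true,
          "start": 1717400000000,
          "assignee": 12345678,
          "tid": "868abcdef"
        }'

A successful response containing the created time entry confirms that the apiKey, teamId, clickUpUserId, and clickUpTaskId values stored in the application are correct.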
Creating an API and an admin app using a framework takes too long and is far too complex. AI and automation can create such systems in minutes rather than weeks or months, in a dramatically simpler way, and the result is fully customizable with tools and approaches you already know. In this tutorial, we'll show how to create a complete system using VS Code, Copilot, and API Logic Server (open source). We'll then add business logic with rules, and use Python to add a custom endpoint and Kafka integration. Links are provided so you can execute these steps on your own. Overview As shown below, you can submit a Natural Language description of a database to Copilot. This creates a Python data model (SQLAlchemy classes). You then use the API Logic Server CLI to create an executable project from the model. Alternatively, you can create a project by identifying an existing database. The project is executable, providing an API and an admin app, enabling agile collaboration and unblocking custom app dev. Figure 1: Overview Setup To begin, install Python and VSCode. Optionally, install Copilot: it’s moderately priced and you can execute this tutorial without it. But it provides the Natural Language services shown here - it’s quite a lot of fun to explore, so you might just want to splurge and acquire it. Then, install the API Logic Server and start it: Shell python3 -m venv venv # windows: python -m venv venv source venv/bin/activate # windows: venv\Scripts\activate python -m pip install ApiLogicServer ApiLogicServer start This will launch the API Logic Server in VSCode. We've moved the Copilot chat pane to the right. Figure 2: API Logic Manager in your IDE 1. Create Your Model With Copilot The README page includes the Natural Language text to supply to Copilot; paste it, and press enter. It's shown in the diagram below in dark gray ("Use SQLAlchemy to..."). Copilot creates the SQLAlchemy model code. Paste the generated code into a new model file called sample_ai.py (step 2 in Figure 3 below): Figure 3: Creating a project with Copilot and API Logic Server 2. Create a Project With API Logic Server Create your project (step 3 in Figure 3 above) by entering the following into the bottom terminal pane (als is a synonym for ApiLogicServer): Shell als create --project-name=sample_ai --from-model=sample_ai.py --db-url=sqlite API Logic Server uses SQLAlchemy (a popular Python ORM) to create a database from the Copilot model and then creates a fully formed project by reading the database schema. The project includes your data model classes, your API, and your admin app, fully configured for execution. Here the target database is SQLite; the same scenario will work for other databases as well, but you will need to replace "sqlite" with the full URI of your database. Create Project From Existing Database Alternatively, if you don't have Copilot, you can use an existing database and create your project using the pre-installed SQLite database (1b in Figure 1): Shell als create --project-name=sample_ai --db-url=sqlite:///sample_ai.sqlite 3. Microservice Automation: Executable Project In either case, API Logic Server creates a fully formed project, ready to run, and launches it in another instance of VSCode: Figure 4: Created Project (new VSCode instance) Press F5 to start the server; launch the created admin app in your browser to explore your data and the API: The admin app, based on React Admin, provides data query/update services, with multi-table support for automatic joins and page navigations.
This can kick-start agile business user collaboration, and provide back-office data functions. A JSON API provides retrieval/update services, including support to choose columns and related data. This unblocks custom app dev, which is often compressed into the end of the project while waiting on custom API development. Compare automation to framework-based development: With a framework, you are ready to code. With automation, you are ready to run. Figure 5: Microservice Automation - created admin app and API So, we have working software: an admin app, for business user collaboration; an API, to unblock custom app dev. It was fast - it only took a few moments - and simple - it did not require months to learn a framework. 4a. Customize With Rules: Logic Automation Well, "fast and simple" is great, but it's little more than a stupid pet trick without logic and security. That's often nearly half the work of a transactional database application. API Logic Server contains a rule engine. You can declare rules in Python, using IDE code completion services. It also provides value: spreadsheet-like rules reduce logic code (half the app) by 40X. But we can have much more fun. As shown below, we can ask Copilot to create these rules for us, and paste them into a pre-created file: Figure 6: Creating Rules with Copilot These 5 lines of code look just like the requirements, so the level of abstraction is remarkably high (a sketch of rules along these lines appears at the end of this article). These 5 declarative rules would have required 200 lines of traditional procedural code. And they are executable. They listen for SQLAlchemy ORM events, and fire in response to the actual changes in the transaction. Rules (and their overhead) are pruned if their referenced data is unchanged. Rules are debuggable. Standard logging depicts each rule firing, including the state of the row. And you can use your debugger. Similar declarative rules are provided for row-level security, based on a user's roles. Authorization information can be obtained from a SQL database, or corporate stores such as LDAP or Active Directory. 4b. Customize With Python Automation is great, but let's face it: you can never automate everything. It's mandatory to have a customization capability that is standards-based — using our favorite IDE, standard languages like Python, and standard frameworks like SQLAlchemy and Flask. Let's examine 2 typical customizations: a custom endpoint, and Kafka integration. Custom API Endpoint Imagine accepting orders in a designated format from a B2B partner. The custom endpoint below performs the transformation using order_b2b_def (a declarative mapping definition, not shown), and saves it. This automatically activates the business rules above to check credit. Figure 7: Customization - Adding a Custom Endpoint Custom Logic for Kafka Integration Let's further imagine that accepted orders must be transformed and sent to shipping via a Kafka message. The screenshot below illustrates that, in addition to rules, you can provide Python code for business logic. Here you have all the power of Python libraries: Flask, SQLAlchemy, Kafka, etc. Figure 8: Customization - Kafka Integration 5. Deploy Value is not realized until the system is deployed, whether for final production, or early collaboration with stakeholders. API Logic Server creates scripts to containerize your project, and deploy to Azure with Docker Compose: Summary The screenshots above illustrate remarkable agility. This system might have taken weeks or months using conventional frameworks. But it's more than agility.
The level of abstraction here is very high, bringing a level of simplicity that empowers you to create microservices - even if you are new to Python or frameworks such as Flask and SQLAlchemy. Instead of a complex framework, it's more like an appliance — just plug into your idea or existing database, and you have an executable, customizable project. There are 4 key elements that deliver this speed and simplicity: Natural Language Processing: Even savvy SQL programmers welcome syntax help. It's a dream come true to start from Natural Language, as provided by Copilot. Microservice automation: Instead of slow and complex framework coding, just plug into your database for an instant API and Admin App. Logic automation with declarative rules: Instead of tedious code that describes how logic operates, rules express what you want to accomplish, and reduce the backend half of your application by 40x. Extensibility: Finish the remaining elements with your IDE, Python, and standard packages such as Flask and SQLAlchemy. Automation empowers more people to do more.
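For readers curious what the declarative rules described in step 4a can look like, here is a minimal sketch using the LogicBank rule syntax that API Logic Server builds on. The Customer/Order/Item model, attribute names, and module paths below are assumptions for illustration; they are not the exact rules or data model from this tutorial.

Python
# A sketch of check-credit style rules, assuming a LogicBank-based API Logic Server project.
# The module path (database.models) and attribute names are illustrative assumptions.
from logic_bank.logic_bank import Rule
from database import models

def declare_logic():
    # Customer.balance is the sum of unshipped order totals
    Rule.sum(derive=models.Customer.balance,
             as_sum_of=models.Order.amount_total,
             where=lambda row: row.date_shipped is None)

    # Order.amount_total rolls up the line item amounts
    Rule.sum(derive=models.Order.amount_total, as_sum_of=models.Item.amount)

    # Item.amount = quantity * unit_price; unit_price is copied from the product
    Rule.formula(derive=models.Item.amount,
                 as_expression=lambda row: row.quantity * row.unit_price)
    Rule.copy(derive=models.Item.unit_price, from_parent=models.Product.unit_price)

    # Reject any transaction that pushes the balance over the credit limit
    Rule.constraint(validate=models.Customer,
                    as_condition=lambda row: row.balance <= row.credit_limit,
                    error_msg="balance ({row.balance}) exceeds credit ({row.credit_limit})")

Because the rules are declarative, their execution order is derived from the dependencies between the columns they reference, which is why a handful of lines can replace a few hundred lines of hand-written event-handling code.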
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Modern API Management: Connecting Data-Driven Architectures Alongside AI, Automation, and Microservices. APIs play a pivotal role in the world of modern software development. Multiple types of APIs can be used to establish communication and data exchange between various systems. At the forefront lies the REST approach, which has dominated the industry due to its simplicity and scalability. However, as technology has evolved, the demands of developers and businesses have also changed. In recent years, alternatives such as GraphQL and asynchronous event-driven APIs have also emerged. They offer distinct advantages over traditional REST APIs. In this article, we will look into each of these API technologies and build a comparative understanding of them. REST: The Start of Resource-Oriented Communication REST architecture revolves around the concept of resources. These are entities that can be managed through standard HTTP methods such as GET, POST, PUT, and DELETE. One of the key characteristics of REST is its stateless nature, where each request from a client contains all the necessary information for the server to fulfill it. This decouples the client and server, allowing them to be scaled independently. Advantages and Disadvantages of REST REST APIs have some significant advantages: REST follows a simple and intuitive design based on standard HTTP methods. Each request in the REST approach is independent, resulting in better scalability and reliability. REST utilizes HTTP caching mechanisms to enhance performance and reduce the load on the origin server. REST is interoperable, working well with various programming languages and platforms due to its standard format. However, REST architecture also has several disadvantages: REST APIs can result in overfetching, where clients receive more data than needed, leading to inefficiency and waste of network bandwidth. Similar to the first point, REST APIs can also suffer from underfetching, where multiple requests are needed to fulfill complex data requirements. This results in increased latency. REST follows a synchronous approach that can lead to blocking and performance issues in high-load scenarios. Changes to the API's data schema can impact clients, resulting in tight coupling. Use Cases of REST APIs There are ideal use cases where REST APIs are much better suited when compared to other types of APIs, for example: Caching intensive applications – A read-heavy application, such as news websites or static content, can benefit from REST's caching mechanism. The standardized caching directives of REST make it easier to implement. Simple CRUD operations – When dealing with straightforward CRUD operations, REST APIs offer simplicity and predictability. Applications with a clear and static data model often find REST APIs to be more suitable. GraphQL: The Rise of Declarative Data Fetching With APIs GraphQL is a combination of an open-source language for querying data as well as a runtime for fulfilling those queries. The key principle behind GraphQL is to have a hierarchical structure for defining data queries, letting the clients precisely specify the data they need in a single request. Figure 1. GraphQL in the big picture In quite a few ways, GraphQL was a direct response to the issues with the traditional REST API architecture. However, it also promotes a strongly typed schema, offering developers a clear idea of what to expect. 
GraphQL supports real-time data updates through subscriptions. Over the years, a lot of work has happened on tools like GraphQL Federation to make GraphQL APIs more scalable for large enterprises with multiple domain areas. Advantages and Disadvantages of GraphQL GraphQL provides some key advantages: With GraphQL, clients can request only the specific data they need. This eliminates the overfetching and underfetching issues with REST APIs. GraphQL's strongly typed schema approach provides a clear structure and validation, speeding up development and documentation. GraphQL typically operates through a single endpoint. Clients just need to care about a single endpoint while talking to a GraphQL server even though there might be multiple sources for the data. Built-in introspection allows clients to explore the schema and discover available data and operations. There are also several disadvantages to GraphQL: Implementing GraphQL requires additional effort and expertise when compared to traditional REST APIs. Since the queries in GraphQL are flexible, caching of data can be challenging and may need custom solutions. While GraphQL reduces overfetching at the top level, nested queries can still lead to unnecessary data retrievals. Ownership of the common GraphQL layer becomes confusing, unlike the clear boundaries of a REST API. Use Cases of GraphQL There are specific scenarios where GraphQL does a better job as compared to REST APIs, for instance: Complex and nested data requirements – To fetch data with complex relationships, GraphQL helps clients precisely specify the data they need in a single query. Real-time data updates – GraphQL subscriptions help applications handle real-time data updates such as chat applications or live dashboards. With GraphQL, clients can subscribe to changes in specific data, allowing real-time updates without the need for frequent polling. Microservices architectures – In this case, data is distributed across multiple services. GraphQL provides a unified interface for clients to query data from various services. The client application doesn't have to manage multiple REST endpoints. Asynchronous APIs: A Shift to Event-Driven Architecture Over the years, the push to adopt, or migrate to, a cloud-native architecture has also given rise to event-driven architectures, the advantage being the prospect of non-blocking communication between components. With asynchronous APIs, clients don't need to wait for a response before proceeding further. They can send requests and continue their execution process. Such an approach is advantageous for scenarios that require high concurrency, scalability, and responsiveness. In event-driven systems, asynchronous APIs handle events and messages along with help from technologies like Apache Kafka and RabbitMQ, which offer a medium of communication between the message producer and the consumer. Considering a typical system using an event-driven API approach, we have producers publish events to topics, and consumers subscribe to these topics to receive and process the events asynchronously. This allows for seamless scalability and fault tolerance because both producers and consumers can evolve independently. The below diagram shows such a system: Figure 2. 
An event-driven system with Kafka and asynchronous APIs Advantages and Disadvantages of Asynchronous APIs There are some key advantages of asynchronous APIs: Asynchronous APIs are well suited for handling high concurrency and scalability requirements since multiple requests can be handled concurrently. Asynchronous APIs also enable real-time data processing by enabling timely response to events. Asynchronous APIs can also help better utilize system resources by offloading tasks to background processes. Lastly, asynchronous APIs increase the general fault tolerance of a system as one component failing doesn't disrupt the entire system. However, just like other API types, asynchronous APIs also have several disadvantages: There is increased complexity around message delivery, ordering, and error handling. Asynchronous APIs are more challenging to debug and test. Systems built using asynchronous APIs often result in eventual consistency, where data updates aren't immediately reflected across all components. Asynchronous APIs can also increase costs with regard to special systems for handling messages. Use Cases of Asynchronous APIs There are a few ideal use cases for asynchronous APIs when compared to REST and GraphQL APIs, including: Real-time data streaming – Asynchronous APIs are the best choice for real-time data streaming needs such as social media feeds, financial market updates, and IoT sensor data. These applications generate large volumes of data that need to be processed and delivered to clients in near real time. Integration with third-party systems – Asynchronous APIs are quite suitable for integrating with third-party systems that may have unpredictable response times or availability SLAs. Background tasks – Lastly, applications that require execution of background tasks — such as sending emails, notifications, or image/video processing — can benefit from the use of asynchronous APIs. Side-by-Side Comparison of REST, GraphQL, and Asynchronous APIs We've looked at all three types of API architectures. It is time to compare them side by side so that we can make better decisions about choosing one over the other. The table below shows this comparison across multiple parameters: Table 1. 
Comparing REST, GraphQL, and Async APIs

Parameter | REST APIs | GraphQL APIs | Asynchronous APIs
--------- | --------- | ------------ | -----------------
Data fetching approach | Data is fetched with predefined endpoints | Clients specify the exact data requirements in the query | Data is passed in the form of asynchronous messages
Performance and scalability | Highly suitable for scalable applications; can suffer from overfetching and underfetching problems | Scalable; nested queries can be problematic | Highly scalable; efficient for real-time data processing
Flexibility and ease of use | Limited flexibility in querying data | High flexibility for querying data | Limited flexibility in querying data and requires understanding of an event-driven approach
Developer experience and learning curve | Well established and familiar to many developers | Moderate learning curve in terms of understanding the GraphQL syntax | Steeper learning curve
Real-time capabilities | Limited real-time capabilities, relying on techniques like polling and webhooks for updates | Real-time capabilities through subscriptions | Designed for real-time data processing; highly suitable for streaming applications
Tooling and ecosystem support | Abundant tooling and ecosystem support | Growing ecosystem | The need for specialized tools such as messaging platforms like RabbitMQ or Kafka

Conclusion In this article, we’ve explored the key distinctions between different API architectures: REST, GraphQL, and asynchronous APIs. We’ve also looked at scenarios where a particular type of API may be more suitable than others. Looking ahead, the API development landscape is poised for further transformation. Emerging technologies such as machine learning, edge computing, and IoT will drive new demands that necessitate the evolution of API approaches. Also, with the rapid growth of distributed systems, APIs will play a key role in enabling communication. As a developer, it’s extremely important to understand the strengths and limitations of each API style and to select the approach that’s most suitable for a given requirement. This mentality can help developers navigate the API landscape with confidence. This is an excerpt from DZone's 2024 Trend Report, Modern API Management: Connecting Data-Driven Architectures Alongside AI, Automation, and Microservices.
John Vester, Staff Engineer, Marqeta
Alexey Shepelev, Senior Full-stack Developer, BetterUp
Saurabh Dashora, Founder, ProgressiveCoder