IoT

IoT, or the Internet of Things, is a technological field that makes it possible for users to connect devices and systems and exchange data over the internet. Through DZone's IoT resources, you'll learn about smart devices, sensors, networks, edge computing, and many other technologies — including those that are now part of the average person's daily life.

Latest Refcards and Trend Reports

Trend Report: Edge Computing and IoT
Refcard #214: MQTT Essentials
Refcard #263: Messaging and Data Infrastructure for IoT

DZone's Featured IoT Resources

An ARM Cortex-M3-Based Bare-Metal OS

By Shobhit Kukreti
ARM-based systems are ubiquitous in today's world. Most of our smartphones, tablets, smart speakers, smart thermostats, and even data centers are likely powered by an ARM-based processor. Compared to the Intel and AMD x86 chips found in traditional laptops, ARM processors have a smaller form factor, lower power consumption, and come in a variety of flavors. Amongst the multitude of ARM processor offerings, we will pick the ARM Cortex-M series processor and build a bare-metal operating system from scratch. We will use the arm-none-eabi toolchain and QEMU for rapid prototyping. The host system is Ubuntu 18.04, and both the toolchain and QEMU can be installed from the Ubuntu software repository. QEMU can be invoked with the command line below. It emulates the Stellaris board, which has 256K of flash memory and 64K of SRAM.

qemu-system-arm -M lm3s6965evb --kernel main.bin --serial stdio

When you compile a typical C program, whether for ARM or Intel/AMD processors, the structure will look like the code below. The entry point for the program is main. You may use the library function printf to print a statement on a terminal console.

C
int main(int argc, char *argv[])
{
    printf("Hello World\n");
    return 0;
}
// gcc -o main main.c

Under the hood, the compiler and linker add a C runtime library to your code, which supplies the startup code, printf, and everything else that makes your program run. In contrast, a bare-metal firmware has to implement its own startup code, create the linker file, and define an entry point for its code to run. The code block below defines a linker script. It declares the starting address and length of the flash and RAM. The linker takes the object code as input and relocates/copies the different sections of the code to the appropriate addresses defined in the linker file.
C
ENTRY(Reset_Handler)
MEMORY
{
    flash (rx)  : ORIGIN = 0x00000000, LENGTH = 256K
    ram   (rwx) : ORIGIN = 0x20000000, LENGTH = 64K
}
.....
SECTIONS
{
    .text :
    {
        . = ALIGN(4);
        *(.isrvectors)
        *(.text)
        *(.rodata)
        *(.rodata*)
        . = ALIGN(4);
        _endflash = .;
    } > flash
    .data :
    {
        . = ALIGN(4);
        _start_data = .;
        *(vtable)
        *(.data)
        . = ALIGN(4);
        _end_data = .;
    } > ram AT > flash
.....
}

The interrupt vectors, text, and read-only sections are loaded into the flash memory, and our code runs directly from the flash. The mutable data is loaded into the RAM.

Assembly
.align 2
.thumb
.syntax unified
.section .isrvectors
.word vTopRam              /* Top of Stack */
.word Reset_Handler+1      /* Reset Handler */
.word NMI_Handler+1        /* NMI Handler */
.word HardFault_Handler+1  /* Hard Fault Handler */
.word MemManage_Handler+1  /* MPU Fault Handler */
.word BusFault_Handler+1   /* Bus Fault Handler */
.word UsageFault_Handler+1 /* Usage Fault Handler */
.word 0                    /* Reserved */
.word 0                    /* Reserved */
.word 0                    /* Reserved */
.word 0                    /* Reserved */
.word SVC_Handler+1        /* SVCall Handler */
.word DebugMon_Handler+1   /* Debug Monitor Handler */
.word 0                    /* Reserved */
.word PendSV_Handler+1     /* PendSV Handler */
.word SysTick_Handler+1    /* SysTick Handler */

From the interrupt service routine vectors, Reset_Handler, SVC_Handler, and SysTick_Handler are the ones of importance in this tutorial. The following register map is from the TI Stellaris LM3S6965 datasheet. It defines the registers we shall use in our tiny OS.
C
#define STCTRL     (*((volatile unsigned int *)0xE000E010)) // SysTick Control Register
#define STRELOAD   (*((volatile unsigned int *)0xE000E014)) // SysTick Load Timer Value
#define STCURRENT  (*((volatile unsigned int *)0xE000E018)) // Read Current Timer Value
#define INTCTRL    (*((volatile unsigned int *)0xE000ED04)) // Interrupt Control Register
#define SYSPRI2    (*((volatile unsigned int *)0xE000ED1C)) // System Interrupt Priority
#define SYSPRI3    (*((volatile unsigned int *)0xE000ED20)) // System Interrupt Priority
#define SYSHNDCTRL (*((volatile unsigned int *)0xE000ED24))
#define SVC_PEND()  ((SYSHNDCTRL & 0x8000) ? 1 : 0) // Supervisor Call Pending
#define TICK_PEND() ((SYSHNDCTRL & 0x800) ? 1 : 0)  // SysTick Pending

Figure 1: Setup Flow

Our Reset_Handler function is part of the startup code. The Cortex-M architecture defines a handler mode and a thread mode: all exceptions run in handler mode, and user code runs in thread mode. On power-on reset, we are in thread mode. For our OS to function, we require the following:

  • Startup code: reset handler and ISR vectors
  • Exceptions set up for the supervisor/software interrupt and the OS timer
  • Common system calls such as read/write/sleep, plus our custom create_task
  • A Task Control Block (TCB) struct and a circular linked list of TCBs called the Run Queue

The ARM architecture defines a 24-bit SysTick timer, present in all Cortex-M3 SoCs. To keep our OS generic and portable, we use the SysTick timer to generate periodic interrupts (~10 ms) for our OS timer, which is also when our scheduler kicks in to manage tasks. The priority for SVC is kept higher than SysTick in our OS. Reset_Handler is defined below with a jump to c_entry().
Assembly
.thumb_func
Reset_Handler:
    # add assembly initializations here
    LDR r0, =c_entry
    BX  r0

C
#define TICK_PRIO(prio) {SYSPRI3 &= 0x1FFFFFFF; \
                         SYSPRI3 |= (prio << 28); \
                        }
#define SVC_PRIO(prio)  {SYSPRI2 &= 0x1FFFFFFF; \
                         SYSPRI2 |= (prio << 28); \
                        }

The code snippet below shows sample tasks and their addition to our OS's Run Queue. We define three tasks, each similar to the void loop() in Arduino, where code runs forever. Each of our simple tasks prints its task ID and then goes to sleep for a variable amount of time. The write() and sleep() APIs are system calls.

C
typedef void (*CallBack)();
typedef struct _task_struct {
    CallBack func;
    unsigned int priority;
} TASK_STRUCT;
....
// Sample Tasks
void task1()
{
    while (1) {
        write("T1 ", 2);
        // yield cpu
        sleep(1000);
    }
}
...
// Define three tasks with different priorities. Lower number means higher priority.
TASK_STRUCT task[3];
task[0].priority = 8;
task[0].func = &task1;
task[1].priority = 5;
task[1].func = &task2;
task[2].priority = 10;
task[2].func = &task3;
create_task((void *)&task, 3);
...

The ARM Procedure Call Standard specifies which ARM registers are preserved and which are clobbered across a function call. Registers R0-R3 hold the arguments to a function, and R0 also holds the return value; you will notice this in all the exception-handling routines. The assembly code snippet below triggers an SVC interrupt, which jumps to the SVC handler.

Assembly
#define TASK_CREATE 31
....
create_task:
    @ r0-r3 hold the arguments and are saved automatically.
    stmfd sp!, {lr}    // Push return address onto the fully descending stack
    push  {r4-r11}     // Save r4-r11
    SVC   #TASK_CREATE // Supervisor call to jump into handler mode
    pop   {r4-r11}     // Restore the saved registers
    ldmfd sp!, {lr}    // Pop LR
    mov   pc, lr       // Return
...

The code snippet below defines the SVC handler.
From the SVC instruction, we extract the immediate number, #31 in this case, and use it in our C SVC handler function, which initializes our run queue linked list, defined as RUNQ.

Assembly
// SVC Interrupt Handler
SVC_Handler:
    ...
    CPSID i            // Disable system interrupts
    ..
    // Extract the SVC immediate value
    ldr  r1, [sp, #28]
    ldrb r1, [r1, #-2]
    BL   C_SVC_Hndlr   // Branch to the C SVC handler
    CPSIE i            // Enable system interrupts
    BX   LR            // Jump to the return address
...

C
int C_SVC_Hndlr(void *ptr, int svc_num)
{
    int ret = 0, len = 0;
    void *stck_loc = ptr;
    switch (svc_num) {
    case 2: { // Write system call
        char *data = (char *)*(unsigned int *)stck_loc;  // R0 on stack
        len = *((unsigned int *)stck_loc + 1);           // R1 on stack
        put(data, len);                                  // Write to the serial terminal
        break;
    }
    case 4: // Sleep system call
        ms_delay(*(unsigned *)ptr); // *ptr holds the delay value
        break;
    case 31: // Create task system call
        task_create((void *)stck_loc);
        break;
    }
    return ret;
}

After defining our RUNQ linked list, we arm the SysTick timer, point our program counter to the starting address of the first function in our list, and exit handler mode.

C
// Simple Scheduler
void Scheduler(void)
{
    uint8_t max_prio = 64;
    TCB *pt = RUNQ;
    TCB *next = RUNQ;
    // Find a task which is not sleeping and not blocked
    do {
        pt = pt->next;
        if ((pt->priority < max_prio) && (pt->is_blocked == 0) && (pt->sleep == 0)) {
            max_prio = pt->priority;
            next = pt;
        }
    } while (RUNQ != pt);
    RUNQ = next;
}

When the SysTick timer expires, our scheduler function is invoked. It picks the next task in the queue that is not sleeping, is not blocked, and has the highest priority. Now, with our OS implemented, it is time to compile our firmware and run it on QEMU.

Figure 2: QEMU Output

In the QEMU output, we see the task IDs getting printed. Task T2 has the highest priority and gets picked first by our scheduler. It prints its task ID and goes to sleep while yielding the CPU.
The scheduler then picks the next task, T1, with a medium priority until it yields, and finally T3 runs. Since T2 sleeps for twice as long as T1 and T3, we see T1 and T3 run again before T2 gets scheduled back, after which the starting pattern T2, T1, T3 repeats.

Conclusion

We have introduced a simple bare-metal OS that implements system calls and a simple scheduler that loops through all the tasks in the system. Our OS lacks locking primitives like semaphores and mutexes. They can be implemented by adding another linked list of waiting tasks. The mutex lock and unlock operations can be handled with a system call that disables interrupts (and thus the scheduler), which serializes the code. If the lock is already held by another task, the calling task is added to the wait queue and is de-queued when the mutex unlock operation occurs. Overall, this tutorial provides insight into how firmware-based OS/RTOS internals work. It also serves as a template for readers to implement their own OS and expand on the ideas of operating systems: process management, virtual memory, device drivers, etc.
Upgrade Your Hobbyist Arduino Firmware To Work With STM32 For Wider Applications

By Akanksha Jhunjhunwala
If you're new to the DIY IoT community, or even if you're a seasoned maker who needs to spin up a quick prototype that collects some sensor data and automatically takes actions based on it, you probably have an Arduino running some code somewhere in your workshop. Now, if you have been adding more sensors, controls, and peripherals to your little system for a while, until it's not so little anymore, or if you find yourself looking for real-time capabilities or just more power, it might be time to upgrade to a 32-bit ARM Cortex-M based chip such as one from the STM32 family. For the purposes of this tutorial, we will focus on the main advantages of making the switch and the high-level firmware changes needed, along with code examples. I would suggest using an STM32 Discovery board to play with and test the code before moving on to designing a custom PCB with an STM32 chip.

IDE and Setup

If you're used to the Arduino IDE for development, suddenly switching to something more widely used in the industry, like Keil Studio, will probably be too much of a jump. A good middle ground is the STM32CubeIDE. As a summary, here are the basic tools you will need to get started:

  • STM32CubeIDE: Download links
  • STM32CubeMX: An add-on to the STM32 IDE that provides an easy GUI for configuring the microcontroller. Download link
  • An STM32 development board with a programming cable

Here is a good quick-start guide from Digikey for installing and setting up the IDE and connecting to the development board. Next, we will get to the heart of it all: porting over the code.

Porting the Firmware Peripheral Code

The main protocols we will cover in this tutorial, chosen for how widespread they are, include digital read/write, I2C, ADC (for reading analog sensors, for example), and PWM.

1. Digital I/O

This is relatively easy; you just have to replace the digitalWrite() and digitalRead() calls with the respective STM32 HAL functions.
Here is a code example.

C++
// Arduino code for Digital I/O
pinMode(LED_PIN, OUTPUT);
digitalWrite(LED_PIN, HIGH);
int state = digitalRead(LED_PIN);

C
// STM32 HAL code
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_5, GPIO_PIN_SET);
GPIO_PinState state = HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_5);

2. PWM

Controlling PWM-based outputs is relatively complicated unless you're using Arduino libraries built for specific modules. If you want to control an LED strip or servos, for example, it's beneficial to know how to work with PWM signals. Here is an example of setting up a PWM output:

  • In the graphical interface of your STM32CubeIDE, configure Timer2 to operate in PWM Mode and set CH1 as output.
  • Set the RCC mode and configuration as shown in the image in the System Core settings.
  • Hit "Generate Code" from the "Project" menu on the menu bar to auto-generate the code that configures the PWM signal. Here is a screenshot of what it looked like for me.
  • Add some code in your main function to test the PWM output.

C
int main(void)
{
    int32_t dutyCycle = 0;
    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();
    MX_TIM2_Init();
    HAL_TIM_PWM_Start(&htim2, TIM_CHANNEL_1);
    while (1) {
        for (dutyCycle = 0; dutyCycle < 65535; dutyCycle += 70) {
            TIM2->CCR1 = dutyCycle;
            HAL_Delay(1);
        }
        for (dutyCycle = 65535; dutyCycle > 0; dutyCycle -= 70) {
            TIM2->CCR1 = dutyCycle;
            HAL_Delay(1);
        }
    }
}

Now, if you connect the GPIO pin attached to TIM2 to an oscilloscope, you'll see the PWM signal with the duty cycle you set. You can check which GPIO pin is attached to that timer using the configuration view for that timer (TIM2 if you follow the example), as shown in the image below.

3. Analog Read

Another common task you've probably used your Arduino for is reading analog sensors. With an Arduino, it was as simple as calling analogRead(pin_number). On an STM32, it's not much harder; you can follow the steps below:

  • Go to the "Pinout & Configuration" tab.
  • Enable ADC1 and select the channel connected to your analog sensor (e.g., ADC1_IN0 for PA0). Configure the ADC parameters as needed.
  • From the Analog tab, select the ADC you want to use, and select one of the interrupts that doesn't show any conflicts; that is, one not highlighted in red. If you go to the GPIO section, it will show which pin on the MCU it's connected to.
  • "Generate Code" as before to produce the configuration code.

Here is some sample code for your main function to read the analog value:

C
int main(void)
{
    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();
    MX_ADC1_Init();
    HAL_ADC_Start(&hadc1);
    while (1) {
        if (HAL_ADC_PollForConversion(&hadc1, HAL_MAX_DELAY) == HAL_OK) {
            uint32_t adcValue = HAL_ADC_GetValue(&hadc1);
            printf("ADC Value: %lu\n", adcValue);
        }
        HAL_Delay(1000);
    }
}

4. I2C

A lot of industrial-quality sensors, I/O expansion devices, multiplexers, displays, and other useful peripherals commonly communicate over I2C. On an Arduino, you probably used the Wire library to talk to I2C peripherals. Here is how to communicate with an I2C peripheral on an STM32:

  • In the graphical interface, enable I2C1 (or another I2C instance) and configure the pins (e.g., PB6 for I2C1_SCL and PB7 for I2C1_SDA).
  • Configure the I2C parameters as needed (e.g., speed, addressing mode). I kept the default settings for this example.
  • Generate the code.

Here is some sample code for sending and receiving data over I2C.

C
int main(void)
{
    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();
    MX_I2C1_Init();
    uint8_t data = 0x00;
    HAL_I2C_Master_Transmit(&hi2c1, (uint16_t)0x50 << 1, &data, 1, HAL_MAX_DELAY);
    HAL_I2C_Master_Receive(&hi2c1, (uint16_t)0x50 << 1, &data, 1, HAL_MAX_DELAY);
    while (1) {
    }
}

static void MX_I2C1_Init(void)
{
    hi2c1.Instance = I2C1;
    hi2c1.Init.ClockSpeed = 100000;
    hi2c1.Init.DutyCycle = I2C_DUTYCYCLE_2;
    hi2c1.Init.OwnAddress1 = 0;
    hi2c1.Init.AddressingMode = I2C_ADDRESSINGMODE_7BIT;
    hi2c1.Init.DualAddressMode = I2C_DUALADDRESS_DISABLE;
    hi2c1.Init.OwnAddress2 = 0;
    hi2c1.Init.GeneralCallMode = I2C_GENERALCALL_DISABLE;
    hi2c1.Init.NoStretchMode = I2C_NOSTRETCH_DISABLE;
    if (HAL_I2C_Init(&hi2c1) != HAL_OK) {
        Error_Handler();
    }
}

Conclusion

In this article, we covered interacting with peripherals over some of the most common communication protocols on an STM32. If you would like a tutorial on other communication protocols, or have questions about configuring your first STM32 controller, please leave a comment below.
Why Use Rust Over C++ for IoT Solution Development
By Den Smyrnov
Smart Network Onboarding: Revolutionizing Connectivity With AI and Automation
By Raghavaiah Avula
ARM CPU for Cost-Effective Apache Kafka at the Edge and Cloud
By Kai Wähner DZone Core CORE
Explore the Complete Guide to Various Internet of Things (IoT) Protocols

The choice of an IoT protocol is influenced by the complexity of the application and its priorities, and real-time data transmission, power consumption, and security often pull in different directions. For instance, developers might prioritize speed over power saving if the IoT application requires real-time data transmission. On the other hand, if the application deals with sensitive data, a developer might prioritize security over speed. Understanding these trade-offs is critical to making the right protocol choice and puts you in control of the IoT development journey.

As the Internet of Things (IoT) evolves, we witness the birth of new devices and use cases. This dynamic landscape gives rise to more specialized protocols and opens new possibilities for innovation, while older, obsolete protocols naturally phase out, paving the way for more efficient and effective solutions. Let's dive into the world of IoT protocols.

How Many IoT Protocols Are There?

IoT protocols can be broadly classified into two categories: IoT data protocols and IoT network protocols.

IoT Data Protocols

IoT data protocols play an essential role in connecting low-power IoT devices. They facilitate communication with hardware on the user's end without relying on an internet connection, linking devices through wired or cellular networks. Noteworthy examples of IoT data protocols are:

1. Extensible Messaging and Presence Protocol (XMPP)

XMPP is a versatile data transfer protocol rooted in instant messaging technologies like Messenger and Google Hangouts. It is widely used for machine-to-machine communication in IoT, providing reliable and secure communication between devices. XMPP can transfer both unstructured and structured data, making it a safe and flexible communication solution.

2. MQTT (Message Queuing Telemetry Transport)

MQTT is a lightweight protocol that enables seamless data flow between devices. Despite its widespread adoption, it has limitations, such as the lack of a defined data representation and device management structure, and the absence of built-in security measures. Careful consideration is essential when selecting this protocol for your IoT project.

3. CoAP (Constrained Application Protocol)

CoAP is designed for HTTP-based IoT systems. It offers low overhead, ease of use, and multicast support, making it ideal for devices with resource constraints, such as IoT microcontrollers or WSN nodes. Its applications include smart energy and building automation.

4. AMQP (Advanced Message Queuing Protocol)

The Advanced Message Queuing Protocol (AMQP) sends transactional messages between servers. It provides high security and reliability, making it common in server-based analytical environments, especially in banking. However, its heaviness limits its use in IoT devices with limited memory.

5. DDS (Data Distribution Service)

DDS is a scalable protocol that enables high-quality communication in IoT. Like MQTT, DDS works on a publisher-subscriber model. It can be deployed in various settings, making it well suited for real-time and embedded systems. DDS allows for interoperable data exchange independent of hardware and software, positioning it as an open international middleware IoT standard.

6. HTTP (Hypertext Transfer Protocol)

HTTP is rarely the preferred IoT standard due to cost, battery life, power consumption, and weight issues. However, it is still used in the manufacturing and 3-D printing industries, thanks to its ability to handle large amounts of data and to connect PCs to 3-D printers for printing three-dimensional objects.

7. WebSocket

WebSocket, developed as part of HTML5 in 2011, enables message exchange between clients and servers over a single TCP connection. Like CoAP, it simplifies managing connections and bidirectional communication on the internet. It is widely used in IoT networks for continuous data communication across devices in client or server environments.

IoT Network Protocols

Now that we've covered IoT data protocols, let's explore the different IoT network protocols, which connect devices over a network, usually the internet. Noteworthy examples of IoT network protocols are:

1. Lightweight M2M (LWM2M)

IoT devices and sensors require minimal power, necessitating lightweight, energy-efficient communication. Gathering meteorological data, for example, often demands numerous sensors. To minimize energy consumption, experts employ lightweight communication protocols such as Lightweight M2M (LWM2M), which enables efficient remote connectivity.

2. Cellular

Cellular networks like 4G and 5G can connect IoT devices, offering low latency and high data transfer speeds. However, they require a SIM card, which can be costly for many devices spread across a wide area.

3. Wi-Fi

Wi-Fi is a widely known protocol that provides internet connectivity within a specific range. It uses radio waves on particular frequency bands, such as the 2.4 GHz and 5 GHz channels. These bands offer multiple channels for various devices, preventing network congestion. Typically, Wi-Fi connections range from 10 to 100 meters, with their range and speed influenced by the environment and coverage type.

4. Bluetooth

The Bluetooth 4.0 standard uses 40 channels with 2 MHz of channel bandwidth, enabling a maximum data transfer rate of 1 Mbps. Bluetooth Low Energy (BLE) technology is ideal for IoT applications prioritizing flexibility, scalability, and low power consumption.

5. ZigBee

ZigBee-based networks, like Bluetooth, boast a significant IoT user base. ZigBee offers lower power consumption, a longer range (up to 200 meters, compared to Bluetooth's 100 meters), a low data rate, and high security. Its simplicity and ability to scale to thousands of nodes make it an ideal choice for small devices. Many suppliers offer devices that support ZigBee's open standard and its self-assembling, self-healing mesh topology model.

6. Thread

The Thread protocol, like Zigbee, is built on the IEEE 802.15.4 radio standard. It provides efficient internet access to low-powered devices within a small area and offers the stability of Zigbee and Wi-Fi with superior power efficiency. In a Thread network, self-healing capabilities enable specific devices to seamlessly take over the role of a failing router.

7. Z-Wave

Z-Wave is a popular protocol for home applications. It operates in the 800-900 MHz radio frequency band and rarely suffers from interference. However, the exact frequency is location-dependent, so choose the right device for your country. It is best used for home applications rather than in business.

8. LoRaWAN (Long Range WAN)

LoRaWAN is a protocol that enables low-power devices to talk to internet-connected services over a long-range wireless network. It maps to the 2nd and 3rd layers of the OSI (Open Systems Interconnection) model.

Conclusion

Each IoT communication protocol is distinct, with a specific set of parameters that can lead to success in one application or render it completely ineffective in another. Choosing IoT protocols and standards for software development projects is an essential decision, and developers must understand its gravity and determine the proper protocol for their IoT application. As the IoT industry continues to evolve, it brings revolutionary changes in device communication, further underscoring the importance of IoT protocols. In this dynamic landscape, organizations are continually challenged to select the most suitable IoT protocol for their projects.

By Binu Sudhakaran Pillai
A Complete Guide to the Real-Time Streaming Protocol (RTSP)

With video surveillance increasingly becoming a top application of smart technology, video streaming protocols are getting a lot more attention. We’ve recently spent a lot of time on our blog discussing real-time communication, both to and from video devices, and that has finally led to an examination of the Real-Time Streaming Protocol (RTSP) and its place in the Internet of Things (IoT).

What Is the Real-Time Streaming Protocol?

The Real-Time Streaming Protocol is a network control protocol designed for use in entertainment and communications systems to establish and control media streaming sessions. RTSP is how you play, record, and pause media in real time; basically, it acts like the digital form of the remote control you use on your TV at home.

We can trace the origins of RTSP back to 1996, when a collaborative effort between RealNetworks, Netscape, and Columbia University developed it with the intent of creating a standardized protocol for controlling streaming media over the internet. These groups designed the protocol to be compatible with existing network protocols, such as HTTP, but with a focus specifically on the control aspects of streaming media, which HTTP did not adequately address at the time. The Internet Engineering Task Force (IETF) officially published RTSP in April 1998.

Since its inception, IoT developers have used RTSP for various applications, including streaming media over the internet, IP surveillance cameras, and other systems that require real-time delivery of streaming content. It’s important to note that RTSP does not actually transport the streaming data itself; rather, it controls the connection and the streaming, often working in conjunction with other protocols like the Real-time Transport Protocol (RTP) for the transport of the actual media data.
RTSP works on a client-server architecture, in which a software or media player – called the client – sends requests to a second party, i.e., the server. In an IoT interaction, the way this works is typically that the client software is on your smartphone or your computer and you are sending commands to a smart video camera or other smart device that acts as the server. The server will respond to requests by performing a specific action, like playing or pausing a media stream or starting a recording. And you’ll be able to choose what the device does in real-time. Understanding RTSP Requests So, the client in an RTSP connection sends requests. But what exactly does that mean? Basically, the setup process for streaming via RTSP involves a media player or feed monitoring platform on your computer or smartphone sending a request to the camera’s URL to establish a connection. This is done using the “SETUP” command for setting up the streaming session and the “PLAY” command to start the stream. The camera then responds by providing session details so the RTP protocol can send the media data, including details about which transport protocol it will use. Once the camera receives the “PLAY” command through RTSP, it begins to stream packets of video data in real-time via RTP, possibly through a TCP tunnel (more on this later). The media player or monitoring software then receives and decodes these video data packets into viewable video. Here’s a more thorough list of additional requests and their meanings in RTSP: OPTIONS: Queries the server for the supported commands. It’s used to request the available options or capabilities of a server. DESCRIBE: Requests a description of a media resource, typically in SDP (Session Description Protocol) format, which includes details about the media content, codecs, and transport information. SETUP: Initializes the session and establishes a media transport, specifying how the media streams should be sent. 
This command also prepares the server for streaming by allocating necessary resources. PLAY: Starts the streaming of the media. It tells the server to start sending data over the transport protocol defined in the SETUP command. PAUSE: Temporarily halts the stream without tearing down the session, allowing it to be resumed later with another PLAY command. TEARDOWN: Ends the session and stops the media stream, freeing up the server resources. This command effectively closes the connection. GET_PARAMETER: Used to query the current state or value of a parameter on the session or media stream. SET_PARAMETER: Allows the client to change or set the value of a parameter on the session or media stream. Once a request goes through, the server can offer a response. For example, a “200 OK” response indicates a successful completion of the request, while “401 Unauthorized” indicates that the server needs more authentication. And “404 Not Found” means the specified resource does not exist. If that looks familiar, it’s because you’ve probably seen 404 errors and a message like “Web page not found” at least once in the course of navigating the internet. The Real-Time Transport Protocol As I said earlier, RTSP doesn’t directly transmit the video stream. Instead, developers use the protocol in conjunction with a transport protocol. The most common is the Real-time Transport Protocol (RTP). RTP delivers audio and video over networks from the server to the client so you can, for example, view the feed from a surveillance camera on your phone. The protocol is widely used in streaming media systems and video conferencing to transmit real-time data, such as audio, video, or simulation data. Some of the key characteristics of RTP include: Payload type identification: RTP headers include a payload type field, which allows receivers to interpret the format of the data, such as the codec being used. Sequence numbering: Each RTP data packet is assigned a sequence number. 
This helps the receiver detect data loss and reorder packets that arrive out of sequence.
- Timestamping: RTP packets carry timestamp information to enable the receiver to reconstruct the timing of the media stream, maintaining the correct pacing of audio and video playback.

RTP and RTSP are still not enough on their own to handle all the various tasks involved in streaming video data. Typically, a streaming session will also involve the Real-time Transport Control Protocol (RTCP), which provides feedback on the quality of the data distribution, including statistics and information about participants in the streaming session. Finally, RTP itself does not provide any mechanism for ensuring timely delivery or protecting against data loss; instead, it relies on underlying network protocols such as the User Datagram Protocol (UDP) or the Transmission Control Protocol (TCP) to handle data transmission. To put it all together: RTP puts data in packets and transports it via UDP or TCP, RTCP helps with quality control, and RTSP only comes in to set up the stream and act as a remote control.

RTSP via TCP Tunneling

While I said you can use both UDP and TCP to deliver a media stream, I usually recommend RTSP over TCP, specifically using TCP tunneling. Basically, TCP tunneling makes it easier for RTSP commands to get through network firewalls and Network Address Translation (NAT) systems. This is necessary because RTSP in its out-of-the-box form has certain deficiencies when it comes to authentication and privacy. Its features were not built for the internet of today, which is blocked by firewalls on all sides. Rather than being made for devices on local home networks behind NAT systems, RTSP was originally designed for streaming data from central services. For that reason, it struggles to get through firewalls or locate and access cameras behind those firewalls, which limits its possible applications.
However, using TCP tunneling allows RTSP to get through firewalls and enables easy NAT traversal while maintaining strong authentication. It allows you to use an existing protocol and just "package" it in TCP for enhanced functionality. The tunnel can wrap RTSP communication inside a NAT traversal layer to get through the firewall. This is important because it can be difficult to set up a media stream between devices that are on different networks: for example, if you're trying to monitor your home surveillance system while you're on vacation.

Another benefit of TCP tunneling is enhanced security. Whereas RTSP and RTP don't have the out-of-the-box security features of some other protocols, like WebRTC, you can fully encrypt all data that goes through the TCP tunnel. These important factors have made RTSP via TCP tunneling a top option for video streaming within IoT.

Final Thoughts

In summary, while RTSP provides a standardized way to control media streaming sessions, its inherent limitations make it challenging for modern IoT video use cases requiring remote access and robust security. However, by leveraging TCP tunneling techniques, developers can harness the benefits of RTSP while overcoming firewall traversal and encryption hurdles. As video streaming continues to drive IoT innovation, solutions like RTSP over TCP tunneling will be crucial for enabling secure, real-time connectivity across distributed devices and networks. With the right protocols and services in place, IoT developers can seamlessly integrate live video capabilities into their products.
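To make the request/response exchange described above concrete, here is a minimal sketch of serializing an RTSP request and parsing a server's status line. RTSP is a text protocol, much like HTTP; the camera URL and headers below are hypothetical examples, not a complete client.

```python
# Illustrative sketch: building an RTSP request and parsing the status line
# of the server's reply. The URL and header values are hypothetical.

def build_request(method, url, cseq, headers=None):
    """Serialize an RTSP request; lines are CRLF-separated, like HTTP."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"

def parse_status(response):
    """Extract the status code and reason phrase from an RTSP response."""
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    if not version.startswith("RTSP/"):
        raise ValueError(f"not an RTSP response: {status_line!r}")
    return int(code), reason

req = build_request("DESCRIBE", "rtsp://camera.local/stream", cseq=1,
                    headers={"Accept": "application/sdp"})
code, reason = parse_status("RTSP/1.0 200 OK\r\nCSeq: 1\r\n\r\n")
```

A real client would send `req` over the (possibly tunneled) TCP connection and feed the raw reply to `parse_status`, branching on codes such as 200, 401, and 404 as discussed earlier.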

By Carsten Rhod Gregersen
Node-RED Unleashed: Transforming Industrial IoT Development and Industry Collaboration With Hitachi

Node-RED is an open-source, flow-based development tool designed for programming Internet of Things (IoT) applications with ease, and it is a part of the OpenJS Foundation. It provides a browser-based editor where users can wire together devices, APIs, and online services by dragging and dropping nodes into a flow. This visual approach to programming makes it accessible for users of all skill levels to create complex applications by connecting different elements without writing extensive code.

Node-RED has been working on some great improvements lately, including the first beta release of Node-RED 4.0. Updates include auto-complete in flow/global/env inputs, timestamp formatting options, and a better, faster, more compliant CSV node. More to come in the full release next month!

Recently, the OpenJS Foundation talked with Kazuhito Yokoi (横井 一仁), Learning and Development Division, Hitachi Academy, to find out more about Node-RED and why it is becoming so popular in Industrial IoT applications.

A browser-based low-code programming tool sounds great, but how often do users end up having to write code anyway?

It depends on user skills and systems. If users such as factory engineers have no IT skills, they can create flows without coding. The two most common cases are data visualization and sending data to a cloud environment. In these cases, users can create their systems by connecting Node-RED nodes. If users have IT skills, they can more easily customize Node-RED flows. They need to know about SQL when they want to store sensor data. If they want external npm modules, they should understand how to call the function through JavaScript coding, but in both cases, the programming code of a Node-RED node usually fits on a computer screen.

Hitachi is using generative AI based on a Hitachi LLM to support the use of low-code development. Do you personally use ChatGPT with Node-RED? Do you think it will increase efficiency in creating low-code Node-RED flows?
Yes, I do use ChatGPT with Node-RED. Recently, I used ChatGPT to generate code to calculate location data. Calculating direction and distance from two points, including latitude and longitude, is difficult because it requires trigonometric functions. But ChatGPT can automatically generate the source code from the prompt text. In particular, the function-gpt node, developed by FlowFuse, can generate JavaScript code in the Node-RED-specific format within a few seconds. Users just type the prompt text on the Node-RED screen. It's clear to me that using ChatGPT with Node-RED allows IT engineers to reduce their coding time, and it expands the capabilities of factory engineers because they can try to write code themselves.

In addition to factory applications, there's a compelling use case in Japan that underscores the versatility of Node-RED, especially for individuals without an IT skill set. In Tokyo, the Tokyo Mystery Circus, an amusement building, utilizes Node-RED to control its displays and manage complex interactions. The developer behind this project lacked a traditional IT background but needed a way to handle sophisticated tasks, such as controlling the various displays that show writing as part of the gameplay. By using Node-RED, along with ChatGPT for creating complex handling scripts, the developer was able to achieve this. Using these technologies in such a unique environment illustrates how accessible and powerful tools like Node-RED and ChatGPT can be for non-traditional programmers. This example, highlighted in Tokyo and extending to cities like Osaka and Nagoya, showcases the practical application of these technologies in a wide range of settings beyond traditional IT and engineering domains. For more details, the video below (in Japanese) provides insight into how Tokyo Mystery Circus uses Node-RED in its operations.

Why is Node-RED popular for building Industrial IoT applications?
Node-RED was developed in early 2013 as a side project by Nick O'Leary and Dave Conway-Jones of IBM's Emerging Technology Services group and is particularly well-known for its support of IoT protocols like MQTT and HTTP. Because Node-RED has many functions for MQTT, it is ready for use in Industrial IoT. Beyond MQTT, other protocols like OPC UA (a cross-platform, open-source, IEC 62541 standard for data exchange from sensors to cloud applications) and Modbus (a client/server data communications protocol in the application layer) are available through third-party nodes developed by the community. Because Node-RED can connect many types of devices, it is very popular in the Industrial IoT field. In addition, many industrial devices support Node-RED, so users can buy these devices and start using Node-RED quickly.

Why have companies like Microsoft, Hitachi, Siemens, AWS, and others adopted Node-RED?

Regarding Hitachi, Node-RED has emerged as a crucial communication tool bridging the gap between IT and factory engineers, effectively addressing the barriers that exist both in technology and in interpersonal interactions. Within one company, IT and OT (Operational Technology) departments often operate like two distinct entities, which makes communication challenging despite the critical importance of collaboration. To overcome this, Hitachi decided to adopt Node-RED as a primary communication tool in programming. Node-RED's intuitive interface allows the entire flow to be visible on the screen, facilitating discussions and collaborative efforts seamlessly. This approach was put into practice recently when I, as the only IT engineer, visited a Hitachi factory. Initially, while I was typing software code on my own, the factory engineers couldn't grasp the intricacies of the work. However, after I developed a Node-RED flow, it became a focal point of interest, enabling other engineers to gather around and engage with the project actively.
This shift towards a more inclusive and comprehensible method of collaboration underscores the value of Node-RED in demystifying IT for non-specialists. I believe Siemens operates under a similar paradigm, utilizing Node-RED to enhance communication between its IT and engineering departments. Moreover, major companies like Microsoft and AWS are also recognizing the potential of Node-RED. By integrating it within their IT environments, they aim to promote their cloud services more effectively. This wide adoption of Node-RED across different sectors, from industrial giants to cloud service providers, highlights its versatility and effectiveness as a tool for fostering understanding and cooperation across diverse technological landscapes.

How important is Node-RED in the MING (MQTT, InfluxDB, Node-RED, Grafana) stack?

Node-RED is an essential tool in the MING stack because it is a central component that facilitates the connection to other software. The MING stack is designed to facilitate data collection, storage, processing, and visualization, and it brings together the key open-source components of an IoT system. Its importance cannot be overstated, as it connects various software components and represents the easiest way to store and manage data. This functionality underscores its crucial role in the integration and efficiency of the stack, highlighting its indispensability in achieving streamlined data processing and application development.

Node-RED has introduced advanced features like Git Integration, Flow Debugger, and Flow Linter. What's next for improving the developer experience with Node-RED?

The main focus of Node-RED development at the moment is to improve the collaboration tooling - working towards concurrent editing to make it easier for multiple users to work together. Another next step for the community is building a flow testing tool. Flow testing is needed to ensure stability.
There's a request from the community for flow testing capabilities for Node-RED flows. In response, the Node-RED team, with significant contributions from Nick O'Leary (CTO and Founder, FlowFuse, and Node-RED Project Lead), is developing a flow testing tool, primarily as a plugin. A design document for this first implementation, called node-red-flow-tester, is available, allowing users to post issues and contribute feedback, which has been very useful. The tool aims to leverage REST API test frameworks for testing, although it's noted that some components cannot be tested in detail. If made available, this tool would simplify the process of upgrading Node-RED and its JavaScript version, ensuring compatibility with dependency modules.

Simultaneously, my focus has been on documentation and organizing hands-on events related to advanced features such as Git integration. These features are vital, as, without them, users might face challenges in their development projects. On Medium, under the username kazuhitoyokoi, I have published six articles that delve into these advanced features. One article specifically focuses on Git integration and is also available in Japanese, indicating the effort to cater to a broader audience. Furthermore, I have been active on Qiita, a popular Japanese technical knowledge-sharing platform, where I organized the first hands-on event. The full video of the first event is available here (in Japanese). The second event was held on March 18, 2024, and a third event is scheduled for April 26, 2024, showcasing the community's growing interest in these topics and the practical application of Node-RED in development projects. This multifaceted approach, combining tool development, documentation, and community engagement, aims to enhance the Node-RED ecosystem, making it more accessible and user-friendly for developers around the world.
Contributions to the Node-RED community include source code, internationalization of the flow editor, bug reports, feature suggestions, participating in developer meetings, and more. What is the best way to get started contributing to Node-RED?

If you are not a native English speaker, I recommend translating the Node-RED flow editor as a great way to start contributing. Currently, users can contribute to the Node-RED project by creating a JSON file that contains local-language messages. If a user finds a bug, they should try inspecting the code. The Node-RED source code is very easy to understand. After trying the fix, the user can make a pull request.

Conclusion

The interview shows that Node-RED is an essential tool for improving collaboration between different professionals, without technical barriers, in the development of Industrial IoT applications. Discover the potential of Node-RED for your projects and contribute to the Node-RED project. The future of Node-RED is in our hands!

Resources

- Node-RED main site
- To get an invite to the Node-RED Slack

By Jesse Casman
Using My New Raspberry Pi To Run an Existing GitHub Action

Recently, I mentioned how I refactored the script that keeps my GitHub profile up-to-date. Since Geecon Prague, I'm also a happy owner of a Raspberry Pi. Though the current setup works flawlessly (and is free), I wanted to experiment with self-hosted runners. Here are my findings.

Context

GitHub offers a large free usage of GitHub Actions:

"GitHub Actions usage is free for standard GitHub-hosted runners in public repositories, and for self-hosted runners. For private repositories, each GitHub account receives a certain amount of free minutes and storage for use with GitHub-hosted runners, depending on the account's plan. Any usage beyond the included amounts is controlled by spending limits." — About billing for GitHub Actions

Yet, the policy can easily change tomorrow. Free-tier policies show a regular trend of shrinking when:

- A large enough share of users use the product (lock-in)
- Shareholders want more revenue
- A new finance manager decides to cut costs
- The global economy shrinks
- A combination of the above

Forewarned is forearmed. I like to try options before I need to choose one. Case in point: what if I need to migrate?

The Theory

GitHub Actions comprise two components:

- The GitHub Actions infrastructure itself, which hosts the scheduler of jobs
- Runners, which run the jobs

By default, jobs run on GitHub's runners. However, it's possible to configure one's job to run on other runners, whether on-premise or in the cloud: these are called self-hosted runners. The documentation regarding how to create self-hosted runners gives all the necessary information to build one, so I won't paraphrase it. I noticed two non-trivial issues, though. First, if you have jobs in different repositories, you need to set up a runner for each repository. Runner groups are only available for organization repositories. Since most of my repos depend on my regular account, I can't use groups. Hence, you must duplicate each repository's runner package on the Pi.
In addition, there's no dedicated package: you must untar an archive. This means there's no way to upgrade the runner version easily. That being said, I expected the migration to be one line long:

YAML

jobs:
  update:
    #runs-on: ubuntu-latest
    runs-on: self-hosted

It's a bit more involved, though. Let's detail what steps I had to undertake in my repo to make the job work.

The Practice

GitHub Actions depend on Docker being installed on the runner. Because of this, I thought jobs ran in a dedicated image: that's plain wrong. Whatever you script in your job happens on the running system. Case in point, the initial script installed Python and Poetry:

YAML

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Python 3.x
        uses: actions/setup-python@v5
        with:
          python-version: 3.12
      - name: Set up Poetry
        uses: abatilo/actions-poetry@v2
        with:
          poetry-version: 1.7.1

In the context of a temporary container created during each run, this makes sense; in the context of a stable, long-running system, it doesn't. Raspbian, the Raspberry Pi's default operating system, already has Python 3.11 installed. Hence, I had to downgrade the version configured in Poetry. It's no big deal because I don't use any Python 3.12-specific feature.

TOML

[tool.poetry.dependencies]
python = "^3.11"

Raspbian forbids the installation of any Python dependency in the primary environment, which is a very sane default. To install Poetry, I used the regular APT package manager:

Shell

sudo apt-get install python-poetry

The next step was to handle secrets. On GitHub, you set the secrets in the GUI and reference them in your scripts via environment variables:

YAML

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - name: Update README
        run: poetry run python src/main.py --live
        env:
          BLOG_REPO_TOKEN: ${{ secrets.BLOG_REPO_TOKEN }}
          YOUTUBE_API_KEY: ${{ secrets.YOUTUBE_API_KEY }}

This allows segregating individual steps so that a step has access to only the environment variables it needs.
For self-hosted runners, you set environment variables in an existing .env file inside the runner folder, so the step no longer declares them:

YAML

jobs:
  update:
    runs-on: self-hosted
    steps:
      - name: Update README
        run: poetry run python src/main.py --live

If you want a more secure setup, you're on your own. Finally, the architecture is a pull-based model: the runner constantly checks whether a job is scheduled. To make the runner a service, we use the out-of-the-box scripts inside the runner folder:

Shell

sudo ./svc.sh install
sudo ./svc.sh start

The script uses systemd underneath.

Conclusion

Migrating from a GitHub runner to a self-hosted runner is not a big deal but requires changing some bits and pieces. Most importantly, you need to understand that the script runs on the machine. This means you need to automate the provisioning of a new machine in case of crashes. I'm considering the benefits of running the runner inside a container on the Pi to roll back to my previous steps. I'd be happy to hear if you have found and used such a solution. In any case, I'm not migrating any more jobs to self-hosted for now.

To Go Further

- About billing for GitHub Actions
- About self-hosted runners
- Configuring the self-hosted runner application as a service
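As a side note on the .env approach mentioned above, here is a minimal sketch of the KEY=VALUE format involved. This is illustrative only (the runner loads its own .env file itself), and the secret names are the hypothetical ones from the workflow snippet.

```python
# Minimal sketch of reading KEY=VALUE pairs, the format used by a runner's
# .env file. Illustrative only; the secret names below are hypothetical.
import os

def load_env(text):
    """Parse KEY=VALUE lines, skipping blank lines and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# secrets for the self-hosted runner (hypothetical values)
BLOG_REPO_TOKEN=ghp_example
YOUTUBE_API_KEY=AIza_example
"""
secrets = load_env(sample)
os.environ.update(secrets)  # expose them to the job's process
```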

By Nicolas Fränkel
Taming the Tiny Titan: Database Solutions for RAM-Constrained IoT Devices

The Internet of Things (IoT) is rapidly expanding, creating a tapestry of networked gadgets that produce a symphony of data. However, for many of these devices, particularly those located at the edge, processing power and memory are valuable resources. Traditional databases meant for powerful servers simply will not work on these resource-constrained devices. So, how do we store and manage data on these RAM-constrained miniature titans?

The RAM Reaper: Understanding the Challenge

Before diving into the solutions, let's acknowledge the enemy: limited RAM. Unlike their server counterparts, many IoT devices operate with mere kilobytes (KB) of RAM. Storing and manipulating data within these constraints requires a different approach. Traditional relational databases, with their hefty overhead and complex queries, simply won't do. We need leaner, meaner machines specifically designed for the edge.

Key Considerations for Choosing Your Database Warrior

When selecting a database for your RAM-constrained warrior, several key factors need to be considered:

- Data type: What kind of data will you be storing? Simple key-value pairs? Complex sensor readings? Time-series data with timestamps? Different databases excel at handling different data types.
- Query needs: How complex will your data queries be? Do you need basic filtering or intricate joins and aggregations? Certain databases offer more powerful querying capabilities than others.
- ACID compliance: Is data integrity paramount? If so, you'll need a database that guarantees Atomicity, Consistency, Isolation, and Durability (ACID) properties.
- Community and support: A vibrant community and active support ecosystem can be invaluable for troubleshooting and finding answers.

The Contenders: A Tour of RAM-Friendly Databases

Key-Value Stores

- RocksDB: Blazing-fast performance and a tiny footprint. Not ACID-compliant, but offers concurrent transactions and supports various languages.
- LevelDB: A veteran in the ring, known for simplicity and efficiency. Similar to RocksDB, it provides basic CRUD operations and ACID guarantees.
- SQLite: Though primarily file-based, it surprisingly shines on RAM-constrained devices due to its self-contained nature and minimal footprint. It even offers SQL querying capabilities.

Embedded Databases

- ObjectBox: Designed specifically for edge IoT, it packs a punch with a memory footprint under 1 MB and ACID compliance. It supports various languages and offers object-oriented data management.
- Berkeley DB: A veteran contender that brings experience and efficiency. With a small library size and minimal runtime requirements, it's a solid choice for resource-constrained devices.
- SQLite3 RTree: A spatial extension to SQLite that empowers you to store and query location-based data efficiently, ideal for resource-constrained devices with geographical needs.

Time-Series Databases

- InfluxDB: Built specifically for time-series data, the Usain Bolt of the ring, optimized for storing and retrieving large datasets with minimal RAM usage.
- TimescaleDB: Transforms PostgreSQL into a powerful time-series database, offering SQL compatibility and efficient data handling.

Cloud-Based Options

- Firebase Realtime Database: Though data is not stored directly on the device, this cloud-based NoSQL database synchronizes data efficiently, minimizing local storage and RAM usage.

Choosing Your Champion: Matchmaking for Maximum Efficiency

The best database for your project depends on a dance between your specific needs and the strengths of each contender. Here's a quick matchmaking guide:

- Simple key-value data: RocksDB or LevelDB
- Complex data structures: ObjectBox or SQLite
- Time-series data: InfluxDB or TimescaleDB
- Complex queries: SQLite or PostgreSQL-based options
- Data integrity: Choose ACID-compliant options like Berkeley DB or ObjectBox

Beyond the Database: Optimizing for Efficiency

Remember, even the most RAM-friendly database requires careful data management.
Consider filtering and downsampling data before storing it on the device to further minimize memory usage.

The Final Round: A Symphony of Data, Not RAM Exhaustion

With the right database warrior by your side, your RAM-constrained IoT device can transform data into insights, not a burden. Remember, the key is to understand your specific needs, carefully evaluate the contenders, and optimize your data management practices.

Beyond the Database: Additional Considerations

While choosing the right database is crucial, there are additional factors to consider for optimal performance:

- Hardware: Pair your database with appropriate hardware, balancing processing power and RAM limitations.
- Data lifecycle management: Implement strategies for data retention, deletion, and aggregation to avoid data overload.
- Security: Ensure proper security measures are in place to protect sensitive data stored on the device.
- Testing and monitoring: Regularly test your chosen database and closely monitor its performance to identify any bottlenecks or inefficiencies.

The Future of RAM-Friendly Databases

The landscape of RAM-friendly databases is constantly evolving. As IoT devices become more sophisticated and generate even richer data, we can expect advancements in areas like:

- In-memory databases: Storing data directly in RAM offers lightning-fast performance for specific use cases.
- Hybrid approaches: Combining different database types based on data needs can further optimize performance and efficiency.
- AI-powered optimization: Future databases might leverage AI to automatically optimize data storage and retrieval based on real-time usage patterns.

The Takeaway: A Journey, Not a Destination

Choosing the best database for your RAM-limited IoT device is not a one-time choice. It is a voyage of discovery, assessment, and adaptation.
Understanding your goals, exploiting the many alternatives available, and consistently optimizing your approach will guarantee your device becomes a symphony of data rather than a RAM-constrained burden. So, go into this journey with confidence, knowing that there’s a champion database out there eager to join your IoT dance!
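To ground the SQLite recommendation above, here is a tiny key-value store built on the standard-library sqlite3 module, the kind of minimal-footprint setup the article suggests for constrained devices. The table and key names are illustrative.

```python
# Sketch of a tiny key-value store on SQLite (Python stdlib sqlite3).
# Table name, key format, and values are illustrative examples.
import sqlite3

class KVStore:
    def __init__(self, path=":memory:"):
        # ":memory:" keeps everything in RAM; pass a file path to persist
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def put(self, key, value):
        self.db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
        self.db.commit()

    def get(self, key, default=None):
        row = self.db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else default

store = KVStore()
store.put("sensor/1/temp", "21.5")
reading = store.get("sensor/1/temp")
```

On a real device you would point `path` at flash storage and, per the advice above, downsample readings before writing them.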

By Aditya Bhuyan
MQTT Market Trends for 2024: Cloud, Unified Namespace, Sparkplug, Kafka Integration

The lightweight and open IoT messaging protocol MQTT has been adopted more widely across industries. This blog post explores relevant market trends for MQTT: cloud deployments and fully managed services, data governance with unified namespace and Sparkplug B, MQTT vs. OPC-UA debates, and the integration with Apache Kafka for OT/IT data processing in real time.

MQTT Summit in Munich

In December 2023, I attended the MQTT Summit Connack. HiveMQ sponsored the event. The agenda included various industry experts, with talks covering industrial IoT deployments, unified namespace, Sparkplug B, security and fleet management, and use cases for Kafka combined with MQTT, like connected vehicles or smart cities (my talk). It was a pleasure to meet many industry peers of the MQTT community, independent consultants, and software vendors. I learned a lot about the adoption of MQTT in the real world, best practices, and a few trade-offs of Sparkplug B. The following sections summarize my trends for MQTT from this event, combined with experiences from customer meetings around the world this year. Special thanks to Kudzai Manditereza of HiveMQ for organizing this great event with many international attendees across industries.

What Is MQTT?

MQTT stands for Message Queuing Telemetry Transport. MQTT is a lightweight and open messaging protocol designed for small sensors and mobile devices with high-latency or unreliable networks. IBM originally developed MQTT in the late 1990s; it later became an open standard.

MQTT follows a publish/subscribe model, where devices (or clients) communicate through a central message broker. The key components in MQTT are:

- Client: The device or application that connects to the MQTT broker to send or receive messages.
- Broker: The central hub that manages the communication between clients. It receives messages from publishing clients and routes them to subscribing clients based on topics.
- Topic: A hierarchical string that acts as a label for a message.
Clients subscribe to topics to receive messages and publish messages to specific topics.

When To Use MQTT

The publish/subscribe model allows for efficient communication between devices. When a client publishes a message to a specific topic, all other clients subscribed to that topic receive the message. This decouples the sender and receiver, enabling a scalable and flexible communication system.

The MQTT standard is known for its simplicity, low bandwidth usage, and support for unreliable networks. These characteristics make it well-suited for Internet of Things (IoT) applications, where devices often have limited resources and may operate under challenging network conditions. Good MQTT implementations provide a scalable and reliable platform for IoT projects. MQTT has gained widespread adoption in various industries for IoT deployments, home automation, and other scenarios requiring lightweight and efficient communication.

I discuss the following four market trends for MQTT in the following sections. These have a huge impact on adoption and on the decision to choose MQTT:

- MQTT in the Public Cloud
- Data Governance for MQTT
- MQTT vs. OPC-UA Debates
- MQTT and Apache Kafka for OT/IT Data Processing

Trend 1: MQTT in the Public Cloud

Most companies have a cloud-first strategy. Go serverless if you can! Focus on business problems, faster time-to-market, and an elastic infrastructure are the consequences. Mature MQTT cloud services exist. At Confluent, we work a lot with HiveMQ. The combination even provides a fully managed integration between both cloud offerings. Having said that, not everything can or should go to the (public) cloud. Security, latency, and cost often make deployment in the data center or at the edge (e.g., in a smart factory) the preferred or mandatory option. Hybrid architectures allow the combination of both options for building the most cost-efficient but also reliable and secure IoT infrastructure.
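The topic-based routing described above hinges on MQTT's wildcard rules: `+` matches exactly one topic level, and `#` (which must be the last element of a filter) matches all remaining levels. Here is a small sketch of that matching logic; it is illustrative and skips broker edge cases such as `$`-prefixed system topics.

```python
# Sketch of MQTT topic filter matching: '+' matches one topic level,
# '#' matches all remaining levels. Illustrative; real brokers also
# handle edge cases like $-prefixed topics.
def topic_matches(filter_, topic):
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":            # multi-level wildcard: matches the rest
            return True
        if i >= len(t_parts):   # filter is longer than the topic
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)
```

A broker applies this kind of test for every subscription when routing a published message to clients.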
Automation and Security Are the Typical Blockers for Public Cloud

The key to success, especially in hybrid architectures, is automation and fleet management with CI/CD and GitOps for multi-cluster management. Many projects leverage Kubernetes as a cloud-native infrastructure for the edge and private cloud. However, in the public cloud, the first option should always be a fully managed service (if security and other requirements allow it).

Be careful when adopting fully managed MQTT cloud services: support for MQTT is not equal across the cloud vendors. Many vendors do not implement the entire protocol, miss features, and impose usage limitations. HiveMQ wrote a great article showing this. The article is a bit outdated (and opinionated, of course, as a competing MQTT vendor), but it shows very well how some vendors provide offerings that are far away from a good MQTT cloud solution.

The hardest problem for public cloud adoption of MQTT is security! Double-check the requirements early. Latency, availability, or specific features are usually not the problem. The deployment and integration need to be compliant and follow the cloud strategy. As Industrial IoT projects always have to include some kind of edge story, this is a tougher discussion than for sales or marketing projects.

Trend 2: Data Governance for MQTT

Data governance is crucial across the enterprise. From an IoT and MQTT perspective, the two main topics are unified namespace as the concept and Sparkplug B as the technology.

Unified Namespace for Industrial IoT

In the context of the Industrial Internet of Things (IIoT), a unified namespace (UNS) typically refers to a standardized and cohesive way of naming and organizing devices, data, and resources within an industrial network or ecosystem. The goal is to provide a consistent naming structure that facilitates interoperability, data sharing, and management of IIoT devices and systems.
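A consistent naming structure like the one just described can be enforced in code. The sketch below builds UNS-style topic paths from a fixed hierarchy; the level names (enterprise/site/area/line/device) follow the ISA-95 flavor often used in UNS discussions, and all values are hypothetical examples rather than a fixed standard.

```python
# Illustrative sketch: enforcing a consistent unified-namespace hierarchy.
# Level names and values are examples, not a fixed standard.
LEVELS = ["enterprise", "site", "area", "line", "device"]

def uns_topic(**parts):
    """Build a topic like acme/berlin/assembly/line4/robot7 from named levels."""
    missing = [lvl for lvl in LEVELS if lvl not in parts]
    if missing:
        raise ValueError(f"missing levels: {missing}")
    return "/".join(parts[lvl] for lvl in LEVELS)

topic = uns_topic(enterprise="acme", site="berlin", area="assembly",
                  line="line4", device="robot7")
```

Rejecting incomplete paths at publish time is one simple way to keep every producer on the same naming convention.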
The term Unified Namespace (in Industrial IoT) was coined and popularized by Walker Reynolds, an expert and content creator for Industrial IoT.

Concepts of Unified Namespace

Here are some key aspects of a unified namespace in Industrial IoT:

- Device naming: Devices in an IIoT environment may come from various manufacturers and have different functionalities. A unified namespace ensures that devices are named consistently, making it easier for administrators, applications, and other devices to identify and interact with them.
- Data naming and tagging: IIoT involves the generation and exchange of vast amounts of data. A unified namespace includes standardized naming conventions and tagging mechanisms for data points, variables, or attributes associated with devices. This consistency is crucial for applications that need to access and interpret data across different devices.
- Interoperability: A unified namespace promotes interoperability by providing a common framework for devices and systems to communicate. When devices and applications follow the same naming conventions, it becomes easier to integrate new devices into existing systems or replace components without causing disruptions.
- Security and access control: A well-defined namespace contributes to security by enabling effective access control mechanisms. Security policies can be implemented based on standardized names and hierarchies, ensuring that only authorized entities can access specific devices or data.
- Management and scalability: In large-scale industrial environments, having a unified namespace simplifies device and resource management. It allows for scalable solutions where new devices can be added or replaced without requiring extensive reconfiguration.
- Semantic interoperability: Beyond just naming, a unified namespace may include semantic definitions and standards.
This helps in achieving semantic interoperability, ensuring that devices and systems understand the meaning and context of the data they exchange.

Overall, a unified namespace in Industrial IoT is about establishing a common and standardized structure for naming devices, data, and resources, providing a foundation for efficient, secure, and scalable IIoT deployments. Standards organizations and industry consortia often play a role in developing and promoting these standards to ensure widespread adoption and compatibility across diverse industrial ecosystems.

Sparkplug B: Interoperability and Standardized Communication for MQTT Topics and Payloads

Unified Namespace is the theoretical concept for interoperability. The standardized implementation for payload structure enforcement is Sparkplug B. This specification was created at the Eclipse Foundation and later turned into an ISO standard. Sparkplug B provides a set of conventions for organizing data and defining a common language for devices to exchange information. HiveMQ provides a good illustration of how a unified namespace makes communication between devices, systems, and sites easier (Source: HiveMQ).

Key features of Sparkplug B include:

- Payload structure: Sparkplug B defines a specific format for the payload of MQTT messages. This format includes fields for information such as timestamps, data types, and values. This standardized payload structure ensures that devices can consistently understand and interpret the data being exchanged.
- Topic namespace: The specification defines a standardized topic namespace for MQTT messages. This helps in organizing and categorizing messages, making it easier for devices to discover and subscribe to relevant information.
- Birth and death certificates: Sparkplug B introduces the concept of "Birth" and "Death" certificates for devices. When a device comes online, it sends a Birth certificate with information about itself.
Conversely, when a device goes offline, it sends a Death certificate. This mechanism aids in monitoring the status of devices within the IIoT network.
- State management: The specification includes features for managing the state of devices. Devices can publish their current state, and other devices can subscribe to receive updates. This helps in maintaining a synchronized view of device states across the network.

Sparkplug B is intended to enhance the interoperability, scalability, and efficiency of IIoT deployments by providing a standardized framework for MQTT communication in industrial environments. Its adoption can simplify the integration of diverse devices and systems within an industrial ecosystem, promoting seamless communication and data exchange.

Limitations of Sparkplug B

Sparkplug B has a few limitations:

- It only supports Quality of Service (QoS) 0, providing at-most-once message delivery guarantees.
- It limits the structure of topic namespaces.
- It is very device-centric (but MQTT is for many "things").

Understand the pros and cons of Sparkplug B. It is perfect for some use cases, but the above limitations are blockers for others. Especially the restriction to QoS 0 is a huge limitation for mission-critical use cases.

Trend 3: MQTT vs. OPC-UA Debates

MQTT has many benefits compared to other industrial protocols. However, OPC-UA is another standard in the IoT space that gets at least as much traction in the market as MQTT. The debate about choosing the right IoT standard is controversial, often led by emotions and opinions, and still absolutely valid to discuss. OPC-UA (Open Platform Communications Unified Architecture) is a machine-to-machine communication protocol for industrial automation. It enables seamless and secure communication and data exchange between devices and systems in various industrial settings.
OPC-UA has become a widely adopted standard in the industrial automation and control domain, providing a foundation for secure and interoperable communication between devices, machines, and systems. Its open nature and support from industry organizations contribute to its widespread use in applications ranging from manufacturing and process control to energy management and more.

If you look at the promises of MQTT and OPC-UA, a lot of overlap exists. Both standards claim to be:

- Scalable
- Reliable
- Real-time
- Open
- Standardized

All of these are true for both standards. Still, trade-offs exist. I won't start a flame war here. Just search for "MQTT vs. OPC-UA". You will find many blog posts, articles, and videos. Most are very opinionated (and often driven by a vendor). The reality is that the industry has adopted both MQTT and OPC-UA widely. And while the above characteristics might all be true for both standards in general, the details make the difference for specific implementations. For instance, if you try to connect plenty of Siemens S7 PLCs via OPC-UA, you quickly realize that the number of parallel connections is not as scalable as the OPC-UA standard specification tells you.

When To Choose MQTT vs. OPC-UA?

The clear recommendation is to start with the business problem, not the technology. Evaluate both standards and their implementations, supported interfaces, vendors' cloud services, etc. Then choose the right technology. Here is the simplified rule of thumb I use to start a technical discussion:

- MQTT: Use cases for connected IoT devices, vehicles, and other interfaces with support for lightweight infrastructure, a large number of connections, and/or bad networks.
- OPC-UA: Use cases for industrial automation to connect heavy equipment, PLCs, SCADA systems, data historians, etc.

This is just a rule of thumb, and the situation changes. Modern PLCs and other equipment add support for multiple protocols to be more flexible.
But, nowadays, you rarely have an option anyway because specific equipment, devices, or vehicles only support one or the other. And you can still be happy: Otherwise, you would need yet another IIoT platform to connect to proprietary legacy protocols like S7, Modbus, et al.

MQTT and OPC-UA Gotchas

A few additional gotchas I have picked up from customer conversations around the world in the past quarters:

- In theory, MQTT and OPC-UA work well together, i.e., MQTT as the underlying transport protocol for OPC-UA. I have not seen this yet in the real world (no statistical evidence, just my personal experience). What I do see is the combination of OPC-UA for the last-mile integration to the PLC and then forwarding the data to other consumers via MQTT. All in a single gateway, usually a proprietary IoT platform.
- OPC-UA defines many sub-standards for different industries or use cases. In theory, this is great. In practice, I see this more like the WS-* hell in the SOAP/WSDL web service world, where most projects moved to much simpler HTTP/REST architectures. Similarly, most OPC-UA integrations I see use simple, custom-coded clients in Java or other programming languages — because the tools don't support the complex standards.
- IoT vendors pitch any possible integration scenario in their marketing. I am amazed that MQTT and OPC-UA platforms directly integrate with MES and ERP systems like SAP, and with any data warehouse or data lake, like Google BigQuery, Snowflake, or Databricks. But that's only the theory. Should you really do this? And did you ever try to connect SAP ECC to MQTT or OPC-UA? Good luck from a technical, and even harder, from an organizational perspective. And do you want tight coupling and point-to-point communication between the OT world and the ERP? In most cases, it is a good thing to have a clear separation of concerns between different business units, domains, and use cases.
Choose the right tool and enterprise architecture, not just for the POC and first pipeline, but for the entire long-term strategy and vision. The last point brings me to another growing trend: the combination of MQTT for IoT/OT workloads and data streaming with Apache Kafka for integration with the IT world.

Trend 4: MQTT and Apache Kafka for OT/IT Data Processing

Contrary to MQTT, Apache Kafka is NOT an IoT platform. Instead, Kafka is an event streaming platform and provides the underpinning of an event-driven architecture for various use cases across industries. It provides a scalable, reliable, and elastic real-time platform for messaging, storage, data integration, and stream processing. Apache Kafka and MQTT are a perfect combination for many IoT use cases. Let's explore the pros and cons of both technologies from the IoT perspective.

Trade-Offs of MQTT

MQTT's pros:

- Lightweight
- Built for thousands of connections
- All programming languages supported
- Built for poor connectivity / high latency scenarios
- High scalability and availability (depending on broker implementation)
- ISO standard
- Most popular IoT protocol (competing with OPC-UA)

MQTT's cons:

- Adoption mainly in IoT use cases
- Only pub/sub, not stream processing
- No reprocessing of events

Trade-Offs of Apache Kafka

Kafka's pros:

- Stream processing, not just pub/sub
- High throughput
- Large scale
- High availability
- Long-term storage and buffering
- Reprocessing of events
- Good integration with the rest of the enterprise

Kafka's cons:

- Not built for tens of thousands of connections
- Requires a stable network and good infrastructure
- No IoT-specific features like keep alive or last will and testament

Use Cases, Architectures, and Case Studies for MQTT and Kafka

I wrote a blog series about MQTT in conjunction with Apache Kafka with many more technical details and real-world case studies across industries. The first blog post explores the relationship between MQTT and Apache Kafka.
Afterward, the other four blog posts discuss various use cases, architectures, and reference deployments:

- Part 1 – Overview: Relation between Kafka and MQTT, pros and cons, architectures
- Part 2 – Connected Vehicles: MQTT and Kafka in a private cloud on Kubernetes; use case: remote control and command of a car
- Part 3 – Manufacturing: MQTT and Kafka at the edge in a smart factory; use case: bidirectional OT-IT integration with Sparkplug B between PLCs, IoT gateways, data historian, MES, ERP, data lake, etc.
- Part 4 – Mobility Services: MQTT and Kafka leveraging serverless cloud infrastructure; use case: traffic jam prediction service using machine learning
- Part 5 – Smart City: MQTT at the edge connected to fully managed Kafka in the public cloud; use case: intelligent traffic routing by combining and correlating different 1st and 3rd party services

My talk at the MQTT Summit explores various use cases and reference architectures for MQTT and Apache Kafka in more depth. If you have a bad network, tens of thousands of clients, or the need for a lightweight push-based messaging solution, then MQTT is the right choice. Otherwise, Kafka, a powerful event streaming platform, is probably the right choice for real-time messaging, data integration, and data processing. In many IoT use cases, the architecture combines both technologies. Even in the industrial space, various projects use Kafka for use cases like building a cloud-native data historian or real-time condition monitoring and predictive maintenance.

Data Governance for MQTT With Sparkplug and Kafka (And Beyond)

Unified Namespace and its concrete implementation with Sparkplug B are excellent for data governance in IoT workloads with MQTT. In a similar way, the Schema Registry defines the data contracts for Apache Kafka data pipelines. Schema Registry should be the foundation of any Kafka project!
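To make the idea of a data contract concrete, here is a hedged sketch, not Schema Registry's actual API: a hypothetical schema for a sensor reading, with a producer-side check that rejects records that violate it. Schema Registry enforces this idea formally with Avro, JSON Schema, or Protobuf:

```javascript
// Hypothetical data contract for a sensor reading (illustrative only).
const sensorReadingContract = {
  deviceId: 'string',
  timestamp: 'number',
  temperatureC: 'number',
};

// Validate a record against the contract before producing it to Kafka,
// so downstream consumers can rely on the structure.
function validateAgainstContract(contract, record) {
  for (const [field, type] of Object.entries(contract)) {
    if (typeof record[field] !== type) {
      return { ok: false, error: `field '${field}' must be a ${type}` };
    }
  }
  return { ok: true };
}

const good = validateAgainstContract(sensorReadingContract, {
  deviceId: 'plc-7', timestamp: 1700000000000, temperatureC: 21.5,
});
console.log(good.ok); // true

const bad = validateAgainstContract(sensorReadingContract, {
  deviceId: 'plc-7', timestamp: 'yesterday', temperatureC: 21.5,
});
console.log(bad.error); // field 'timestamp' must be a number
```

The point is that the contract, not the consumer, is the place where data quality is enforced; bad records are stopped at the producer instead of breaking every downstream pipeline.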
Data contracts (aka schemas, similar to Swagger in REST/HTTP APIs) enforce good data quality and interoperability between independent microservices in the Kafka ecosystem. Each business unit and its data products can choose any technology or API. However, data sharing with others only works with good (enforced) data quality.

You can see the issue: Each technology uses its own data governance technology. If you add your favorite data lake, you will add another concept, like Apache Iceberg, to define the data tables for analytics storage systems. And that's okay! Each data governance suite is optimized for its workloads and requirements. Company-wide master data management failed in the last two decades because each software category has different requirements. Hence, one clear trend I see is an enterprise-wide data governance strategy across the different systems (with technologies like Collibra or Azure Purview) that has open interfaces and integrates with specific data contracts like Sparkplug B for MQTT, Schema Registry for Kafka, Swagger for HTTP/REST applications, or Iceberg for data lakes. Don't try to solve the entire enterprise-wide data governance strategy with a single technology. It will fail! We have seen this before...

Legacy PLCs (S7, Modbus, BACnet, etc.) With MQTT or Kafka?

MQTT and Kafka enable reliable and scalable end-to-end data pipelines between IoT and IT systems. At least, if you can use modern APIs and standards. Most IoT projects today are still brownfield. A lot of legacy PLCs, SCADA systems, and data historians only support proprietary protocols like Siemens S7, Modbus, BACnet, and so on. MQTT and Kafka don't support these legacy protocols out of the box. Another middleware is required. Usually, enterprises choose a dedicated IoT platform for this. That means more cost and complexity, and slower projects. In the Kafka world, Apache PLC4X is a great open-source option if you want to build a modern, cloud-native data historian with Kafka.
The framework provides integration with many legacy protocols, and it offers a Kafka Connect connector. The main issue is that there is no official vendor support behind it. Companies cannot buy support with a 24/7 business model for mission-critical applications, and that's typically a blocker for any industrial deployment.

As MQTT is only a pub/sub message broker, it cannot help with legacy protocol integration. HiveMQ tries to solve this challenge with a new framework called HiveMQ Edge: a software-based industrial edge protocol converter. It is a young project that is just kicking off. The core is open source. The first supported legacy protocol is Modbus. I think this is an excellent product strategy. I hope the project gets traction and evolves to support many other legacy IIoT technologies to modernize the brownfield shop floor. The project actually also supports OPC-UA. We will see how much demand that feature creates, too.

MQTT and Sparkplug Adoption Grows Year by Year for IoT Use Cases

In the IoT world, MQTT and OPC-UA have established themselves as open and platform-independent standards for data exchange in Industrial IoT and Industry 4.0 use cases. Data streaming with Apache Kafka is the data hub for integrating and processing massive volumes of data at any scale in real time. "The Trinity of Data Streaming in IoT" explores the combination of MQTT, OPC-UA, and Apache Kafka in more detail.

MQTT adoption grows year by year with the need for more scalable, reliable, and open IoT communication between devices, equipment, vehicles, and the IT backend. The sweet spots of MQTT are unreliable networks, lightweight (but reliable and scalable) communication and infrastructure, and connectivity to thousands of things. Maturing trends like the Unified Namespace with Sparkplug B, fully managed cloud services, and combined usage with Apache Kafka make MQTT one of the most relevant IoT standards across verticals like manufacturing, automotive, aviation, logistics, and smart city.
But don't get fooled by architecture diagrams and theory. For example, most diagrams for MQTT and Sparkplug show integrations with the ERP (e.g., SAP) and the data lake (e.g., Snowflake). Should you really integrate directly from the OT world into the analytics platform? Most of the time, the answer is no because of cost, decoupling of business units, legal issues, and other reasons. This is where the combination of MQTT and Kafka (or another integration platform) shines.

How do you use MQTT and Sparkplug today? What are the use cases? Do you combine them with other technologies, like Apache Kafka, for end-to-end integration across the OT/IT pipeline? Let's connect on LinkedIn and discuss it! Join the data streaming community and stay informed about new blog posts by subscribing to my newsletter.

By Kai Wähner
Performance Optimization in Agile IoT Cloud Applications: Leveraging Grafana and Similar Tools

In today's era of Agile development and the Internet of Things (IoT), optimizing performance for applications running on cloud platforms is not just a nice-to-have; it's a necessity. Agile IoT projects are characterized by rapid development cycles and frequent updates, making robust performance optimization strategies essential for ensuring efficiency and effectiveness. This article will delve into the techniques and tools for performance optimization in Agile IoT cloud applications, with a special focus on Grafana and similar platforms.

Need for Performance Optimization in Agile IoT

Agile IoT cloud applications often handle large volumes of data and require real-time processing. Performance issues in such applications can lead to delayed responses, a poor user experience, and ultimately, a failure to meet business objectives. Therefore, continuous monitoring and optimization are vital components of the development lifecycle.

Techniques for Performance Optimization

1. Efficient Code Practices

Writing clean and efficient code is fundamental to optimizing performance. Techniques like code refactoring and optimization play a significant role in enhancing application performance. For example, identifying and removing redundant code, optimizing database queries, and reducing unnecessary loops can lead to significant improvements in performance.

2. Load Balancing and Scalability

Implementing load balancing and ensuring that the application can scale effectively during high-demand periods is key to maintaining optimal performance. Load balancing distributes incoming traffic across multiple servers, preventing any single server from becoming a bottleneck. This approach ensures that the application remains responsive even during traffic spikes.

3. Caching Strategies

Effective caching is essential for IoT applications dealing with frequent data retrieval.
Caching involves storing frequently accessed data in memory, reducing the load on the backend systems, and speeding up response times. Implementing caching mechanisms, such as in-memory caches or content delivery networks (CDNs), can greatly improve the overall performance of IoT applications.

Tools for Monitoring and Optimization

In the realm of performance optimization for Agile IoT cloud applications, having the right tools at your disposal is paramount. These tools serve as the eyes and ears of your development and operations teams, providing invaluable insights and real-time data to keep your applications running smoothly. One such cornerstone tool in this journey is Grafana, an open-source platform that empowers you with real-time dashboards and alerting capabilities. But Grafana doesn't stand alone; it collaborates seamlessly with other tools like Prometheus, New Relic, and AWS CloudWatch to offer a comprehensive toolkit for monitoring and optimizing the performance of your IoT applications. Let's explore these tools in detail and understand how they can elevate your Agile IoT development game.

Grafana

Grafana stands out as a primary tool for performance monitoring. It's an open-source platform for time-series analytics that provides real-time visualizations of operational data. Grafana's dashboards are highly customizable, allowing teams to monitor key performance indicators (KPIs) specific to their IoT applications. Here are some of its key features:

- Real-time dashboards: Grafana's real-time dashboards empower development and operations teams to track essential metrics in real time. This includes monitoring CPU usage, memory consumption, network bandwidth, and other critical performance indicators. The ability to view these metrics in real time is invaluable for identifying and addressing performance bottlenecks as they occur.
This proactive approach to monitoring ensures that issues are dealt with promptly, reducing the risk of service disruptions and poor user experiences.
- Alerts: One of Grafana's standout features is its alerting system. Users can configure alerts based on specific performance metrics and thresholds. When these metrics cross predefined thresholds or exhibit anomalies, Grafana sends notifications to the designated parties. This proactive alerting mechanism ensures that potential issues are brought to the team's attention immediately, allowing for rapid response and mitigation. Whether it's a sudden spike in resource utilization or a deviation from expected behavior, Grafana's alerts keep the team informed and ready to take action.
- Integration: Grafana's strength lies in its ability to seamlessly integrate with a wide range of data sources. This includes popular tools and databases such as Prometheus, InfluxDB, AWS CloudWatch, and many others. This integration capability makes Grafana a versatile tool for monitoring various aspects of IoT applications. By connecting to these data sources, Grafana can pull in data, perform real-time analysis, and present the information in customizable dashboards. This flexibility allows development teams to tailor their monitoring to the specific needs of their IoT applications, ensuring that they can capture and visualize the most relevant data for performance optimization.

Complementary Tools

- Prometheus: Prometheus is a powerful monitoring tool often used in conjunction with Grafana. It specializes in recording real-time metrics in a time-series database, which is essential for analyzing the performance of IoT applications over time. Prometheus collects data from various sources and allows you to query and visualize this data using Grafana, providing a comprehensive view of application performance.
- New Relic: New Relic provides in-depth application performance insights, offering real-time analytics and detailed performance data.
It's particularly useful for detecting and diagnosing complex application performance issues. New Relic's extensive monitoring capabilities can help IoT development teams identify and address performance bottlenecks quickly.
- AWS CloudWatch: For applications hosted on AWS, CloudWatch offers native integration, providing insights into application performance and operational health. CloudWatch provides a range of monitoring and alerting capabilities, making it a valuable tool for ensuring the reliability and performance of IoT applications deployed on the AWS platform.

Implementing Performance Optimization in Agile IoT Projects

To successfully optimize performance in Agile IoT projects, consider the following best practices:

Integrate Tools Early

Incorporate tools like Grafana during the early stages of development to continuously monitor and optimize performance. Early integration ensures that performance considerations are ingrained in the project's DNA, making it easier to identify and address issues as they arise.

Adopt a Proactive Approach

Use real-time data and alerts to proactively address performance issues before they escalate. By setting up alerts for critical performance metrics, you can respond swiftly to anomalies and prevent them from negatively impacting user experiences.

Iterative Optimization

In line with Agile methodologies, performance optimization should be iterative. Regularly review and adjust strategies based on performance data. Continuously gather feedback from monitoring tools and make data-driven decisions to refine your application's performance over time.

Collaborative Analysis

Encourage cross-functional teams, including developers, operations, and quality assurance (QA) personnel, to collaboratively analyze performance data and implement improvements. Collaboration ensures that performance optimization is not siloed but integrated into every aspect of the development process.
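To make the caching strategy from earlier concrete, here is a minimal, hedged sketch of an in-memory TTL cache. The class and names are hypothetical, not a library API; the optional `now` parameter just makes expiry deterministic for testing:

```javascript
// Tiny TTL cache: serve repeated sensor reads from memory instead of
// hitting the backend on every request.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, expiresAt }
  }

  get(key, now = Date.now()) {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= now) return undefined; // missing or expired
    return entry.value;
  }

  set(key, value, now = Date.now()) {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

// Repeated reads within the TTL window avoid a backend round trip:
const cache = new TtlCache(5000);
cache.set('sensor-42/temperature', 21.7);
console.log(cache.get('sensor-42/temperature')); // 21.7
```

In a real deployment you would typically reach for Redis or a CDN instead, but the trade-off is the same: a short TTL keeps data fresh, a longer TTL cuts more backend load.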
Conclusion

Performance optimization in Agile IoT cloud applications is a dynamic and ongoing process. Tools like Grafana, Prometheus, and New Relic play pivotal roles in monitoring and improving the efficiency of these systems. By integrating these tools into the Agile development lifecycle, teams can ensure that their IoT applications not only meet but exceed performance expectations, thereby delivering seamless and effective user experiences. As the IoT landscape continues to grow, the importance of performance optimization in this domain cannot be overstated, making it a key factor for success in Agile IoT cloud application development. Embracing these techniques and tools will not only enhance the performance of your IoT applications but also contribute to the overall success of your projects in this ever-evolving digital age.

By Deep Manishkumar Dave
Bridging IoT and Cloud: Enhancing Connectivity With Kong's TCPIngress in Kubernetes

In the rapidly evolving landscape of the Internet of Things (IoT) and cloud computing, organizations are constantly seeking efficient ways to bridge these two realms. The IoT space, particularly in applications like GPS-based vehicle tracking systems, demands robust, seamless connectivity to cloud-native applications to process, analyze, and leverage data in real time. UniGPS Solutions, a pioneer in IoT platforms for vehicle tracking, utilizes a Kubernetes cluster as its cloud-native infrastructure. A key component in ensuring seamless connectivity between IoT devices and cloud services in this setup is Kong's TCPIngress, an integral part of the Kong Ingress Controller.

The Role of TCPIngress in IoT-Cloud Connectivity

Kong's TCPIngress resource is designed to handle TCP traffic, making it an ideal solution for IoT applications that communicate over TCP, such as GPS trackers in vehicles. By enabling TCP traffic management, TCPIngress facilitates direct, efficient communication between IoT devices and the cloud-native applications that process their data. This is crucial for real-time monitoring and analytics of vehicle fleets, as provided by Spring Boot-based microservices in UniGPS' solution.

How TCPIngress Works

TCPIngress acts as a gateway for TCP traffic, routing it from IoT devices to the appropriate backend services running in a Kubernetes cluster. It leverages Kong's powerful proxying capabilities to ensure that TCP packets are securely and efficiently routed to the correct destination, without the overhead of HTTP protocols. This direct TCP handling is especially beneficial for the low-latency, high-throughput scenarios typical in IoT applications.

Implementing TCPIngress in UniGPS' Kubernetes Cluster

To integrate TCPIngress with UniGPS' Kubernetes cluster, we start by deploying the Kong Ingress Controller, which automatically manages Kong's configuration based on Kubernetes resources.
Here's a basic example of how to deploy TCPIngress for a GPS tracking application:

YAML

apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  name: gps-tracker-tcpingress
  namespace: unigps
spec:
  rules:
  - port: 5678
    backend:
      serviceName: gps-tracker-service
      servicePort: 5678

In this example, gps-tracker-tcpingress is a TCPIngress resource that routes TCP traffic on port 5678 to the gps-tracker-service. This service then processes the incoming GPS packets from the vehicle tracking devices.

Security and Scalability With TCPIngress

Security is paramount in IoT applications, given the sensitive nature of data like vehicle locations. Kong's TCPIngress supports TLS termination, allowing encrypted communication between IoT devices and the Kubernetes cluster. This ensures that GPS data packets are securely transmitted over the network. To configure TLS for TCPIngress, you can add a TLS section to the TCPIngress resource:

YAML

spec:
  tls:
  - hosts:
    - gps.unigps.io
    secretName: gps-tls-secret
  rules:
  - port: 5678
    backend:
      serviceName: gps-tracker-service
      servicePort: 5678

This configuration enables TLS for the TCPIngress, using a Kubernetes secret (gps-tls-secret) that contains the TLS certificate for gps.unigps.io.

Scalability is another critical factor in IoT-cloud connectivity. The deployment of TCPIngress with Kong's Ingress Controller enables auto-scaling of backend services based on load, ensuring that the infrastructure can handle varying volumes of GPS packets from the vehicle fleet.

Monitoring and Analytics

Integrating TCPIngress in the UniGPS platform not only enhances connectivity but also facilitates advanced monitoring and analytics. By leveraging Kong's logging plugins, it's possible to capture detailed metrics about the TCP traffic, such as latency and throughput. This data can be used to monitor the health and performance of the IoT-cloud communication and to derive insights for optimizing vehicle fleet operations.
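As a hedged sketch of how such a logging plugin could be wired up (the service name, namespace, and collector endpoint below are hypothetical, not part of UniGPS' actual setup), Kong's bundled tcp-log plugin can be declared as a KongPlugin resource and attached to the TCPIngress via the konghq.com/plugins annotation:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: gps-tcp-log
  namespace: unigps
plugin: tcp-log
config:
  # Hypothetical in-cluster log collector receiving Kong's JSON log entries
  host: log-collector.unigps.svc
  port: 9999
```

Adding the annotation konghq.com/plugins: gps-tcp-log to the TCPIngress resource would then stream a log entry per proxied connection to the collector, which can feed the latency and throughput dashboards described above.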
Conclusion

The integration of IoT devices with cloud-native applications presents unique challenges in terms of connectivity, security, and scalability. Kong's TCPIngress offers a robust solution to these challenges, enabling seamless, secure, and efficient communication between IoT devices and cloud services. By implementing TCPIngress in Kubernetes clusters, organizations like UniGPS can leverage the full potential of their IoT platforms, enhancing real-time vehicle tracking, monitoring, and analytics capabilities. This strategic approach to bridging IoT and cloud not only optimizes operations but also drives innovation and competitive advantage in the IoT space. In summary, Kong's TCPIngress is a cornerstone in building a future-proof, scalable IoT-cloud infrastructure, empowering businesses to harness the power of their data in unprecedented ways. Through strategic deployment and configuration, TCPIngress paves the way for next-generation IoT applications, making the promise of a truly connected world a reality.

By Rajesh Gheware
Real-Time Communication Protocols: A Developer's Guide With JavaScript

Real-time communication has become an essential aspect of modern applications, enabling users to interact with each other instantly. From video conferencing and online gaming to live customer support and collaborative editing, real-time communication is at the heart of today's digital experiences. In this article, we will explore popular real-time communication protocols, discuss when to use each one, and provide examples and code snippets in JavaScript to help developers make informed decisions.

WebSocket Protocol

WebSocket is a widely used protocol that enables full-duplex communication between a client and a server over a single, long-lived connection. This protocol is ideal for real-time applications that require low latency and high throughput, such as chat applications, online gaming, and financial trading platforms.

Example

Let's create a simple WebSocket server using Node.js and the ws library.

1. Install the ws library:

Shell
npm install ws

2. Create a WebSocket server in server.js:

JavaScript
const WebSocket = require('ws');

const server = new WebSocket.Server({ port: 8080 });

server.on('connection', (socket) => {
  console.log('Client connected');

  socket.on('message', (message) => {
    console.log(`Received message: ${message}`);
  });

  socket.send('Welcome to the WebSocket server!');
});

3. Run the server:

Shell
node server.js

WebRTC

WebRTC (Web Real-Time Communication) is an open-source project that enables peer-to-peer communication directly between browsers or other clients. WebRTC is suitable for applications that require high-quality audio, video, or data streaming, such as video conferencing, file sharing, and screen sharing.

Example

Let's create a simple WebRTC-based video chat application using HTML and JavaScript.
In index.html:

HTML
<!DOCTYPE html>
<html>
<head>
  <title>WebRTC Video Chat</title>
</head>
<body>
  <video id="localVideo" autoplay muted></video>
  <video id="remoteVideo" autoplay></video>
  <script src="main.js"></script>
</body>
</html>

In main.js:

JavaScript
const localVideo = document.getElementById('localVideo');
const remoteVideo = document.getElementById('remoteVideo');

// Get media constraints
const constraints = { video: true, audio: true };

// Create a new RTCPeerConnection
const peerConnection = new RTCPeerConnection();

// Set up event listeners
peerConnection.onicecandidate = (event) => {
  if (event.candidate) {
    // Send the candidate to the remote peer via your signaling channel
  }
};

peerConnection.ontrack = (event) => {
  remoteVideo.srcObject = event.streams[0];
};

// Get user media and set up the local stream
navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
  localVideo.srcObject = stream;
  stream.getTracks().forEach((track) => peerConnection.addTrack(track, stream));
});

MQTT

MQTT (Message Queuing Telemetry Transport) is a lightweight publish-subscribe protocol designed for low-bandwidth, high-latency, or unreliable networks. MQTT is an excellent choice for IoT devices, remote monitoring, and home automation systems.

Example

Let's create a simple MQTT client using JavaScript and the mqtt library.

1. Install the mqtt library:

Shell
npm install mqtt

2. Create an MQTT client in client.js:

JavaScript
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://test.mosquitto.org');

client.on('connect', () => {
  console.log('Connected to the MQTT broker');

  // Subscribe to a topic
  client.subscribe('myTopic');

  // Publish a message
  client.publish('myTopic', 'Hello, MQTT!');
});

client.on('message', (topic, message) => {
  console.log(`Received message on topic ${topic}: ${message.toString()}`);
});

3. Run the client:

Shell
node client.js

Conclusion

Choosing the right real-time communication protocol depends on the specific needs of your application.
WebSocket is ideal for low-latency, high-throughput client-server applications; WebRTC excels at peer-to-peer audio, video, and data streaming; and MQTT is a natural fit for IoT devices and scenarios with limited network resources. By understanding the strengths and weaknesses of each protocol and building on the JavaScript examples provided, developers can create better, more efficient real-time communication experiences. Happy learning!
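The trade-offs summarized in this conclusion can be condensed into a small, admittedly simplified decision helper. The field names and the priority order are invented for illustration, not a normative rule:

```javascript
// Simplified protocol chooser based on the trade-offs discussed above.
// All flags are illustrative; real selection also weighs infrastructure and scale.
function chooseProtocol({ peerToPeerMedia, constrainedNetwork, needsServerPush }) {
  if (peerToPeerMedia) return 'WebRTC';    // browser-to-browser audio/video/data
  if (constrainedNetwork) return 'MQTT';   // low bandwidth or unreliable links, IoT
  if (needsServerPush) return 'WebSocket'; // low-latency client-server messaging
  return 'HTTP';                           // plain request/response is enough
}

chooseProtocol({ peerToPeerMedia: false, constrainedNetwork: true, needsServerPush: true });
// → 'MQTT'
```

For example, a battery-powered sensor on a flaky cellular link hits the constrainedNetwork branch and lands on MQTT even though it also wants server push.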

By Arun Pandey DZone Core
Machine Learning at the Edge: Enabling AI on IoT Devices

In today's fast-paced world, the Internet of Things (IoT) has become a ubiquitous presence, connecting everyday devices and providing real-time data insights. Within the IoT ecosystem, one of the most exciting developments is the integration of artificial intelligence (AI) and machine learning (ML) at the edge. This article explores the challenges and solutions in implementing machine learning models on resource-constrained IoT devices, with a focus on software engineering considerations for model optimization and deployment.

Introduction

The convergence of IoT and AI has opened up a realm of possibilities, from autonomous drones to smart home devices. However, IoT devices, often located at the edge of the network, typically have limited computational resources, making the deployment of resource-intensive machine learning models a significant challenge. Nevertheless, this challenge can be overcome through efficient software engineering practices.

Challenges of ML on IoT Devices

Limited computational resources: IoT devices are usually equipped with constrained CPUs, memory, and storage. Running complex ML models directly on these devices can lead to performance bottlenecks and resource exhaustion.

Power constraints: Many IoT devices operate on battery power, which imposes stringent power budgets. Energy-efficient ML algorithms and model architectures are essential to extend device lifespans.

Latency requirements: Certain IoT applications, such as autonomous vehicles or real-time surveillance systems, demand low-latency inferencing. Meeting these requirements on resource-constrained devices is a challenging task.

Software Engineering Considerations

To address these challenges and enable AI on IoT devices, software engineers need to adopt a holistic approach that includes model optimization, deployment strategies, and efficient resource management.

1. Model Optimization

Quantization: Quantization reduces the precision of model weights and activations. By converting floating-point values to fixed-point or integer representations, the model's memory footprint can be significantly reduced. Tools like TensorFlow Lite and ONNX Runtime offer quantization support.

Model compression: Techniques such as pruning, knowledge distillation, and weight sharing can reduce the size of ML models while preserving most of their accuracy. These techniques are particularly useful for edge devices with limited storage.

Model selection: Choose lightweight ML models designed for edge deployment, such as MobileNet or EfficientNet, or models from the TinyML ecosystem. These are optimized for inference on resource-constrained devices.

2. Hardware Acceleration

Leverage hardware accelerators whenever possible. Many IoT devices come with specialized hardware like GPUs, TPUs, or NPUs that can significantly speed up inference tasks. Software engineers should tailor their ML deployments to utilize these resources efficiently.

3. Edge-To-Cloud Strategies

Consider a hybrid approach where only critical or time-sensitive processing is performed at the edge, while less time-critical tasks are offloaded to cloud servers. This helps balance resource constraints and latency requirements.

4. Continuous Monitoring and Updating

Implement mechanisms for continuous monitoring of model performance on IoT devices. Set up automated pipelines for model updates, ensuring that devices always have access to the latest, most accurate models.

5. Energy Efficiency

Optimize not only for inference speed but also for energy efficiency. IoT devices must strike a balance between model accuracy and power consumption. Techniques like dynamic voltage and frequency scaling (DVFS) can help manage power usage.

Deployment Considerations

Model packaging: Package ML models into lightweight formats suitable for deployment on IoT devices.
Common formats include TensorFlow Lite, ONNX, and PyTorch Mobile. Ensure that the chosen format is compatible with the target hardware and software stack.

Runtime libraries: Integrate runtime libraries that support efficient model execution. Libraries like TensorFlow Lite, Core ML, or OpenVINO provide optimized runtime environments for ML models on various IoT platforms.

Firmware updates: Implement a robust firmware update mechanism to ensure that deployed IoT devices can receive updates, including model updates, security patches, and bug fixes, without user intervention.

Security: Security is paramount in IoT deployments. Implement encryption and authentication mechanisms to protect both the models and the data transmitted between IoT devices and the cloud. Regularly audit and update security measures to stay ahead of emerging threats.

Case Study: Smart Cameras

To illustrate the principles discussed, let's consider the example of smart cameras used for real-time object detection in smart cities. These cameras are often placed at intersections and require low-latency, real-time object detection capabilities. Software engineers working on these smart cameras face the challenge of deploying efficient object detection models on resource-constrained devices. Here's how they might approach the problem:

Model selection: Choose a lightweight object detection model like MobileNet SSD or YOLO-Tiny, optimized for real-time inference on edge devices.

Model optimization: Apply quantization and model compression techniques to reduce the model's size and memory footprint. Fine-tune the model for accuracy and efficiency.

Hardware acceleration: Utilize the GPU or specialized neural processing unit (NPU) on the smart camera hardware to accelerate inference tasks, further reducing latency.

Edge-to-cloud offloading: Implement a strategy where basic object detection occurs at the edge while more complex analytics, like object tracking or data aggregation, are performed in the cloud.

Continuous monitoring and updates: Set up a monitoring system to track model performance over time and trigger model updates as needed. Implement an efficient firmware update mechanism for devices in the field.

Security: Implement strong encryption and secure communication protocols to protect both the camera and the data it captures. Regularly update the camera's firmware to patch security vulnerabilities.

The integration of machine learning at the edge of IoT devices holds immense potential for transforming industries, from healthcare to agriculture and from manufacturing to transportation. However, the success of AI on IoT devices heavily relies on efficient software engineering practices. Software engineers must navigate the challenges posed by resource-constrained devices, power limitations, and latency requirements. By optimizing ML models, leveraging hardware acceleration, adopting edge-to-cloud strategies, and prioritizing security, they can enable AI on IoT devices that enhance our daily lives and drive innovation in countless domains.
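To make the quantization idea from the article concrete, here is a minimal, framework-free sketch of symmetric int8 post-training quantization. The weights array is invented for illustration; real toolchains such as TensorFlow Lite compute scales per tensor (and often a zero point) automatically.

```javascript
// Symmetric int8 quantization: map floats in [-maxAbs, maxAbs] to [-127, 127].
function quantize(weights) {
  const maxAbs = Math.max(...weights.map(Math.abs));
  const scale = maxAbs / 127 || 1; // guard against an all-zero tensor
  const q = weights.map((w) => Math.round(w / scale));
  return { q, scale };
}

// Recover approximate floats: each value is off by at most scale / 2.
function dequantize(q, scale) {
  return q.map((v) => v * scale);
}

const weights = [0.12, -0.5, 0.33, 0.91, -0.07]; // invented example weights
const { q, scale } = quantize(weights);
const restored = dequantize(q, scale);
// restored ~ weights, but each q value fits in 1 byte instead of 4
```

In a real pipeline, the scale is stored alongside the int8 tensor and applied by the runtime; storing q in an Int8Array is what yields the roughly 4x memory reduction over float32 weights mentioned above.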

By Deep Manishkumar Dave

Top IoT Experts


Tim Spann

Principal Developer Advocate,
Zilliz

Tim Spann is a Principal Developer Advocate at Zilliz for Milvus, Attu, and Towhee. He works with Milvus, Towhee, Attu, Python, generative AI, Apache NiFi, Apache Pulsar, Apache Kafka, Apache Flink, Flink SQL, Apache Pinot, Trino, Apache Iceberg, Delta Lake, Apache Spark, big data, IoT, cloud, AI/DL, machine learning, and deep learning. Tim has more than ten years of experience with IoT, big data, distributed computing, messaging, streaming technologies, and Java programming. Previously, he was a Developer Advocate at StreamNative, Principal DataFlow Field Engineer at Cloudera, a Senior Solutions Engineer at Hortonworks, a Senior Solutions Architect at AirisData, a Senior Field Engineer at Pivotal, and a Team Leader at HPE. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton and NYC on big data, cloud, IoT, deep learning, streaming, NiFi, blockchain, and Spark. Tim is a frequent speaker at conferences such as ApacheCon, DeveloperWeek, and Pulsar Summit. He holds a BS and MS in computer science.

Alejandro Duarte

Developer Advocate,
MariaDB plc

Alejandro Duarte is a Software Engineer, published author, and award winner. He currently works for MariaDB plc as a Developer Relations Engineer. Starting his coding journey at 13 with BASIC on a rudimentary black screen, Alejandro quickly transitioned to C, C++, and Java during his academic years at the National University of Colombia. Relocating first to the UK and then to Finland, Alejandro deepened his involvement in the open-source community. He's a recognized figure in Java circles, credited with articles and videos amassing millions of views, and presentations at international events. You can contact him through his personal blog at programmingbrain.com and on X (Twitter) @alejandro_du.

Kai Wähner

Technology Evangelist,
Confluent

Kai Waehner works as a Technology Evangelist at Confluent. Kai's main areas of expertise are big data analytics, machine learning and deep learning, messaging, integration, microservices, the Internet of Things, stream processing, and blockchain. He is a regular speaker at international conferences such as JavaOne, O'Reilly Software Architecture, and ApacheCon, writes articles for professional journals, and shares his experiences with new technologies on his blog (www.kai-waehner.de/blog). Contact and references: kontakt@kai-waehner.de / @KaiWaehner / www.kai-waehner.de

The Latest IoT Topics

Mitigate the Security Challenges of Telecom 5G IoT Microservice Pods Architecture Using Istio
Discover the essential features of Istio Service Mesh Architecture and master the configuration of Istio for cellular IoT Microservices pods.
July 9, 2024
by BINU SUDHAKARAN PILLAI
· 659 Views · 1 Like
This Is How SSL Certificates Work: HTTPS Explained in 15 Minutes
The world of online security may seem complex. In this post, gain an understanding of the basics of how SSL certificates work and why HTTPS is essential.
July 9, 2024
by Dinesh Arora
· 520 Views · 1 Like
Enhancing Security With ZTNA in Hybrid and Multi-Cloud Deployments
This article takes a look at the modern networking concept of ZTNA and how security is its core focus with cloud and on-premise infrastructure.
July 9, 2024
by Sanjay Poddar
· 646 Views · 1 Like
Strengthening Web Application Security With Predictive Threat Analysis in Node.js
Enhance your Node.js web application security by implementing predictive threat analysis using tools like Express.js, TensorFlow.js, JWT, and MongoDB.
July 5, 2024
by Sameer Danave
· 2,354 Views · 1 Like
Step-By-Step Guide: Configuring IPsec Over SD-WAN on FortiGate and Unveiling Its Benefits
This article outlines the steps for implementing IPSec over SD-WAN and its advantages, and use cases in today's modern network with a focus on security.
July 5, 2024
by Sanjay Poddar
· 1,734 Views · 1 Like
Comparing Axios, Fetch, and Angular HttpClient for Data Fetching in JavaScript
In this article, we will explore how to use these tools for data fetching, including examples of standard application code and error handling.
July 4, 2024
by Nitesh Upadhyaya
· 2,142 Views · 3 Likes
Build Your Business App With BPMN 2.0
This tutorial demonstrates how to build a business application with the Business Process Modelling Notation (BPMN 2.0), a model-driven approach.
July 3, 2024
by Ralph Soika
· 2,617 Views · 2 Likes
Understanding Properties of Zero Trust Networks
A practical guide to exploring in detail the "Security Automation" property of Zero Trust Networks, by looking at scenarios, technology stack, and examples.
July 3, 2024
by Abhishek Goswami
· 1,905 Views · 1 Like
Performance and Scalability Analysis of Redis and Memcached
This article benchmarks Redis and Memcached, popular in-memory data stores, to help decision-makers choose the best solution for their needs.
July 2, 2024
by RAHUL CHANDEL
· 3,135 Views · 2 Likes
Apache Hudi: A Deep Dive With Python Code Examples
Explore Apache Hudi, an open-source data management framework providing efficient data ingestion and real-time analytics on large-scale datasets stored in data lakes.
July 2, 2024
by Harsh Daiya
· 2,034 Views · 1 Like
Operational Excellence Best Practices
The article explores efforts to stabilize a critical service by enhancing observability and implementing service protection.
July 2, 2024
by Poonam Pradhan
· 1,537 Views · 2 Likes
GBase 8a Implementation Guide: Resource Assessment
The storage space requirements for a GBase cluster are calculated based on the data volume, the choice of compression algorithm, and the number of cluster replicas.
July 1, 2024
by Cong Li
· 2,202 Views · 1 Like
Leveraging Microsoft Graph API for Unified Data Access and Insights
This article explores the capabilities of Microsoft Graph API and how it can be utilized to unify data access and gain insights.
June 28, 2024
by Naga Santhosh Reddy Vootukuri DZone Core
· 9,487 Views · 2 Likes
Partitioning Hot and Cold Data Tier in Apache Kafka Cluster for Optimal Performance
Discover how by partitioning the hot and cold data tiers in the Apache Kafka Cluster, we can optimize storage resources based on data characteristics.
June 28, 2024
by Gautam Goswami DZone Core
· 5,134 Views · 2 Likes
Hammerspace Empowers GPU Computing With Enhanced S3 Data Orchestration
Hammerspace adds S3 support to its Global Data Platform, enabling automated orchestration of object data to GPU resources alongside file data.
June 27, 2024
by Tom Smith DZone Core
· 6,399 Views · 1 Like
Cypress Debugging Hacks: Tips and Tricks for Speedy Resolution
Debugging Cypress tests can help identify issues in test code and the application under test. Here, learn more about the Cypress debugger and other dev tools.
June 25, 2024
by Kailash Pathak DZone Core
· 2,666 Views · 3 Likes
Data Processing vs. Process Management vs. AI?
Modern process management can help to build the bridge between data processing and Large Language Models (LLMs) in a data-driven business landscape.
June 25, 2024
by Ralph Soika
· 2,484 Views · 1 Like
When Not To Use Apache Kafka (Lightboard Video)
When not to use Apache Kafka: dos and don'ts, whether you use open source, Confluent, Amazon MSK, Event Hubs, Redpanda, Warpstream, et al.
June 25, 2024
by Kai Wähner DZone Core
· 1,874 Views · 1 Like
Mastering Unstructured Data Chaos With Datadobi StorageMAP 7.0
StorageMAP 7.0 simplifies unstructured data management, enabling infrastructure optimization, AI data feeds, data resilience, and operational excellence.
June 24, 2024
by Tom Smith DZone Core
· 1,424 Views · 1 Like
MaxLinear Empowers High-Speed Connectivity and Data Acceleration Solutions for Next-Gen Computing
Advanced connectivity solutions and Panther III storage accelerator empower software engineers to build high-performance, data-driven computing systems.
June 22, 2024
by Tom Smith DZone Core
· 2,710 Views · 1 Like
