In the SDLC, deployment is the final lever that must be pulled to make an application or system ready for use. Whether it's a bug fix or new release, the deployment phase is the culminating event to see how something works in production. This Zone covers resources on all developers’ deployment necessities, including configuration management, pull requests, version control, package managers, and more.
Context Do you crave hands-on experience with Redis clusters? Perhaps you're eager to learn its intricacies or conduct targeted testing and troubleshooting. A local Redis cluster empowers you with that very control. By setting it up on your own machine, you gain the freedom to experiment, validate concepts, and delve deeper into its functionality. This guide will equip you with the knowledge to quickly create and manage a Redis cluster on your local machine, paving the way for a productive and insightful learning journey. Install Redis The first step is to install a Redis server locally. Later, the cluster creation commands will use Redis instances as building blocks and combine them into a cluster. Mac The easiest way is to install using Homebrew. Use the following command to install Redis on your Mac. Shell brew install redis Linux Use the following commands to install. Shell sudo apt update sudo apt install redis-server From the Source If you need a specific version, you can use this method of installation. For this, use the following steps: Download the latest Redis source code from the official website. Unpack the downloaded archive. Navigate to the extracted directory in your terminal. Run the following commands: Shell make sudo make install Create Cluster One-Time Steps Clone the git repository. Go to the directory where you cloned the repository, then go to the following directory: Shell cd <path to local redis repository>/redis/utils/create-cluster Modify create-cluster with the path to your redis-server binary. Shell vi create-cluster Replace BIN_PATH="$SCRIPT_DIR/../../src/" with BIN_PATH="/usr/local/bin/" Steps to Create/Start/Stop/Clean Cluster These steps are used whenever you need to use a Redis cluster. Start the Redis Instances Shell ./create-cluster start Create the Cluster Shell echo "yes" | ./create-cluster create Tip You can create an alias and add it to your shell configuration file (~/.bashrc or ~/.zshrc). Example: Shell open ~/.zshrc Add the following to this file (single quotes keep the inner double quotes intact). Shell alias cluster_start='./create-cluster start && echo "yes" | ./create-cluster create' Open a new terminal and run the following. Shell source ~/.zshrc Now you can use “cluster_start” in the command line and it will start and create the cluster for you. Stop the Cluster Shell ./create-cluster stop Clean Up Clears previous cluster data for a fresh start. Shell ./create-cluster clean Tip Similarly, you can create an alias as below to stop the cluster and clean the cluster data files. Shell alias cluster_stop='./create-cluster stop && ./create-cluster clean' How To Create the Cluster With a Custom Number of Nodes By default, the create-cluster script creates 6 nodes with 3 primaries and 3 replicas. If you need to change the number of nodes for special testing or troubleshooting, you can modify the script instead of manually adding nodes. Shell vi create-cluster Edit the following to the desired number of nodes for the cluster. NODES=6 Also, by default, it creates 1 replica per primary. You can change that as well by changing the value in the same script (create-cluster) to the desired value. REPLICAS=1 Create Cluster With Custom Configuration Redis provides various options to configure Redis servers the way you want. All of those are present in the redis.conf file.
To customize those options, follow these steps: Edit the redis.conf With Desired Configurations Shell vi <path to local redis repository>/redis/redis.conf Edit the create-cluster Script Shell vi create-cluster Modify the command in the start and restart options of the script to add the following: ../../redis.conf Before Modification Shell $BIN_PATH/redis-server --port $PORT --protected-mode $PROTECTED_MODE --cluster-enabled yes --cluster-config-file nodes-${PORT}.conf --cluster-node-timeout $TIMEOUT --appendonly yes --appendfilename appendonly-${PORT}.aof --appenddirname appendonlydir-${PORT} --dbfilename dump-${PORT}.rdb --logfile ${PORT}.log --daemonize yes ${ADDITIONAL_OPTIONS} After Modification Shell $BIN_PATH/redis-server ../../redis.conf --port $PORT --protected-mode $PROTECTED_MODE --cluster-enabled yes --cluster-config-file nodes-${PORT}.conf --cluster-node-timeout $TIMEOUT --appendonly yes --appendfilename appendonly-${PORT}.aof --appenddirname appendonlydir-${PORT} --dbfilename dump-${PORT}.rdb --logfile ${PORT}.log --daemonize yes ${ADDITIONAL_OPTIONS} For reference, see the start option after modification shown above. References GitHub: Redis (redis.git)
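As a quick sanity check after starting and creating the cluster, you can connect with redis-cli in cluster mode and confirm that the slots are assigned. This is a minimal sketch assuming the default ports used by the create-cluster script (nodes on 30001-30006); adjust the port if you changed PORT in the script.
Shell
# connect to the first node in cluster mode (-c follows slot redirections)
redis-cli -c -p 30001 cluster info    # expect cluster_state:ok
redis-cli -c -p 30001 cluster nodes   # lists the 3 primaries and 3 replicas
redis-cli -c -p 30001 set greeting hello   # the key may be redirected to another node
redis-cli -c -p 30001 get greeting
If cluster_state is not ok, stop the cluster, run ./create-cluster clean, and create it again.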
Continuous integration and continuous delivery (CI/CD) capabilities are basic expectations for modern development teams who want fast feedback on their changes and rapid deployment to the cloud. In recent years, we’ve seen the growing adoption of GitHub Actions, a feature-rich CI/CD system that dovetails nicely with cloud hosting platforms such as Heroku. In this article, we’ll demonstrate the power of these tools used in combination — specifically how GitHub Actions can be used to quickly deploy a Django application to the cloud. A Quick Introduction to Django Django is a Python web application framework that’s been around since the early 2000s. It follows a model-view-controller (MVC) architecture and is known as the “batteries-included” web framework for Python. That’s because it has lots of capabilities, including a strong object-relational mapping (ORM) for abstracting database operations and models. It also has a rich templating system with many object-oriented design features. Instagram, Nextdoor, and Bitbucket are examples of applications built using Django. Clearly, if Django is behind Instagram, then we know that it can scale well. (Instagram hovers around being the fourth most visited site in the world!) Security is another built-in feature; authentication, cross-site scripting protection, and CSRF features all come out of the box and are easy to configure. Django is over 20 years old, which means it has a large dev community and documentation base — both helpful when you’re trying to figure out why something has gone awry. Downsides to Django? Yes, there are a few, with the biggest one being a steeper learning curve than other web application frameworks. You need to know parts of everything in the system to get it to work. For example, to get a minimal “hello world” page up in your browser, you need to set up the ORM, templates, views, routes, and a few other things. Contrast that with a framework like Flask (which is, admittedly, less feature-rich), where less than 20 lines of code can get your content displayed on a web page. Building Our Simple Django Application If you’re not familiar with Django, their tutorial is a good place to start learning how to get a base system configured and running. For this article, I’ve created a similar system using a PostgreSQL database and a few simple models and views. But we won’t spend time describing how to set up a complete Django application. That’s what the Django tutorial is for. My application here is different from the tutorial in that I use PostgreSQL — instead of the default SQLite — as the database engine. The trouble with SQLite (besides poor performance in a web application setting) is that it is file-based, and the file resides on the same server as the web application that uses it. Most cloud platforms assume a stateless deployment, meaning the container that holds the application is wiped clean and refreshed every deployment. So, your database should run on a separate server from the web application. PostgreSQL will provide that for us. The source code for this mini-demo project is available in this GitHub repository. Install Python Dependencies After you have cloned the repository, start up a virtual environment and install the Python dependencies for this project: Plain Text (venv) ~/project$ pip install -r requirements.txt Set up Django To Use PostgreSQL To use PostgreSQL with Django, we use the following packages: psycopg2 provides the engine drivers for Postgres. 
dj-database-url helps us set up the database connection string from an environment variable (useful for local testing and cloud deployments). In our Django app, we navigate to mysite/mysite/ and modify settings.py (around line 78) to use PostgreSQL. Plain Text DATABASES = {"default": dj_database_url.config(conn_max_age=600, ssl_require=True)} We’ll start by testing out our application locally. So, on your local PostgreSQL instance, create a new database. Plain Text postgres=# create database django_test_db; Assuming our PostgreSQL username is dbuser and the password is password, our DATABASE_URL will look something like this: Plain Text postgres://dbuser:password@localhost:5432/django_test_db From here, we need to run our database migrations to set up our tables. Plain Text (venv) ~/project$ \ DATABASE_URL=postgres://dbuser:password@localhost:5432/django_test_db\ python mysite/manage.py migrate Operations to perform: Apply all migrations: admin, auth, contenttypes, movie_journal, sessions Running migrations: Applying contenttypes.0001_initial... OK Applying auth.0001_initial... OK Applying admin.0001_initial... OK Applying admin.0002_logentry_remove_auto_add... OK Applying admin.0003_logentry_add_action_flag_choices... OK Applying contenttypes.0002_remove_content_type_name... OK Applying auth.0002_alter_permission_name_max_length... OK Applying auth.0003_alter_user_email_max_length... OK Applying auth.0004_alter_user_username_opts... OK Applying auth.0005_alter_user_last_login_null... OK Applying auth.0006_require_contenttypes_0002... OK Applying auth.0007_alter_validators_add_error_messages... OK Applying auth.0008_alter_user_username_max_length... OK Applying auth.0009_alter_user_last_name_max_length... OK Applying auth.0010_alter_group_name_max_length... OK Applying auth.0011_update_proxy_permissions... OK Applying auth.0012_alter_user_first_name_max_length... OK Applying movie_journal.0001_initial... OK Applying sessions.0001_initial... OK Test Application Locally Now that we have set up our database, we can spin up our application and test it in the browser. Plain Text (venv) ~/project$ \ DATABASE_URL=postgres://dbuser:password@localhost:5432/django_test_db\ python mysite/manage.py runserver … Django version 4.2.11, using settings 'mysite.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. In our browser, we visit http://127.0.0.1:8000/. This is what we see: We’re up and running! We can go through the flow of creating a new journal entry. Looking in our database, we see the record for our new entry. Plain Text django_test_db=# select * from movie_journal_moviejournalentry; -[ RECORD 1 ]+------------------------------------------------------------- id | 1 title | Best of the Best imdb_link | https://www.imdb.com/title/tt0096913/ is_positive | t review | Had some great fight scenes. The plot was amazing. release_year | 1989 created_at | 2024-03-29 09:36:59.24143-07 updated_at | 2024-03-29 09:36:59.241442-07 Our application is working. We’re ready to deploy. Let’s walk through how to deploy using GitHub Actions directly from our repository on commit. The Power of GitHub Actions Over the years, GitHub Actions has built up a large library of jobs/workflows, providing lots of reusable code and conveniences for developers. With CI/CD, a development team can get fast feedback as soon as code changes are committed and pushed. Typical jobs found in a CI pipeline include style checkers, static analysis tools, and unit test runners.
All of these help enforce good coding practices and adherence to team standards. Yes, all these tools existed before. But now, developers don’t need to worry about manually running them or waiting for them to finish. Push your changes to the remote branch, and the job starts automatically. Go on to focus on your next coding task as GitHub runs the current jobs and displays their results as they come in. That’s the power of automation and the cloud, baby! Plug-And-Play GitHub Action Workflows You can even have GitHub create your job configuration file for you. Within your repository on GitHub, click Actions. You’ll see an entire library of templates, giving you pre-built workflows that could potentially fit your needs. Let’s click on the Configure button for the Pylint workflow. It looks like this: Plain Text name: Pylint on: [push] jobs: build: runs-on: ubuntu-latest strategy: matrix: python-version: ["3.8", "3.9", "3.10"] steps: - uses: actions/checkout@v3 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v3 with: python-version: ${{ matrix.python-version }} - name: Install dependencies run: | python -m pip install --upgrade pip pip install pylint - name: Analysing the code with pylint run: | pylint $(git ls-files '*.py') This configuration directs GitHub Actions to create a new workflow in your repository named Pylint. It triggers on a push to any branch. It has one job, build, that runs on the latest Ubuntu image. Then, it runs all the steps for each of the three different versions of Python specified. The steps are where the nitty-gritty work is defined. In this example, the job checks out your code, sets up the Python version, installs dependencies, and then runs the linter over your code. Let’s create our own GitHub Action workflow to deploy our application directly to Heroku. Deploying to Heroku via a GitHub Action Here’s the good news: it’s easy. First, sign up for a Heroku account and install the Heroku CLI. Login, Create App, and PostgreSQL Add-On With the Heroku CLI, we run the following commands to create our app and the PostgreSQL add-on: Plain Text $ heroku login $ heroku apps:create django-github Creating ⬢ django-github... done https://django-github-6cbf23e36b5b.herokuapp.com/ | https://git.heroku.com/django-github.git $ heroku addons:create heroku-postgresql:mini --app django-github Creating heroku-postgresql:mini on ⬢ django-github... ~$0.007/hour (max $5/month) Database has been created and is available ! This database is empty. If upgrading, you can transfer ! data from another database with pg:copy Add Heroku App Host To Allowed Hosts List in Django In our Django application settings, we need to update the list of ALLOWED_HOSTS, which represent the host/domain names that your Django site can serve. We need to add the host from our newly created Heroku app. Edit mysite/mysite/settings.py, at around line 31, to add your Heroku app host. It will look similar to this: Plain Text ALLOWED_HOSTS = ["localhost", "django-github-6cbf23e36b5b.herokuapp.com"] Don’t forget to commit this file to your repository. Procfile and requirements.txt Next, we need to add a Heroku-specific file called Procfile. This goes into the root folder of our repository. This file tells Heroku how to start up our app and run migrations.
It should have the following contents: Plain Text web: gunicorn --pythonpath mysite mysite.wsgi:application release: cd mysite && ./manage.py migrate --no-input Heroku will also need your requirements.txt file so it knows which Python dependencies to install. Get Your Heroku API Key We will need our Heroku account API key. We’ll store this on GitHub so that our GitHub Action has authorization to deploy code to our Heroku app. In your Heroku account settings, find the auto-generated API key and copy the value. Then, in your GitHub repository settings, navigate to Secrets and variables > Actions. On that page, click New repository secret. Supply a name for your repository secret. Then, paste in your Heroku API key and click Add secret. Your list of GitHub repository secrets should look like this: Create the Job Configuration File Let’s create our GitHub Action workflow. Typically, we configure CI/CD jobs with a YAML file. With GitHub Actions, this is no different. To add an action to your repository, create a .github subfolder in your project, and then create a workflows subfolder within that one. In .github/workflows/, we’ll create a file called django.yml. Your project tree should look like this: Plain Text . ├── .git │ └── … ├── .github │ └── workflows │ └── django.yml ├── mysite │ ├── manage.py │ ├── mysite │ │ ├── … │ │ └── settings.py │ └── … ├── Procfile └── requirements.txt Our django.yml file has the following contents: Plain Text name: Django CI on: push: branches: [ "main" ] jobs: release: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - uses: akhileshns/heroku-deploy@v3.13.15 with: heroku_api_key: ${{ secrets.HEROKU_API_KEY }} heroku_app_name: "<your-heroku-app-name>" heroku_email: "<your-heroku-email>" This workflow builds off of the Deploy to Heroku Action in the GitHub Actions library. In fact, using that pre-built action makes our Heroku deployment simple. The only things you need to configure in this file are your Heroku app name and account email. When we commit this file to our repo and push our main branch to GitHub, this kicks off our GitHub Action job for deploying to Heroku. In GitHub, we click the Actions tab and see the newly triggered workflow. When we click the release job in the workflow, this is what we see: Near the bottom of the output of the deploy step, we see results from the Heroku deploy: When we look at our Heroku app logs, we also see the successful deploy. And finally, when we test our Heroku-deployed app in our browser, we see that it’s up and running. Congrats! You’ve successfully deployed your Django application to Heroku via a GitHub Action! Conclusion In this article, we set up a simple Django application with a PostgreSQL database. Then, we walked through how to use GitHub Actions to deploy the application directly to Heroku on commit. Django is a feature-rich web application framework for Python. Although for some cloud platforms, it can take some time to get things configured correctly, that’s not the case when you’re deploying to Heroku with GitHub Actions. Convenient off-the-shelf tools are available in both GitHub and Heroku, and they make deploying your Django application a breeze.
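If you prefer to stay in the terminal for the secret setup and verification above, the GitHub CLI and Heroku CLI cover both steps. This is a sketch rather than part of the original walkthrough; the app name is the example app created earlier, so substitute your own.
Shell
# store the Heroku API key as a GitHub Actions secret (gh prompts for the value)
gh secret set HEROKU_API_KEY
# after pushing to main, confirm the release and watch the app logs
heroku releases --app django-github
heroku logs --tail --app django-github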
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Cloud Native: Championing Cloud Development Across the SDLC. Simplicity is a key selling point of cloud technology. Rather than worrying about racking and stacking equipment, configuring networks, and installing operating systems, developers can just click through a friendly web interface and quickly deploy an application. Of course, that friendly web interface hides serious complexity, and deploying an application is just the first and easiest step toward a performant and reliable system. Once an application grows beyond a single deployment, issues begin to creep in. New versions require database schema changes or added components, and multiple team members can change configurations. The application must also be scaled to serve more users, provide redundancy to ensure reliability, and manage backups to protect data. While it might be possible to manage this complexity using that friendly web interface, we need automated cloud orchestration to deliver consistently at speed. There are many choices for cloud orchestration, so which one is best for a particular application? Let's use a case study to consider two key decisions in the trade space: The number of different technologies we must learn and manage Our ability to migrate to a different cloud environment with minimal changes to the automation However, before we look at the case study, let's start by understanding some must-have features of any cloud automation. Cloud Orchestration Must-Haves Our goal with cloud orchestration automation is to manage the complexity of deploying and operating a cloud-native application. We want to be confident that we understand how our application is configured, that we can quickly restore an application after outages, and that we can manage changes over time with confidence in bug fixes and new capabilities while avoiding unscheduled downtime. Repeatability and Idempotence Cloud-native applications use many cloud resources, each with different configuration options. Problems with infrastructure or applications can leave resources in an unknown state. Even worse, our automation might fail due to network or configuration issues. We need to run our automation confidently, even when cloud resources are in an unknown state. This key property is called idempotence, which simplifies our workflow as we can run the automation no matter the current system state and be confident that successful completion places the system in the desired state. Idempotence is typically accomplished by having the automation check the current state of each resource, including its configuration parameters, and applying only necessary changes. This kind of smart resource application demands dedicated orchestration technology rather than simple scripting. Change Tracking and Control Automation needs to change over time as we respond to changes in application design or scaling needs. As needs change, we must manage automation changes as dueling versions will defeat the purpose of idempotence. This means we need Infrastructure as Code (IaC), where cloud orchestration automation is managed identically to other developed software, including change tracking and version management, typically in a Git repository such as this example. Change tracking helps us identify the source of issues sooner by knowing what changes have been made. 
For this reason, we should modify our cloud environments only by automation, never manually, so we can know that the repository matches the system state — and so we can ensure changes are reviewed, understood, and tested prior to deployment. Multiple Environment Support To test automation prior to production deployment, we need our tooling to support multiple environments. Ideally, we can support rapid creation and destruction of dynamic test environments because this increases confidence that there are no lingering required manual configurations and enables us to test our automation by using it. Even better, dynamic environments allow us to easily test changes to the deployed application, creating unique environments for developers, complex changes, or staging purposes prior to production. Cloud automation accomplishes multi-environment support through variables or parameters passed from a configuration file, environment variables, or on the command line. Managed Rollout Together, idempotent orchestration, a Git repository, and rapid deployment of dynamic environments bring the concept of dynamic environments to production, enabling managed rollouts for new application versions. There are multiple managed rollout techniques, including blue-green deployments and canary deployments. What they have in common is that a rollout consists of separately deploying the new version, transitioning users over to the new version either at once or incrementally, then removing the old version. Managed rollouts can eliminate application downtime when moving to new versions, and they enable rapid detection of problems coupled with automated fallback to a known working version. However, a managed rollout is complicated to implement as not all cloud resources support it natively, and changes to application architecture and design are typically required. Case Study: Implementing Cloud Automation Let's explore the key features of cloud automation in the context of a simple application. We'll deploy the same application using both a cloud-agnostic approach and a single-cloud approach to illustrate how both solutions provide the necessary features of cloud automation, but with differences in implementation and various advantages and disadvantages. Our simple application is based on Node, backed by a PostgreSQL database, and provides an interface to create, retrieve, update, and delete a list of to-do items. The full deployment solutions can be seen in this repository. Before we look at differences between the two deployments, it's worth considering what they have in common: Use a Git repository for change control of the IaC configuration Are designed for idempotent execution, so both have a simple "run the automation" workflow Allow for configuration parameters (e.g., cloud region data, unique names) that can be used to adapt the same automation to multiple environments Cloud-Agnostic Solution Our first deployment, as illustrated in Figure 1, uses Terraform (or OpenTofu) to deploy a Kubernetes cluster into a cloud environment. Terraform then deploys a Helm chart, with both the application and PostgreSQL database. Figure 1. Cloud-agnostic deployment automation The primary advantage of this approach, as seen in the figure, is that the same deployment architecture is used to deploy to both Amazon Web Services (AWS) and Microsoft Azure. The container images and Helm chart are identical in both cases, and the Terraform workflow and syntax are also identical. 
Additionally, we can test container images, Kubernetes deployments, and Helm charts separately from the Terraform configuration that creates the Kubernetes environment, making it easy to reuse much of this automation to test changes to our application. Finally, with Terraform and Kubernetes, we're working at a high level of abstraction, so our automation code is short but can still take advantage of the reliability and scalability capabilities built into Kubernetes. For example, an entire Azure Kubernetes Service (AKS) cluster is created in about 50 lines of Terraform configuration via the azurerm_kubernetes_cluster resource: Shell resource "azurerm_kubernetes_cluster" "k8s" { location = azurerm_resource_group.rg.location name = random_pet.azurerm_kubernetes_cluster_name.id ... default_node_pool { name = "agentpool" vm_size = "Standard_D2_v2" node_count = var.node_count } ... network_profile { network_plugin = "kubenet" load_balancer_sku = "standard" } } Even better, the Helm chart deployment is just five lines and is identical for AWS and Azure: Shell resource "helm_release" "todo" { name = "todo" repository = "https://book-of-kubernetes.github.io/helm/" chart = "todo" } However, a cloud-agnostic approach brings additional complexity. First, we must create and maintain configuration using multiple tools, requiring us to understand Terraform syntax, Kubernetes manifest YAML files, and Helm templates. Also, while the overall Terraform workflow is the same, the cloud provider configuration is different due to differences in Kubernetes cluster configuration and authentication. This means that adding a third cloud provider would require significant effort. Finally, if we wanted to use additional features such as cloud-native databases, we'd first need to understand the key configuration details of that cloud provider's database, then understand how to apply that configuration using Terraform. This means that we pay an additional price in complexity for each native cloud capability we use. Single Cloud Solution Our second deployment, illustrated in Figure 2, uses AWS CloudFormation to deploy an Elastic Compute Cloud (EC2) virtual machine and a Relational Database Service (RDS) cluster: Figure 2. Single cloud deployment automation The biggest advantage of this approach is that we create a complete application deployment solution entirely in CloudFormation's YAML syntax. By using CloudFormation, we are working directly with AWS cloud resources, so there's a clear correspondence between resources in the AWS web console and our automation. As a result, we can take advantage of the specific cloud resources that are best suited for our application, such as RDS for our PostgreSQL database. This use of the best resources for our application can help us manage our application's scalability and reliability needs while also managing our cloud spend. The tradeoff in exchange for this simplicity and clarity is a more verbose configuration. We're working at the level of specific cloud resources, so we have to specify each resource, including items such as routing tables and subnets that Terraform configures automatically. 
The resulting CloudFormation YAML is 275 lines and includes low-level details such as egress routing from our VPC to the internet: Shell TodoInternetRoute: Type: AWS::EC2::Route Properties: DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref TodoInternetGateway RouteTableId: !Ref TodoRouteTable Also, of course, the resources and configuration are AWS-specific, so if we wanted to adapt this automation to a different cloud environment, we would need to rewrite it from the ground up. Finally, while we can easily adapt this automation to create multiple deployments on AWS, it is not as flexible for testing changes to the application as we have to deploy a full RDS cluster for each new instance. Conclusion Our case study enabled us to exhibit key features and tradeoffs for cloud orchestration automation. There are many more than just these two options, but whatever solution is chosen should use an IaC repository for change control and a tool for idempotence and support for multiple environments. Within that cloud orchestration space, our deployment architecture and our tool selection will be driven by the importance of portability to new cloud environments compared to the cost in additional complexity. This is an excerpt from DZone's 2024 Trend Report, Cloud Native: Championing Cloud Development Across the SDLC.Read the Free Report
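To make the multi-environment support concrete, here is a rough sketch of how the same automation can be pointed at different environments from the command line. The variable file, workspace, and stack names are illustrative assumptions, not taken from the example repository.
Shell
# Terraform/OpenTofu: one configuration, separate workspaces and variable files per environment
terraform init
terraform workspace new dev && terraform apply -var-file=dev.tfvars
terraform workspace new prod && terraform apply -var-file=prod.tfvars
# CloudFormation: one template, one stack per environment
aws cloudformation deploy --template-file todo.yaml --stack-name todo-dev --parameter-overrides EnvName=dev
aws cloudformation deploy --template-file todo.yaml --stack-name todo-prod --parameter-overrides EnvName=prod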
In our last two articles, we explored how to configure CI/CD for Heroku using Heroku pipelines. When viewing a pipeline within the Heroku dashboard, you can easily start a deployment or promote your code from one environment to the next with the click of a button. From the dashboard, you can monitor the deployment and view its progress. This all works really well, assuming that you have Heroku open in your browser. But, what if you wanted to do it all from Slack? Software engineers use a lot of apps at work. Throughout the day, we are constantly bouncing between Zoom meetings, Jira tasks, Slack conversations, GitHub, email, our calendar, and our IDE. This context switching can be exhausting and also lead to a lot of visual clutter on our monitors. Sometimes, it’s nice to just live in Slack, and that’s why many tools offer Slack integrations. With these Slack integrations, you can monitor various processes and even use shortcut commands to trigger actions. Heroku ChatOps, the Heroku Slack integration, allows you to start and monitor deployments directly from Slack. In this article, we’ll explore some of the Slack commands it offers. Getting Started If you’d like to follow along throughout this tutorial, you’ll need a Heroku account and a GitHub account. You can create a Heroku account here, and you can create a GitHub account here. The demo app that we will use with our Heroku pipeline in this article is deployed to Heroku, and the code is hosted on GitHub. Create Our Heroku Pipeline We won’t go through the step-by-step process for creating a Heroku pipeline in this article. Refer to these articles for a walkthrough of creating a Heroku pipeline: How to create a Heroku pipeline with a staging and production app and a single main branch How to create a Heroku pipeline with a staging and production app with a dev branch and a main branch You can also read the Heroku docs for Heroku pipelines. Configuring your Heroku pipeline includes the following steps: Create a GitHub repo. Create a Heroku pipeline. Connect the GitHub repo to the Heroku pipeline. Add a staging app to the pipeline. Add a production app to the pipeline. The other activities that you’ll see in those articles, such as configuring review apps, Heroku CI, or automated deployments are optional. In fact, for the purposes of this demo, I recommend not configuring automated deployments, since we’ll be using some Slack commands to start the deployments. When you’re done, you should have a Heroku pipeline that looks something like this: Example Heroku Pipeline Connect to Slack Now that you have your Heroku pipeline created, it’s time for the fun part: integrating with Slack. You can install the Heroku ChatOps Slack app here. Clicking that link will prompt you to grant the Heroku ChatOps app permission to access your Slack workspace: Grant Heroku ChatOps access to your Slack workspace After that, you can add the Heroku ChatOps app to any Slack channel in your workspace. Add the Heroku ChatOps app After adding the app, type /h login and hit Enter. This will prompt you to connect your Heroku and GitHub accounts. You’ll see several Heroku OAuth and GitHub OAuth screens where you confirm connecting these accounts. (As a personal anecdote, I found that it took me several tries to connect my Heroku account and my GitHub account. It may be due to having several Slack workspaces to choose from, but I’m not sure.) After connecting your Heroku account and your GitHub account, you’re ready to start using Heroku in Slack. 
Connect your Heroku and GitHub accounts View All Pipelines To view all deployable pipelines, you can type /h pipelines: View all pipelines View Pipeline Info To see information about any given pipeline, type /h info <PIPELINE_NAME>. (Anything you see in angle brackets throughout this article should be replaced by an actual value. In this case, the value would be the name of a pipeline — for example, “heroku-flow-demo-pipeline”.) View pipeline info View Past Releases To view a history of past releases for any given pipeline, type /h releases <PIPELINE_NAME>. View past releases This command defaults to showing you past releases for the production app, so if you want to see the past releases for the staging app, you can type /h releases <PIPELINE_NAME> in <STAGE_NAME>, where <STAGE_NAME> is “staging”. View past staging releases Deploy To Staging Now that we know which pipelines are available, we can see information about any given pipeline along with when the code was last released for that pipeline. We’re ready to trigger a deployment. Most engineering organizations have a Slack channel (or channels) where they monitor deployments. Imagine being able to start a deployment right from that channel and monitor it as it goes out! That’s exactly what we’ll do next. To start a deployment to your staging environment, type /h deploy <PIPELINE_NAME> to <STAGE_NAME>, where <STAGE_NAME> is “staging.” Deploy to staging After running that command, an initial message is posted to communicate that the app is being deployed. Shortly after, you’ll also see several more messages, this time in a Slack thread on the original message: Slack messages sent when deploying to staging If you want to verify what you’re seeing in Slack, you can always check the Heroku pipeline in your Heroku dashboard. You’ll see the same information: The staging app has been deployed! Staging app shown in the Heroku dashboard Promote to Production Now, let’s promote our app to production. Without the Slack commands, we could navigate to our Heroku pipeline, click the “Promote to production” button, and then confirm that action in the modal dialog that appears. However, we’d prefer to stay in Slack. To promote the app to production from Slack, type /h promote <PIPELINE_NAME>. Promote to production Just like with the staging deployment, an initial Slack message will be sent, followed by several other messages as the production deployment goes out: Slack messages sent when promoting to production And — voilà — the latest changes to the app are now in production! Conclusion Now you can start and monitor Heroku app deployments all from Slack — no need to context switch or move between multiple apps. For more use cases and advanced setups, you can also check out the docs. Happy deploying!
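If Slack is ever unavailable, the same promotion can be run from the Heroku CLI; this is a minimal sketch using the placeholder names from the article, not an additional Slack command.
Shell
# promote the staging app to the downstream production stage
heroku pipelines:promote --app <your-staging-app-name>
# review the pipeline and the production release history
heroku pipelines:info <PIPELINE_NAME>
heroku releases --app <your-production-app-name>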
I moved my blog from WordPress to GitLab Pages in... 2016. I'm happy with the solution. However, I used GitHub Pages when I was teaching for both the courses and the exercises, e.g., Java EE. At the time, there was no GitHub Actions: I used Travis CI to build and deploy. Recently, I had to use GitHub Pages to publish my Apache APISIX workshop. Travis is no longer free. GitHub Actions are a thing. I used the now nominal path and faced a few hurdles; here are my findings. GitHub Pages, at the Time The previous usage of GitHub Pages was pretty straightforward. You pushed to a specific branch, gh-pages. GitHub Pages rendered the root of the branch as a website. Travis works by watching a .travis.yml build file at the repository root. When it detects a change, it runs it. I designed the script to build HTML from Asciidoc sources and push it to the branch. Here's the significant bit: YAML after_success: # - ... - git push --force --quiet "https://${GH_TOKEN}@${GH_REF}" master:gh-pages > /dev/null 2>&1 GitHub Pages Now When you enable GitHub Pages, you can choose its source: GitHub Actions or Deploy from a branch. I used a workflow to generate HTML from Asciidoctor, and my mistake was selecting the first choice. GitHub Pages From a Branch If you choose Deploy from a branch, you can select the branch name and the source root folder. Apart from that, the behavior is similar to the pre-GitHub Action behavior. A vast difference, however, is that GitHub runs a GitHub Action after each push to the branch, whether the push happens via an Action or not. While you can see the workflow executions, you cannot access its YAML source. By default, the build job in the workflow runs the following phases: Set up job Pull the Jekyll build page Action Checkout Build with Jekyll Upload artifact Post Checkout Complete job Indeed, whether you want it or not, GitHub Pages builds for Jekyll! I don't want it because I generate HTML from Asciidoc. To prevent Jekyll build, you can put a .nojekyll file at the root of the Pages branch. With it, the phases are: Set up job Checkout Upload artifact Post Checkout Complete job No more Jekyll! GitHub Pages From Actions The pages-build-deployment Action above creates a tar.gz archive and uploads it to the Pages site. The alternative is to deploy yourself using a custom GitHub workflow. The GitHub Marketplace offers Actions to help you with it: configure-github-pages: extracts various metadata about a site so that later actions can use them; upload-pages-artifact: packages and uploads the GitHub Page artifact deploy-pages: deploys a Pages site previously uploaded as an artifact The documentation does an excellent job of explaining how to use them across your custom workflow. Conclusion Deploying to GitHub Pages offers two options: either from a branch or from a custom workflow. In the first case, you only have to push to the configured branch; GitHub will handle the internal mechanics to make it work via a provided workflow. You don't need to pay attention to the logs. The alternative is to create your custom workflow and assemble the provided GitHub Actions. Once I understood the options, I made the first one work. It's good enough for me, and I don't need to care about GitHub Pages' internal workings. To Go Further GitHub Pages Using custom workflows with GitHub Pages configure-github-pages Marketplace Action upload-pages-artifact Marketplace Action deploy-pages Marketplace Action
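For the branch-based option, skipping the Jekyll build comes down to committing an empty .nojekyll file to the Pages branch; a minimal sketch, assuming gh-pages is the configured branch:
Shell
git checkout gh-pages
touch .nojekyll    # its presence removes the Jekyll phases from the pages-build-deployment run
git add .nojekyll
git commit -m "Disable Jekyll processing"
git push origin gh-pages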
Debugging Terraform providers is crucial for ensuring the reliability and functionality of infrastructure deployments. Terraform providers, written in languages like Go, can have complex logic that requires careful debugging when issues arise. One powerful tool for debugging Terraform providers is Delve, a debugger for the Go programming language. Delve allows developers to set breakpoints, inspect variables, and step through code, making it easier to identify and resolve bugs. In this blog, we will explore how to use Delve effectively for debugging Terraform providers. Setup Delve for Debugging Terraform Provider Shell # For Linux sudo apt-get install -y delve # For macOS brew install delve Refer here for more details on the installation. Debug Terraform Provider Using VS Code Follow the steps below to debug the provider. Download the provider code. We will use the IBM Cloud Terraform Provider for this debugging example. Update the provider’s main.go code to the following to support debugging: Go package main import ( "flag" "log" "github.com/IBM-Cloud/terraform-provider-ibm/ibm/provider" "github.com/IBM-Cloud/terraform-provider-ibm/version" "github.com/hashicorp/terraform-plugin-sdk/v2/plugin" ) func main() { var debug bool flag.BoolVar(&debug, "debug", true, "Set to true to enable debugging mode using delve") flag.Parse() opts := &plugin.ServeOpts{ Debug: debug, ProviderAddr: "registry.terraform.io/IBM-Cloud/ibm", ProviderFunc: provider.Provider, } log.Println("IBM Cloud Provider version", version.Version) plugin.Serve(opts) } Launch VS Code in debug mode. Refer here if you are new to debugging in VS Code. Create the launch.json using the below configuration. JSON { "version": "0.2.0", "configurations": [ { "name": "Debug Terraform Provider IBM with Delve", "type": "go", "request": "launch", "mode": "debug", "program": "${workspaceFolder}", "internalConsoleOptions": "openOnSessionStart", "args": [ "-debug" ] } ] } In VS Code, click “Start Debugging”. Starting the debugging starts the provider for debugging. To attach the Terraform CLI to the debugger, the console prints the environment variable TF_REATTACH_PROVIDERS. Copy this from the console. Set this as an environment variable in the terminal running the Terraform code. Now, in the VS Code window where the provider code is in debug mode, open the Go code and set breakpoints. To know more about breakpoints in VS Code, refer here. Execute 'terraform plan' followed by 'terraform apply' to see the Terraform provider breakpoint triggered as part of the terraform apply execution. This helps debug the Terraform execution and understand the behavior of the provider code for the particular inputs supplied in Terraform. Debug Terraform Provider Using DLV Command Line Follow the steps below to debug the provider using the command line. To know more about the dlv command-line commands, refer here. Follow steps 1 and 2 mentioned in Debug Terraform Provider Using VS Code. In the terminal, navigate to the provider Go code and issue go build -gcflags="all=-N -l" to compile the code. To execute the precompiled Terraform provider binary and begin a debug session, run dlv exec --accept-multiclient --continue --headless <path to the binary> -- -debug where the build file is present. For the IBM Cloud Terraform provider, use dlv exec --accept-multiclient --continue --headless ./terraform-provider-ibm -- -debug In another terminal where the Terraform code would be run, set the TF_REATTACH_PROVIDERS as an environment variable.
Notice the “API server” details in the above command output. In another (third) terminal, connect to the DLV server and start issuing DLV client commands. Set breakpoints using the break command. Now we are set to debug the Terraform provider when Terraform scripts are executed. Issue continue in the DLV client terminal to run until a breakpoint is hit. Now execute terraform plan and terraform apply to see the client stop on the breakpoint. Use DLV CLI commands to step in, step out, and continue the execution. This provides a way to debug the Terraform provider from the command line. Remote Debugging and CI/CD Pipeline Debugging The following are extensions of debugging with the dlv command-line tool. Remote Debugging Remote debugging allows you to debug a Terraform provider running on a remote machine or environment. Debugging in CI/CD Pipelines Debugging in CI/CD pipelines involves setting up your pipeline to run Delve and attach to your Terraform provider for debugging. This can be challenging due to the ephemeral nature of CI/CD environments. One approach is to use conditional logic in your pipeline configuration to only enable debugging when a specific environment variable is set. For example, you can use the following script in your pipeline configuration to start Delve and attach to your Terraform provider – YAML - name: Debug Terraform Provider if: env(DEBUG) == 'true' run: | dlv debug --headless --listen=:2345 --api-version=2 & sleep 5 # Wait for Delve to start export TF_LOG=TRACE terraform init terraform apply Best Practices for Effective Debugging With Delve Here are some best practices for effective debugging with Delve, along with tips for improving efficiency and minimizing downtime: Use version control: Always work with version-controlled code. This allows you to easily revert changes if debugging introduces new issues. Start small: Begin debugging with a minimal, reproducible test case. This helps isolate the problem and reduces the complexity of debugging. Understand the code: Familiarize yourself with the codebase before debugging. Knowing the code structure and expected behavior can speed up the debugging process. Use logging: Add logging statements to your code to track the flow of execution and the values of important variables. This can provide valuable insights during debugging. Use breakpoints wisely: Set breakpoints strategically at critical points in your code. Too many breakpoints can slow down the debugging process. Inspect variables: Use the print (p) command in Delve to inspect the values of variables. This can help you understand the state of your program at different points in time. Use conditional breakpoints: Use conditional breakpoints to break execution only when certain conditions are met. This can help you focus on specific scenarios or issues. Use stack traces: Use the stack command in Delve to view the call stack. This can help you understand the sequence of function calls leading to an issue. Use goroutine debugging: If your code uses goroutines, use Delve's goroutine debugging features to track down issues related to concurrency. Automate debugging: If you're debugging in a CI/CD pipeline, automate the process as much as possible to minimize downtime and speed up resolution. By following these best practices, you can improve the efficiency of your debugging process and minimize downtime caused by issues in your code.
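To tie the command-line flow and the breakpoint/stack/goroutine tips together, here is a rough sketch of a Delve client session. The address comes from the API server line printed by dlv exec, and the file, line, and variable names are hypothetical placeholders for whatever provider code you are debugging.
Shell
dlv connect 127.0.0.1:2345        # use the address and port shown in the dlv exec output
# inside the (dlv) prompt:
#   break ibm/service/resource_example.go:120     # hypothetical file and line
#   condition 1 name == "my-instance"             # fire breakpoint 1 only for this value
#   continue                                      # run until a breakpoint is hit
#   print name                                    # inspect a variable
#   stack                                         # show the call stack
#   goroutines                                    # list goroutines
#   next                                          # step over; use step / stepout to step in or out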
Conclusion In conclusion, mastering the art of debugging Terraform providers with Delve is a valuable skill that can significantly improve the reliability and performance of your infrastructure deployments. By setting up Delve for debugging, exploring advanced techniques like remote debugging and CI/CD pipeline debugging, and following best practices for effective debugging, you can effectively troubleshoot issues in your Terraform provider code. Debugging is not just about fixing bugs; it's also about understanding your code better and improving its overall quality. Dive deep into Terraform provider debugging with Delve, and empower yourself to build a more robust and efficient infrastructure with Terraform.
A typical machine learning (ML) workflow involves processes such as data extraction, data preprocessing, feature engineering, model training and evaluation, and model deployment. As data changes over time, when you deploy models to production, you want your model to learn continually from the stream of data. This means supporting the model’s ability to autonomously learn and adapt in production as new data is added. In practice, data scientists often work with Jupyter Notebooks for development work and find it hard to translate from notebooks to automated pipelines. To achieve the two main functions of an ML service in production, namely retraining (retrain the model on newer labeled data) and inference (use the trained model to get predictions), you might primarily use the following: Amazon SageMaker: A fully managed service that provides developers and data scientists the ability to build, train, and deploy ML models quickly AWS Glue: A fully managed extract, transform, and load (ETL) service that makes it easy to prepare and load data In this post, we demonstrate how to orchestrate an ML training pipeline using AWS Glue workflows and train and deploy the models using Amazon SageMaker. For this use case, you use AWS Glue workflows to build an end-to-end ML training pipeline that covers data extraction, data processing, training, and deploying models to Amazon SageMaker endpoints. Use Case For this use case, we use the DBpedia Ontology classification dataset to build a model that performs multi-class classification. We trained the model using the BlazingText algorithm, which is a built-in Amazon SageMaker algorithm that can classify unstructured text data into multiple classes. This post doesn’t go into the details of the model but demonstrates a way to build an ML pipeline that builds and deploys any ML model. Solution Overview The following diagram summarizes the approach for the retraining pipeline. The workflow contains the following elements: AWS Glue crawler: You can use a crawler to populate the Data Catalog with tables. This is the primary method used by most AWS Glue users. A crawler can crawl multiple data stores in a single run. Upon completion, the crawler creates or updates one or more tables in your Data Catalog. ETL jobs that you define in AWS Glue use these Data Catalog tables as sources and targets. AWS Glue triggers: Triggers are Data Catalog objects that you can use to either manually or automatically start one or more crawlers or ETL jobs. You can design a chain of dependent jobs and crawlers by using triggers. AWS Glue job: An AWS Glue job encapsulates a script that connects source data, processes it, and writes it to a target location. AWS Glue workflow: An AWS Glue workflow can chain together AWS Glue jobs, data crawlers, and triggers, and build dependencies between the components. When the workflow is triggered, it follows the chain of operations as described in the preceding image. The workflow begins by downloading the training data from Amazon Simple Storage Service (Amazon S3), followed by running data preprocessing steps and dividing the data into train, test, and validate sets in AWS Glue jobs. The training job runs on a Python shell running in AWS Glue jobs, which starts a training job in Amazon SageMaker based on a set of hyperparameters. When the training job is complete, an endpoint is created, which is hosted on Amazon SageMaker. This job in AWS Glue takes a few minutes to complete because it makes sure that the endpoint is in InService status. 
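As a rough illustration of that status check, the endpoint can also be polled from a terminal with the AWS CLI until it reports InService; the endpoint name below is a placeholder for whatever name the training job created.
Shell
# check the current endpoint status
aws sagemaker describe-endpoint --endpoint-name <your-endpoint-name> --query 'EndpointStatus' --output text
# or block until the endpoint is in service
aws sagemaker wait endpoint-in-service --endpoint-name <your-endpoint-name>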
At the end of the workflow, a message is sent to an Amazon Simple Queue Service (Amazon SQS) queue, which you can use to integrate with the rest of the application. You can also use the queue to trigger an action to send emails to data scientists that signal the completion of training, add records to management or log tables, and more. Setting up the Environment To set up the environment, complete the following steps: Configure the AWS Command Line Interface (AWS CLI) and a profile to use to run the code. For instructions, see Configuring the AWS CLI. Make sure you have the Unix utility wget installed on your machine to download the DBpedia dataset from the internet. Download the following code into your local directory. Organization of Code The code to build the pipeline has the following directory structure: --Glue workflow orchestration --glue_scripts --DataExtractionJob.py --DataProcessingJob.py --MessagingQueueJob.py --TrainingJob.py --base_resources.template --deploy.sh --glue_resources.template The code directory is divided into three parts: AWS CloudFormation templates: The directory has two AWS CloudFormation templates: glue_resources.template and base_resources.template. The glue_resources.template template creates the AWS Glue workflow-related resources, and base_resources.template creates the Amazon S3, AWS Identity and Access Management (IAM), and SQS queue resources. The CloudFormation templates create the resources and write their names and ARNs to AWS Systems Manager Parameter Store, which allows easy and secure access to ARNs further in the workflow. AWS Glue scripts: The folder glue_scripts holds the scripts that correspond to each AWS Glue job. This includes the ETL as well as model training and deploying scripts. The scripts are copied to the correct S3 bucket when the bash script runs. Bash script: A wrapper script deploy.sh is the entry point to running the pipeline. It runs the CloudFormation templates and creates resources in the dev, test, and prod environments. You use the environment name, also referred to as stage in the script, as a prefix to the resource names. The bash script performs other tasks, such as downloading the training data and copying the scripts to their respective S3 buckets. However, in a real-world use case, you can extract the training data from databases as a part of the workflow using crawlers. Implementing the Solution Complete the following steps: Go to the deploy.sh file and replace the algorithm_image name with <ecr_path> based on your Region. The following code example is a path for Region us-west-2: Shell algorithm_image="433757028032.dkr.ecr.us-west-2.amazonaws.com/blazingtext:latest" For more information about BlazingText parameters, see Common parameters for built-in algorithms. Enter the following code in your terminal: Shell sh deploy.sh -s dev AWS_PROFILE=your_profile_name This step sets up the infrastructure of the pipeline. On the AWS CloudFormation console, check that the templates have the status CREATE_COMPLETE. On the AWS Glue console, manually start the pipeline. In a production scenario, you can trigger this manually through a UI or automate it by scheduling the workflow to run at the prescribed time. The workflow provides a visual of the chain of operations and the dependencies between the jobs. To begin the workflow, in the Workflow section, select DevMLWorkflow. From the Actions drop-down menu, choose Run. View the progress of your workflow on the History tab and select the latest RUN ID.
The workflow takes approximately 30 minutes to complete. The following screenshot shows the view of the workflow post-completion. After the workflow is successful, open the Amazon SageMaker console. Under Inference, choose Endpoint. The following screenshot shows that the endpoint of the workflow deployed is ready. Amazon SageMaker also provides details about the model metrics calculated on the validation set in the training job window. You can further enhance model evaluation by invoking the endpoint using a test set and calculating the metrics as necessary for the application. Cleaning Up Make sure to delete the Amazon SageMaker hosting services—endpoints, endpoint configurations, and model artifacts. Delete both CloudFormation stacks to roll back all other resources. See the following code: Python def delete_resources(self): endpoint_name = self.endpoint try: sagemaker.delete_endpoint(EndpointName=endpoint_name) print("Deleted Test Endpoint ", endpoint_name) except Exception as e: print('Model endpoint deletion failed') try: sagemaker.delete_endpoint_config(EndpointConfigName=endpoint_name) print("Deleted Test Endpoint Configuration ", endpoint_name) except Exception as e: print(' Endpoint config deletion failed') try: sagemaker.delete_model(ModelName=endpoint_name) print("Deleted Test Endpoint Model ", endpoint_name) except Exception as e: print('Model deletion failed') This post describes a way to build an automated ML pipeline that not only trains and deploys ML models using a managed service such as Amazon SageMaker, but also performs ETL within a managed service such as AWS Glue. A managed service unburdens you from allocating and managing resources, such as Spark clusters, and makes it easy to move from notebook setups to production pipelines.
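Alongside the Python cleanup above, the two CloudFormation stacks can be rolled back from the terminal; a minimal sketch with placeholder stack names, since the real names are prefixed with the stage you passed to deploy.sh.
Shell
aws cloudformation delete-stack --stack-name <stage>-glue-resources
aws cloudformation delete-stack --stack-name <stage>-base-resources
# wait for the deletions to finish
aws cloudformation wait stack-delete-complete --stack-name <stage>-glue-resources
aws cloudformation wait stack-delete-complete --stack-name <stage>-base-resources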
In the rapidly evolving landscape of cloud computing, deploying Docker images across multiple Amazon Web Services (AWS) accounts presents a unique set of challenges and opportunities for organizations aiming for scalability and security. According to the State of DevOps Report 2022, 50% of DevOps adopters are recognized as elite or high-performing organizations. This guide offers a comprehensive blueprint for leveraging AWS services—such as ECS, CodePipeline, and CodeDeploy — combined with the robust Blue/Green deployment strategy, to facilitate seamless Docker deployments. It also emphasizes employing best security practices within a framework designed to streamline and secure deployments across AWS accounts. By integrating CloudFormation with a cross-account deployment strategy, organizations can achieve an unparalleled level of control and efficiency, ensuring that their infrastructure remains both robust and flexible. Proposed Architecture The architecture diagram showcases a robust AWS deployment model that bridges the gap between development and production environments through a series of orchestrated services. It outlines how application code transitions from the development stage, facilitated by AWS CodeCommit, through a testing phase, and ultimately to production. This system uses AWS CodePipeline for continuous integration and delivery, leverages Amazon ECR for container image storage, and employs ECS with Fargate for container orchestration. It provides a clear, high-level view of the path an application takes from code commit to user delivery. Prerequisites To successfully implement the described infrastructure for deploying Docker images on Amazon ECS with a multi-account CodePipeline and Blue/Green deployment strategy, several prerequisites are necessary. Here are the key prerequisites: Create three separate AWS accounts: Development, Test, and Production. Install and configure the AWS Command Line Interface (CLI) and relevant AWS SDKs for scripting and automation. Fork the aws-cicd-cross-account-deployment GitHub repo and add all the files to your CodeCommit. Environment Setup This guide leverages a comprehensive suite of AWS services and tools, meticulously orchestrated to facilitate the seamless deployment of Docker images on Amazon Elastic Container Service (ECS) across multiple AWS accounts. Before we start setting up the environment, use this code repo for the relevant files mentioned in the steps below. 1. IAM Roles and Permissions IAM roles: Create IAM roles required for the deployment process. Use cross-account.yaml template in CloudFormation to create cross-account IAM roles in Test and Production accounts, allowing necessary permissions for cross-account interactions. 
YAML

AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  CodeDeployRoleInThisAccount:
    Type: CommaDelimitedList
    Description: Names of existing Roles you want to add to the newly created Managed Policy
  DevelopmentAccCodePipelinKMSKeyARN:
    Type: String
    Description: ARN of the KMS key from the Development/Global Resource Account
  DevelopmentAccCodePipelineS3BucketARN:
    Type: String
    Description: ARN of the S3 Bucket used by CodePipeline in the Development/Global Resource Account
  DevelopmentAccNumber:
    Type: String
    Description: Account Number of the Development Resources Account
Resources:
  CrossAccountAccessRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Join [ ":", [ "arn", "aws", "iam:", !Ref DevelopmentAccNumber, "root" ] ]
              Service:
                - codedeploy.amazonaws.com
                - codebuild.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Policies:
        - PolicyName: CrossAccountServiceAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - 's3:List*'
                  - 's3:Get*'
                  - 's3:Describe*'
                Resource: '*'
              - Effect: Allow
                Action:
                  - 's3:*'
                Resource: !Ref DevelopmentAccCodePipelineS3BucketARN
              - Effect: Allow
                Action:
                  - 'codedeploy:*'
                  - 'codebuild:*'
                  - 'sns:*'
                  - 'cloudwatch:*'
                  - 'codestar-notifications:*'
                  - 'chatbot:*'
                  - 'ecs:*'
                  - 'ecr:*'
                  - 'codedeploy:Batch*'
                  - 'codedeploy:Get*'
                  - 'codedeploy:List*'
                Resource: '*'
              - Effect: Allow
                Action:
                  - 'codedeploy:Batch*'
                  - 'codedeploy:Get*'
                  - 'codedeploy:List*'
                  - 'kms:*'
                  - 'codedeploy:CreateDeployment'
                  - 'codedeploy:GetDeployment'
                  - 'codedeploy:GetDeploymentConfig'
                  - 'codedeploy:GetApplicationRevision'
                  - 'codedeploy:RegisterApplicationRevision'
                Resource: '*'
              - Effect: Allow
                Action:
                  - 'iam:PassRole'
                Resource: '*'
                Condition:
                  StringLike:
                    'iam:PassedToService': ecs-tasks.amazonaws.com
  KMSAccessPolicy:
    Type: 'AWS::IAM::ManagedPolicy'
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowThisRoleToAccessKMSKeyFromOtherAccount
            Effect: Allow
            Action:
              - 'kms:DescribeKey'
              - 'kms:GenerateDataKey*'
              - 'kms:Encrypt'
              - 'kms:ReEncrypt*'
              - 'kms:Decrypt'
            Resource: !Ref DevelopmentAccCodePipelinKMSKeyARN
      Roles: !Ref CodeDeployRoleInThisAccount
  S3BucketAccessPolicy:
    Type: 'AWS::IAM::ManagedPolicy'
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowThisRoleToAccessS3inOtherAccount
            Effect: Allow
            Action:
              - 's3:Get*'
            Resource: !Ref DevelopmentAccCodePipelineS3BucketARN
          - Effect: Allow
            Action:
              - 's3:ListBucket'
            Resource: !Ref DevelopmentAccCodePipelineS3BucketARN
      Roles: !Ref CodeDeployRoleInThisAccount

2. CodePipeline Configuration

Stages and actions: Configure CodePipeline actions for the source, build, and deploy stages by running pipeline.yaml in CloudFormation.
Source repository: Use CodeCommit as the source repository for all the files. Add all the files from the demo-app GitHub folder to the repository.

3. Networking Setup

VPC configuration: Utilize the vpc.yaml CloudFormation template to set up the VPC. Define subnets for different purposes, such as public and private.

YAML

Description: This template deploys a VPC, with a pair of public and private subnets spread across two Availability Zones. It deploys an internet gateway, with a default route on the public subnets. It deploys a pair of NAT gateways (one in each AZ), and default routes for them in the private subnets.
Parameters:
  EnvVar:
    Description: An environment name that is prefixed to resource names
    Type: String
  VpcCIDR:
    #Description: Please enter the IP range (CIDR notation) for this VPC
    Type: String
  PublicSubnet1CIDR:
    Description: Please enter the IP range (CIDR notation) for the public subnet in the first Availability Zone
    Type: String
  PublicSubnet2CIDR:
    Description: Please enter the IP range (CIDR notation) for the public subnet in the second Availability Zone
    Type: String
  PrivateSubnet1CIDR:
    Description: Please enter the IP range (CIDR notation) for the private subnet in the first Availability Zone
    Type: String
  PrivateSubnet2CIDR:
    Description: Please enter the IP range (CIDR notation) for the private subnet in the second Availability Zone
    Type: String
  DBSubnet1CIDR:
    Description: Please enter the IP range (CIDR notation) for the private subnet in the first Availability Zone
    Type: String
  DBSubnet2CIDR:
    Description: Please enter the IP range (CIDR notation) for the private subnet in the second Availability Zone
    Type: String
  vpcname:
    #Description: Please enter the IP range (CIDR notation) for the private subnet in the second Availability Zone
    Type: String
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCIDR
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Ref vpcname
  InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: !Ref EnvVar
  InternetGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 0, !GetAZs '' ]
      CidrBlock: !Ref PublicSubnet1CIDR
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${EnvVar} Public Subnet (AZ1)
  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 1, !GetAZs '' ]
      CidrBlock: !Ref PublicSubnet2CIDR
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${EnvVar} Public Subnet (AZ2)
  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 0, !GetAZs '' ]
      CidrBlock: !Ref PrivateSubnet1CIDR
      MapPublicIpOnLaunch: false
      Tags:
        - Key: Name
          Value: !Sub ${EnvVar} Private Subnet (AZ1)
  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 1, !GetAZs '' ]
      CidrBlock: !Ref PrivateSubnet2CIDR
      MapPublicIpOnLaunch: false
      Tags:
        - Key: Name
          Value: !Sub ${EnvVar} Private Subnet (AZ2)
  DBSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 0, !GetAZs '' ]
      CidrBlock: !Ref DBSubnet1CIDR
      MapPublicIpOnLaunch: false
      Tags:
        - Key: Name
          Value: !Sub ${EnvVar} DB Subnet (AZ1)
  DBSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 1, !GetAZs '' ]
      CidrBlock: !Ref DBSubnet2CIDR
      MapPublicIpOnLaunch: false
      Tags:
        - Key: Name
          Value: !Sub ${EnvVar} DB Subnet (AZ2)
  NatGateway1EIP:
    Type: AWS::EC2::EIP
    DependsOn: InternetGatewayAttachment
    Properties:
      Domain: vpc
  NatGateway2EIP:
    Type: AWS::EC2::EIP
    DependsOn: InternetGatewayAttachment
    Properties:
      Domain: vpc
  NatGateway1:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGateway1EIP.AllocationId
      SubnetId: !Ref PublicSubnet1
  NatGateway2:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGateway2EIP.AllocationId
      SubnetId: !Ref PublicSubnet2
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${EnvVar} Public Routes
  DefaultPublicRoute:
    Type: AWS::EC2::Route
    DependsOn: InternetGatewayAttachment
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PublicRouteTable
      SubnetId: !Ref PublicSubnet1
  PublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PublicRouteTable
      SubnetId: !Ref PublicSubnet2
  PrivateRouteTable1:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${EnvVar} Private Routes (AZ1)
  DefaultPrivateRoute1:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable1
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway1
  PrivateSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable1
      SubnetId: !Ref PrivateSubnet1
  PrivateRouteTable2:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${EnvVar} Private Routes (AZ2)
  DefaultPrivateRoute2:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway2
  PrivateSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      SubnetId: !Ref PrivateSubnet2
  NoIngressSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: "no-ingress-sg"
      GroupDescription: "Security group with no ingress rule"
      VpcId: !Ref VPC
Outputs:
  VPC:
    Description: A reference to the created VPC
    Value: !Ref VPC
  PublicSubnets:
    Description: A list of the public subnets
    Value: !Join [ ",", [ !Ref PublicSubnet1, !Ref PublicSubnet2 ]]
  PrivateSubnets:
    Description: A list of the private subnets
    Value: !Join [ ",", [ !Ref PrivateSubnet1, !Ref PrivateSubnet2 ]]
  PublicSubnet1:
    Description: A reference to the public subnet in the 1st Availability Zone
    Value: !Ref PublicSubnet1
  PublicSubnet2:
    Description: A reference to the public subnet in the 2nd Availability Zone
    Value: !Ref PublicSubnet2
  PrivateSubnet1:
    Description: A reference to the private subnet in the 1st Availability Zone
    Value: !Ref PrivateSubnet1
  PrivateSubnet2:
    Description: A reference to the private subnet in the 2nd Availability Zone
    Value: !Ref PrivateSubnet2
  NoIngressSecurityGroup:
    Description: Security group with no ingress rule
    Value: !Ref NoIngressSecurityGroup

4. ECS Cluster and Service Configuration

ECS clusters: Create two ECS clusters: one in the Test account and one in the Production account.
Service and task definitions: Create the ECS service and task definition in the Test account using the new-ecs-test-infra.yaml CloudFormation template.

YAML

Parameters:
  privatesubnet1:
    Type: String
  privatesubnet2:
    Type: String
Resources:
  ECSService:
    Type: AWS::ECS::Service
    # DependsOn: HTTPListener
    # DependsOn: HTTPSListener
    Properties:
      LaunchType: FARGATE
      Cluster: new-cluster
      DesiredCount: 0
      TaskDefinition: new-taskdef-anycompany
      DeploymentController:
        Type: CODE_DEPLOY
      HealthCheckGracePeriodSeconds: 300
      SchedulingStrategy: REPLICA
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: DISABLED
          Subnets: [!Ref privatesubnet1, !Ref privatesubnet2]
      LoadBalancers:
        - TargetGroupArn: arn:aws:elasticloadbalancing:us-east-1:487269258483:targetgroup/TargetGroup1/6b75e9eb3289df56
          ContainerPort: 80
          ContainerName: anycompany-test

Create the ECS service and task definition in the Production account using the new-ecs-prod-infra.yaml CloudFormation template.
YAML

Parameters:
  privatesubnet1:
    Type: String
  privatesubnet2:
    Type: String
Resources:
  ECSService:
    Type: AWS::ECS::Service
    # DependsOn: HTTPListener
    # DependsOn: HTTPSListener
    Properties:
      LaunchType: FARGATE
      Cluster: new-cluster
      DesiredCount: 0
      TaskDefinition: new-anycompany-prod
      DeploymentController:
        Type: CODE_DEPLOY
      HealthCheckGracePeriodSeconds: 300
      SchedulingStrategy: REPLICA
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: DISABLED
          Subnets: [!Ref privatesubnet1, !Ref privatesubnet2]
      LoadBalancers:
        - TargetGroupArn: arn:aws:elasticloadbalancing:us-east-1:608377680862:targetgroup/TargetGroup1/d18c87e013000697
          ContainerPort: 80
          ContainerName: anycompany-test

5. CodeDeploy Blue/Green Deployment

CodeDeploy configuration: Configure CodeDeploy for Blue/Green deployments.
Deployment groups: Create specific deployment groups for each environment.
Deployment configurations: Configure deployment configurations based on your requirements.

6. Notification Setup (SNS)

SNS configuration: Manually create an SNS topic for notifications during the deployment process.
Notification content: Configure SNS to send notifications for manual approval steps in the deployment pipeline.

Pipeline and Deployment

1. Source Stage
CodePipeline starts with the source stage, pulling the application source code from the CodeCommit repository.
2. Build Stage
The build stage builds and packages the Docker images and prepares them for deployment.
3. Deployment to Development
Upon approval, the pipeline deploys the Docker images to the ECS cluster in the Development account using a Blue/Green deployment strategy.
4. Testing in Development
The deployed application in the Development environment undergoes testing and validation.
5. Deployment to Test
If testing in the Development environment is successful, the pipeline triggers the deployment to the ECS cluster in the Test account using the same Blue/Green strategy.
6. Testing in Test
The application undergoes further testing in the Test environment.
7. Manual Approval
After successful testing in the Test environment, the pipeline triggers an SNS notification and requires manual approval to proceed.
8. Deployment to Production
After approval, the pipeline triggers the deployment to the ECS cluster in the Production account using the Blue/Green strategy.
9. Final Testing in Production
The application undergoes final testing in the Production environment.
10. Completion
The pipeline completes, and the new version of the application is running in the Production environment.

Conclusion

In this guide, we’ve explored a strategic approach to deploying Docker images across multiple AWS accounts using a combination of ECS, CodePipeline, CodeDeploy, and a Blue/Green deployment strategy, all provisioned through AWS CloudFormation. This methodology not only enhances security and operational efficiency but also provides a scalable infrastructure capable of supporting growth. By following the steps outlined, organizations can strengthen their deployment processes, embrace the agility of Infrastructure as Code, and maintain a robust and adaptable cloud environment. Implementing this guide's recommendations allows businesses to optimize costs by using AWS services such as Fargate and embracing DevOps practices. The Blue/Green deployment strategy minimizes downtime, ensuring resources are used efficiently during transitions. With a focus on DevOps practices and the use of automation tools like AWS CodePipeline, operational overhead is minimized.
CloudFormation templates automate resource provisioning, reducing manual intervention and ensuring consistent and repeatable deployments.
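As a small illustration of how the templates above can be provisioned without the console, the sketch below creates the cross-account.yaml stack in the Test or Production account with boto3. The stack name, ARNs, role name, and account number are placeholders, and the script assumes it runs with credentials for the target account, so treat it as a starting point rather than a drop-in step.

Python

import boto3

# Placeholder values -- substitute the real role names, ARNs, and account number
# from your Development account before running this in the Test or Production account.
PARAMETERS = {
    "CodeDeployRoleInThisAccount": "CodeDeployServiceRole",
    "DevelopmentAccCodePipelinKMSKeyARN": "arn:aws:kms:us-east-1:111111111111:key/example-key-id",
    "DevelopmentAccCodePipelineS3BucketARN": "arn:aws:s3:::example-codepipeline-artifact-bucket",
    "DevelopmentAccNumber": "111111111111",
}

cloudformation = boto3.client("cloudformation")

with open("cross-account.yaml") as template_file:
    template_body = template_file.read()

cloudformation.create_stack(
    StackName="cross-account-roles",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": key, "ParameterValue": value}
        for key, value in PARAMETERS.items()
    ],
    Capabilities=["CAPABILITY_IAM"],  # the template creates IAM roles and managed policies
)

# Block until the stack (and therefore the cross-account role) is ready
cloudformation.get_waiter("stack_create_complete").wait(StackName="cross-account-roles")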
Jenkins allows you to automate everything, from building and testing code to deploying to production. Jenkins works on the principle of pipelines, which can be customized to fit the needs of any project. After installing Jenkins, launch it and navigate to the web interface, usually available at http://localhost:8080. On the first launch, Jenkins asks you to enter a password, which is displayed in the console or stored in a file on the server. After entering the password, you are redirected to the plugin setup page. To work with infrastructure pipelines, you will need the following plugins:

Pipeline: The main plugin for creating and managing pipelines in Jenkins.
Git plugin: Necessary for integration with Git and working with repositories.
Docker Pipeline: Allows you to use Docker within Jenkins pipelines.

Also, in the Jenkins settings there is a section for configuring version control systems, where you need to add a repository. For Git, this requires specifying the repository URL and account credentials. Now you can create an infrastructure pipeline, which is a series of automated steps that transform your code into production-ready software. The main goal is to make the software delivery process as fast as possible.

Creating a Basic Pipeline

A pipeline consists of a series of steps, each of which performs a specific task. Typically, the steps look like this:

Checkout: extracting the source code from the version control system
Build: building the project using build tools, such as Maven
Test: running automated tests to check the code quality
Deploy: deploying the built application to the target server or cloud

Conditions determine the circumstances under which each pipeline step should or should not be executed. Jenkins Pipeline has a "when" directive that allows you to restrict the execution of steps based on specific conditions. Triggers determine what exactly triggers the execution of the pipeline:

Push to repository: the pipeline is triggered every time new commits are pushed to the repository.
Schedule: the pipeline can be configured to run on a schedule, for example, every night for nightly builds.
External events: the pipeline can also be configured to run in response to external events.

To make all this work, you need to create a Jenkinsfile, a file that describes the pipeline. Here's an example of a simple Jenkinsfile:

Groovy

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git 'https://your-repository-url.git'
            }
        }
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Deploy') {
            steps {
                // deployment steps
            }
        }
    }
    post {
        success {
            echo 'The pipeline has completed successfully.'
        }
    }
}

This Jenkinsfile describes a basic pipeline with four stages: checkout, build, test, and deploy.

Parameterized Builds

Parameterized builds allow you to manage build parameters dynamically. To start, define the parameters in the Jenkinsfile used to configure the pipeline. This is done with the "parameters" directive, where you can specify various parameter types (string, choice, booleanParam, etc.).
Groovy

pipeline {
    agent any
    parameters {
        string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
        choice(name: 'VERSION', choices: ['1.0', '1.1', '2.0'], description: 'App version to deploy')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run tests?')
    }
    stages {
        stage('Initialization') {
            steps {
                echo "Deploying version ${params.VERSION} to ${params.DEPLOY_ENV}"
                script {
                    if (params.RUN_TESTS) {
                        echo "Tests will be run"
                    } else {
                        echo "Skipping tests"
                    }
                }
            }
        }
        // other stages
    }
}

When the pipeline is executed, the system prompts the user to fill in the parameters according to their definitions. You can use parameters to conditionally execute certain pipeline stages; for example, only run the testing stages if the RUN_TESTS parameter is set to true. The DEPLOY_ENV parameter can be used to dynamically select the target environment for deployment, allowing you to use the same pipeline to deploy to different environments, such as staging and production.

Dynamic Environment Creation

Dynamic environment creation allows you to automate the provisioning and removal of temporary test or staging environments for each new build, branch, or pull request. In Jenkins, this can be achieved using pipelines, Groovy scripts, and integration with tools like Docker, Kubernetes, Terraform, etc. Let's say you want to create a temporary test environment for each branch in a Git repository, using Docker. In the Jenkinsfile, you can define stages for building a Docker image, running a container for testing, and removing the container after the tests are complete:

Groovy

pipeline {
    agent any
    stages {
        stage('Build Docker Image') {
            steps {
                script {
                    // For example, the Dockerfile is located at the root of the project
                    sh 'docker build -t my-app:${GIT_COMMIT} .'
                }
            }
        }
        stage('Deploy to Test Environment') {
            steps {
                script {
                    // run the container from the built image
                    sh 'docker run -d --name test-my-app-${GIT_COMMIT} -p 8080:80 my-app:${GIT_COMMIT}'
                }
            }
        }
        stage('Run Tests') {
            steps {
                script {
                    // steps to run tests
                    echo 'Running tests against the test environment'
                }
            }
        }
        stage('Cleanup') {
            steps {
                script {
                    // stop and remove the container after testing
                    sh 'docker stop test-my-app-${GIT_COMMIT}'
                    sh 'docker rm test-my-app-${GIT_COMMIT}'
                }
            }
        }
    }
}

If Kubernetes is used to manage the containers, you can dynamically create and delete namespaces to isolate the test environments. In this case, the Jenkinsfile might look like this:

Groovy

pipeline {
    agent any
    environment {
        KUBE_NAMESPACE = "test-${GIT_COMMIT}"
    }
    stages {
        stage('Create Namespace') {
            steps {
                script {
                    // create a new namespace in Kubernetes
                    sh "kubectl create namespace ${KUBE_NAMESPACE}"
                }
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                script {
                    // deploy the application to the created namespace
                    sh "kubectl apply -f k8s/deployment.yaml -n ${KUBE_NAMESPACE}"
                    sh "kubectl apply -f k8s/service.yaml -n ${KUBE_NAMESPACE}"
                }
            }
        }
        stage('Run Tests') {
            steps {
                script {
                    // test the application
                    echo 'Running tests against the Kubernetes environment'
                }
            }
        }
        stage('Cleanup') {
            steps {
                script {
                    // delete the namespace and all associated resources
                    sh "kubectl delete namespace ${KUBE_NAMESPACE}"
                }
            }
        }
    }
}

Easily Integrate Prometheus

Prometheus metrics can be enabled in Jenkins by installing the Prometheus metrics plugin through "Manage Jenkins" -> "Manage Plugins." After installation, go to the Jenkins settings and, in the Prometheus Metrics section, enable the exposure of metrics.
The plugin is accessible by default at the URL http://<JENKINS_URL>/prometheus/, where <JENKINS_URL> is the address of the Jenkins server. In the Prometheus configuration file prometheus.yml, add a new job to collect metrics from Jenkins:

YAML

scrape_configs:
  - job_name: 'jenkins'
    metrics_path: '/prometheus/'
    static_configs:
      - targets: ['<JENKINS_IP>:<PORT>']

Then, in Grafana, you can add Prometheus as a data source and visualize the data (a quick way to verify the endpoint from a script is shown at the end of this section). The Prometheus integration allows you to monitor various Jenkins metrics, such as the number of builds, job durations, and resource utilization. This can be particularly useful for identifying performance bottlenecks, tracking trends, and optimizing your Jenkins infrastructure. By leveraging the power of Prometheus and Grafana, you can gain valuable insights into your Jenkins environment and make data-driven decisions to improve your continuous integration and deployment processes.

Conclusion

Jenkins is a powerful automation tool that can help streamline your software delivery process. By leveraging infrastructure pipelines, you can easily define and manage the steps required to transform your code into production-ready software.
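Before wiring the endpoint into Grafana dashboards, it can be handy to confirm that Jenkins is actually exposing metrics. The short script below pulls the raw Prometheus text from the plugin endpoint and prints a few metric families; the URL and the metric-name prefixes are assumptions that depend on your Jenkins address and plugin version.

Python

import requests

# Hypothetical address -- point this at your Jenkins server
JENKINS_METRICS_URL = "http://localhost:8080/prometheus/"

response = requests.get(JENKINS_METRICS_URL, timeout=10)
response.raise_for_status()

# Print queue- and executor-related metrics as a quick smoke test
for line in response.text.splitlines():
    if line.startswith(("jenkins_queue", "jenkins_executor")):
        print(line)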
CI/CD (Continuous Integration and Continuous Delivery) is an essential part of modern software development. CI/CD tools help developers automate the process of building, testing, and deploying software, which saves time and improves code quality. GitLab and Jenkins are two popular CI/CD tools that have gained widespread adoption in the software development industry. In this article, we will compare GitLab and Jenkins and help you decide which one is the best CI/CD tool for your organization.

What Are GitLab and Jenkins?

Before we get down to brass tacks, let's quickly go over some definitions to give you a clearer picture of each tool's purpose and capabilities.

GitLab: GitLab is a web-based DevOps lifecycle tool that provides a complete DevOps platform, including source code management, CI/CD pipelines, issue tracking, and more. It offers an integrated environment for teams to collaborate on projects, automate workflows, and deliver software efficiently.

Jenkins: Jenkins is an open-source automation server that enables developers to build, test, and deploy software projects continuously. It offers a wide range of plugins and integrations, making it highly customizable and adaptable to various development environments. Jenkins is known for its flexibility and extensibility, allowing teams to create complex CI/CD pipelines tailored to their specific needs.

The Technical Difference Between GitLab and Jenkins

Feature | GitLab | Jenkins
Version control | Git | N/A (requires integration with a separate VCS tool)
Continuous integration | Yes, built-in | Yes, built-in
Continuous delivery | Yes, built-in | Requires plugins or scripting
Security | Built-in security features | Requires plugins or scripting
Code review | Built-in code review features | Requires plugins or scripting
Performance | Generally faster due to built-in Git repository | May require additional resources for performance
Scalability | Scales well for small to medium-sized teams | Scales well for large teams
Cost | Free for self-hosted and cloud-hosted versions | Free for self-hosted; cloud-hosted options carry a cost
Community | Active open-source community and enterprise support | Active open-source community and enterprise support

GitLab vs Jenkins: Features and Performance

1. Ease of Use
GitLab is an all-in-one platform that provides a comprehensive solution for CI/CD, version control, project management, and collaboration. It has a simple and intuitive user interface that makes it easy for developers to set up and configure their CI/CD pipelines. On the other hand, Jenkins is a highly customizable tool that requires some technical expertise to set up and configure. It has a steep learning curve, and new users may find it challenging to get started.

2. Integration
GitLab and Jenkins both support integration with a wide range of tools and services. However, GitLab offers more native integrations with third-party services, including cloud providers, deployment platforms, and monitoring tools. This makes it easier for developers to set up their pipelines and automate their workflows. Jenkins also has a vast library of plugins that support integration with various tools and services. These plugins cover a wide range of functionalities, including source code management, build triggers, testing frameworks, deployment automation, and more.

3. Performance
GitLab is known for its fast and reliable performance. It has built-in caching and parallel processing capabilities that allow developers to run their pipelines quickly and efficiently.
Jenkins, on the other hand, can suffer from performance issues when running large and complex pipelines. It requires manual optimization to ensure it can handle the load.

4. Security
GitLab has built-in security features that help keep code secure at every pipeline stage. It provides features like code scanning, vulnerability management, and container scanning that help developers identify and fix security issues before they make it into production. Jenkins relies heavily on plugins for security features. This can make it challenging to ensure your pipeline is secure, especially if you are using third-party plugins.

5. Cost
GitLab offers free and paid plans. The free plan includes most features a small team would need for CI/CD. The paid plans include additional features like deployment monitoring, auditing, and compliance. Jenkins is an open-source tool that is free to use. However, it requires significant resources to set up and maintain, which can add to the overall cost of using the tool.

GitLab vs Jenkins: Which One Is Best?

GitLab and Jenkins are two popular tools used in the software development process. However, it’s difficult to say which one is better, as it depends on the specific needs of your project and organization. GitLab may be a better choice if you want an integrated solution with an intuitive interface and built-in features. Jenkins could be the better option if you want a customizable and extensible automation server that can be easily integrated with other tools in your workflow. GitLab is a complete DevOps platform that includes source code management, continuous integration/continuous delivery (CI/CD), and more. It offers features such as Git repository management, issue tracking, code review, and CI/CD pipelines. GitLab also has a built-in container registry and Kubernetes integration, making it easy to deploy applications to container environments. On the other hand, Jenkins is a popular open-source automation server widely used for CI/CD pipelines. It offers numerous plugins for functionality such as code analysis, testing, deployment, and monitoring. Jenkins can be easily integrated with other tools in the software development process, such as Git, GitHub, and Bitbucket. Ultimately, the choice between GitLab and Jenkins will depend on your specific needs and preferences. GitLab is an all-in-one solution, while Jenkins is more flexible and can be customized with plugins.

Conclusion

GitLab and Jenkins are both excellent CI/CD tools that offer a range of features and integrations. However, GitLab has the edge when it comes to ease of use, integration, performance, security, and cost. GitLab’s all-in-one platform makes it easy for developers to set up and configure their pipelines, while its native integrations and built-in features make it more efficient and secure than Jenkins. Therefore, if you are looking for a CI/CD tool that is easy to use, cost-effective, and reliable, GitLab is the best option for your organization.
John Vester, Staff Engineer, Marqeta
Raghava Dittakavi, Manager, Release Engineering & DevOps, TraceLink