Continuous Deployment with VMware Code Stream

The expectations on application and infrastructure delivery are continually increasing. In this blog, we will look at how VMware vRealize Automation Cloud and the Code Stream service can be used to deliver on these expectations, with a focus on continuous deployment for infrastructure as code (IaC) and applications, with consistent delivery regardless of the cloud provider.

To demonstrate our continuous deployment use case, we will use the Time Traffic Overview application (Tito), a multi-tier load-balanced application based on Apache and MySQL. The app uses Google Maps data to determine the optimal travel time from home to work, and was developed by Vincent Meoc.


Tito web application

Cloud Agnostic Blueprint for Tito

The Tito application is deployed in the Code Stream pipeline via VMware Cloud Assembly. Cloud-agnostic blueprints enable deployment to Amazon Web Services, Azure, Google Cloud Platform, VMware Cloud on AWS or vSphere, with the change of a single tag driving placement constraints matched to cloud account capabilities.

Blue-Green Deployment

In our example deployment, we are using a Blue-Green deployment strategy, with new builds triggered on an infrastructure or application code change in Git via a webhook to Code Stream. UI automation testing and user load testing are used to gate the release, and Route53 is used for traffic management to promote a new deployment. This allows for a blue-green failover scenario via DNS; alternate strategies such as a canary release could be achieved using weighted distribution.

VMware Code Stream

VMware Code Stream provides release automation and continuous delivery to enable frequent, reliable releases of infrastructure and applications. The new deployment is triggered via a commit to Git which results in pipeline execution via a webhook.

VMware Code Stream Pipeline Stages

The diagram below provides an outline of the task sequence of the Code Stream deployment pipeline broken down into stages.

Build stage - Our blueprint deploys to any cloud based on pipeline variables that define the target cloud. These input variables are bound to constraint tags, and the constraint tags supplied by Code Stream drive the Cloud Assembly placement engine.
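As a rough sketch of this binding, a pipeline input naming the target cloud could resolve to a constraint tag that the placement engine matches against cloud account capability tags. The tag keys and values below are illustrative, not the actual tags used in the pipeline:

```python
# Hypothetical mapping from a Code Stream pipeline input to a
# Cloud Assembly constraint tag; the tag values are illustrative.
TARGET_CLOUD_TAGS = {
    "aws": "cloud:aws",
    "azure": "cloud:azure",
    "gcp": "cloud:gcp",
    "vsphere": "cloud:vsphere",
}

def constraint_tag_for(target_cloud: str) -> str:
    """Return the placement constraint tag for the requested cloud."""
    try:
        return TARGET_CLOUD_TAGS[target_cloud.lower()]
    except KeyError:
        raise ValueError(f"Unknown target cloud: {target_cloud}")
```

Changing the single pipeline input is then enough to redirect the whole deployment to a different provider.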

Once the deployment is successfully created, we poll the application load balancer address to check for an HTTP 200 OK response, confirming that our application is online and ready for tests to be executed.
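A minimal sketch of this readiness poll, using only the Python standard library (retry counts, delays and the injectable probe are assumptions for illustration, not the pipeline's actual task configuration):

```python
import time
import urllib.request
import urllib.error

def http_status(url: str) -> int:
    """Fetch a URL and return its HTTP status code (0 on connection error)."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code
    except urllib.error.URLError:
        return 0

def wait_until_online(url, retries=30, delay=10, probe=http_status):
    """Poll until the load balancer answers 200 OK; True once online."""
    for _ in range(retries):
        if probe(url) == 200:
            return True
        time.sleep(delay)
    return False
```

In the pipeline the equivalent check gates progression to the test stage: a failed poll fails the build before any tests run.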

Test stage - During the test stage we perform UI testing and performance testing. UI automation tests are executed using a JavaScript test framework called Cypress, with the results tracked via Slack. Once functional testing is complete, load tests are executed using a Python-based load test framework called Locust. If a load test fails, we can perform an optional, idempotent scale-out using a conditional approval task, and then revalidate the user performance load tests against the scaled web tier.

Release stage - The release stage validates all required steps for promotion of the new application instance to production or performs any required cleanup on failure. We are performing a blue/green failover using DNS to direct traffic to the new online Tito application instance.

Code Stream Integrations

The VMware Code Stream pipeline provides the glue to easily tie together the many disparate systems initiated from a Git trigger. This enables automation of the build, test and release stages to move from code to production.

Overview of pipeline integrations

VMware Cloud Assembly

Cloud Assembly provides the Infrastructure as Code and cloud-agnostic blueprint engine, along with cloud placement and governance. This enables simple consumption regardless of the underlying cloud provider; in the pipeline, we consume the Tito blueprint via the Blueprint service task.


Cypress

Cypress is a JavaScript-based UI automation test framework for web applications. Headless UI tests are executed via a VMware Code Stream SSH task on an instance we have preconfigured with Cypress installed. Alternatively, these tests could also be executed via a pipeline workspace container and the CI integration method in VMware Code Stream. Deployment and configuration of Cypress is a prerequisite to the pipeline execution and is automated via a VMware Cloud Assembly blueprint.
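The SSH task essentially runs the Cypress CLI headlessly against the freshly deployed load balancer address. A sketch of building that invocation (the spec path and reporter are assumptions for illustration; the CLI flags themselves are standard Cypress options):

```python
import shlex

def cypress_run_command(base_url: str, spec: str, reporter: str = "junit") -> str:
    """Build the headless Cypress CLI invocation executed over SSH.

    --config baseUrl=... points the tests at the new deployment's
    load balancer; the spec path here is illustrative.
    """
    return " ".join([
        "npx", "cypress", "run",
        "--spec", shlex.quote(spec),
        "--reporter", reporter,
        "--config", shlex.quote(f"baseUrl={base_url}"),
    ])
```

The SSH task's exit code then determines whether the functional gate passes.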


Locust

Locust is an open-source load testing tool based on Python for simulating real user load. Load tests are executed using the VMware Code Stream SSH agent on a test runner instance which has been pre-deployed and configured via a Cloud Assembly blueprint. Pipeline inputs drive load configuration, including max user load, and Locust test specifications are pulled from a Git snippet.
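Mapping those pipeline inputs onto a headless Locust run might look like the following sketch. The flags are Locust's standard CLI options; the test file name and run time are assumptions:

```python
def locust_command(host, max_users, spawn_rate, run_time, testfile="locustfile.py"):
    """Build the headless Locust invocation driven by pipeline inputs.

    Uses Locust's standard CLI flags (--headless, -u, -r, --run-time);
    the test file itself is pulled from a Git snippet beforehand.
    """
    return [
        "locust", "-f", testfile, "--headless",
        "-u", str(max_users),     # peak number of simulated users
        "-r", str(spawn_rate),    # users spawned per second
        "--run-time", run_time,   # e.g. "5m"
        "--host", host,
    ]
```

A re-run after the optional scale-out simply supplies the same inputs against the scaled web tier.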


Jenkins

As an alternative to executing our pipeline test stage natively from Code Stream, we can also call out to another CI/CD tool where pre-existing tests are available. This scenario includes integrating with pipeline tools such as Jenkins and Bamboo.

In this case, we can use the native VMware Code Stream Jenkins task. The Jenkins task allows us to call out to an existing Jenkins job, passing in input parameters. This scenario makes sense where you have existing IP in Jenkins that you want to consume as part of the Code Stream pipeline. The diagram below shows the same test execution using Cypress and Locust executing within Jenkins, but initiated from Code Stream using the Jenkins task.
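Under the hood, triggering a parameterised Jenkins job is a POST to Jenkins' remote API. A sketch of constructing that call (host, job name and parameter names are hypothetical; authentication is elided):

```python
import urllib.parse
import urllib.request

def jenkins_build_url(base_url, job_name, params):
    """Build the Jenkins remote-API URL that triggers a parameterised job."""
    query = urllib.parse.urlencode(params)
    job = urllib.parse.quote(job_name)
    return f"{base_url.rstrip('/')}/job/{job}/buildWithParameters?{query}"

# Hypothetical usage (credentials/token handling omitted):
# urllib.request.urlopen(urllib.request.Request(
#     jenkins_build_url("https://jenkins.example.test", "tito-tests",
#                       {"TARGET_URL": "http://lb.example.test"}),
#     method="POST"))
```

The native Jenkins task wraps this kind of call for you and surfaces the job result back into the pipeline.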

VMware Wavefront

VMware Wavefront provides real-time application metric monitoring and analytics. Wavefront application metric collection for the Tito application is via the Telegraf agent, configured using cloud-init within the Cloud Assembly blueprint. Blueprint inputs are passed into the Telegraf configuration as tags for metric filtering and association to application-specific dashboards.

We also use the real-time observability of Wavefront during user load test execution. Using event hooks in the Locust test framework, we can forward custom metrics that allow us to understand Tito application behaviour under user load. Metrics we can analyse in Wavefront include successful vs failed requests, along with response time. This also enables correlation with Apache and OS metrics to understand performance bottlenecks and the ability of the application to respond to demand.
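Wavefront ingests plain-text metric lines (metric, value, timestamp, source and point tags), typically via a Wavefront proxy. A sketch of what a Locust event hook could emit (the metric names, proxy host and tags are assumptions for illustration):

```python
import socket
import time

def wavefront_line(metric, value, source, tags=None):
    """Format a metric in Wavefront's line protocol:
    <metric> <value> <timestamp> source=<source> [key="val" ...]
    """
    point_tags = " ".join(f'{k}="{v}"' for k, v in (tags or {}).items())
    line = f"{metric} {value} {int(time.time())} source={source}"
    return f"{line} {point_tags}".rstrip()

def send_to_proxy(line, host="wavefront-proxy", port=2878):
    """Ship one metric line to a Wavefront proxy over TCP (host is illustrative)."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((line + "\n").encode())

# A Locust request event hook might then call, for example:
# send_to_proxy(wavefront_line("tito.request.latency", response_time,
#                              source="locust-runner", tags={"app": "tito"}))
```

The point tags are what let these custom metrics line up against the Telegraf-collected Apache and OS metrics on the same dashboards.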

Amazon Route53

AWS Route53 is utilised to provide DNS for the Tito application. As part of our Code Stream pipeline, we integrate with Route53 to update DNS and repoint to a new application instance using a blue-green failover strategy. The DNS updates are made to Route53 using the Code Stream REST task and Cloud Assembly Action Based Extensibility (ABX), which is conditionally executed based on successful pipeline execution.

The ABX action is executed in AWS Lambda, with the AWS SDK (Boto3) used to interact with Route53 in Python; a new copy of the site is then brought online within approximately 60 seconds of pipeline completion. Note that the same DNS integration could be implemented with Azure or any other cloud provider's DNS; in this case we chose Route53, but this could be expanded to other DNS providers.
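The core of such an action is a Route53 UPSERT that repoints the application record at the new (green) load balancer. A sketch under assumed names (record name, hosted zone ID and load balancer DNS are illustrative; the Boto3 call shown is the standard `change_resource_record_sets` API):

```python
def blue_green_change_batch(record_name, new_lb_dns, ttl=60):
    """Build the Route53 change batch that repoints the app record
    at the newly deployed (green) load balancer via an UPSERT."""
    return {
        "Comment": "Blue-green failover to new Tito instance",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "TTL": ttl,
                "ResourceRecords": [{"Value": new_lb_dns}],
            },
        }],
    }

# In the ABX Lambda handler (hosted zone ID is illustrative):
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z0000000000000",
#     ChangeBatch=blue_green_change_batch(
#         "tito.example.test.", "new-lb.us-west-2.elb.amazonaws.com"))
```

The short TTL is what keeps the failover window close to the quoted 60 seconds.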


Slack

Slack is used for pipeline progress tracking via colour-coded threaded messages to an SRE operations channel. To make the Slack integration reusable, we are using custom integration tasks. Custom integrations allow code functions with inputs and outputs to be executed in a standard runtime environment such as Node.js or Python.
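A sketch of the kind of payload such a custom integration could build for Slack's `chat.postMessage` API (channel name, colour values and message text are assumptions; passing the first message's `ts` back as `thread_ts` is how Slack threads subsequent updates):

```python
import urllib.request
import json

STATUS_COLOURS = {"started": "#439FE0", "passed": "#36a64f", "failed": "#d00000"}

def stage_message(channel, stage, status, thread_ts=None):
    """Build a colour-coded Slack chat.postMessage payload for a
    pipeline stage update (channel name is illustrative)."""
    payload = {
        "channel": channel,
        "attachments": [{
            "color": STATUS_COLOURS.get(status, "#cccccc"),
            "text": f"Stage *{stage}*: {status}",
        }],
    }
    if thread_ts:
        payload["thread_ts"] = thread_ts  # keeps stage updates in one thread
    return payload

# Posting (bot token elided):
# req = urllib.request.Request(
#     "https://slack.com/api/chat.postMessage",
#     data=json.dumps(stage_message("#sre-ops", "Test", "passed")).encode(),
#     headers={"Content-Type": "application/json",
#              "Authorization": "Bearer <token>"})
# urllib.request.urlopen(req)
```

Because the function only takes inputs and returns outputs, it maps naturally onto a reusable custom integration task.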


In this blog post, we looked at an example continuous deployment pipeline automating the end-to-end release process for the Tito application, including a blue-green failover to bring a new instance seamlessly online. VMware Code Stream allows for end-to-end automation of your application release process, regardless of the tools you need to integrate.