We at Administrate are transitioning our dev environment from running locally on our laptops to running on an EC2 instance in our development AWS environment.
Cloudboot is our attempt at a GitHub Codespaces-esque model. It uses docker-compose, VSCode, dev containers and AWS features to provide a much-improved developer experience for our engineers.
Running on AWS
The Administrate suite of apps is made up of numerous Docker services. Running them all locally with docker-compose can put a laptop under serious strain - many of our engineers have compared their laptop fans to an aircraft taking off! ✈️
By running on an EC2 instance we can easily scale up the instance as needed. No need to procure new expensive laptops - just pay for what you use in AWS.
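Resizing an instance is a standard stop, modify and start cycle. As a sketch, moving to a larger instance type looks something like this (the instance ID and type here are placeholders):

# Stop the instance, wait for it to settle, resize, then bring it back up.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --instance-type "{\"Value\": \"m5.2xlarge\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0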
Another massive benefit is being able to use AWS services, giving us as close to a production-like environment as you can have in a dev environment. We use S3 and Lambda for some of our services - better integration gives us better confidence that our changes will work before we merge them.
Doing this locally can be difficult, as you need to ensure each engineer has the correct IAM and role permissions. With Cloudboot, we can set up the IAM role for the EC2 instance with permission to access the services we need, and that role is used for all Cloudboot instances. This means you have out-of-the-box permissions as soon as your instance is created.
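Granting that access becomes a one-off piece of setup on the role rather than per-engineer work. As an illustration, attaching the AWS managed read-only S3 policy to a hypothetical role would look like this:

# cloudboot-instance is a placeholder role name, not necessarily the one we use.
aws iam attach-role-policy \
    --role-name cloudboot-instance \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess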
Managing Environment Variables using Parameter Store
One common issue we had was getting engineers to update their local environment variables whenever a new one was added or an existing one was modified. We had no easy way to roll these changes out, and engineers could miss the updates, causing issues further down the line.
In Cloudboot we have set up shared environment variables in Parameter Store and have configured the IAM role for our Cloudboot instances with read access.
When a new session is created on the Cloudboot instance, it queries Parameter Store, loads the keys, and stores them as environment variables. No more manual updates - it's all done automatically 🎉🎉🎉
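A minimal sketch of what such a session hook could look like, assuming the shared variables live under an illustrative /cloudboot/env path and the snippet is sourced from the shell profile:

# Sourced from the shell profile so the exports apply to the new session.
# /cloudboot/env is an illustrative parameter path, not necessarily ours.
while IFS=$'\t' read -r name value; do
  export "${name##*/}=${value}"
done < <(aws ssm get-parameters-by-path \
    --path /cloudboot/env \
    --with-decryption \
    --query "Parameters[*].[Name,Value]" \
    --output text)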
Using Profiles with docker-compose
Profiles were added in Docker Compose 1.28. They allow you to tag services with one or more profiles and then specify which profiles you want to run. Any common services can be left untagged - these will be started automatically.
In this example docker-compose file we have three services: database, graphql and lms. When we work with the tms profile we only require database and graphql, but the lms profile needs all three.
version: "3"
services:
database:
image: mysql:5.7.37
graphql:
image: 0123456789.dkr.ecr.eu-west-1.amazonaws.com/graphql:latest
profiles:
- tms
- lms
lms:
image: 0123456789.dkr.ecr.eu-west-1.amazonaws.com/lms:latest
profiles:
- lms
If we wanted to run only the tms services, we would run the following command:
docker-compose --profile=tms up -d
This would run the database and graphql services but not the lms service.
Continuing this example, if we wanted to start up the lms service too, we would run the following:
docker-compose --profile=tms --profile=lms up -d
This will detect that the only service not already running is lms and will start that container.
The great advantage of profiles for our engineers is that we only run the services needed for the area of the system we are working on, and we can start and stop multiple services with a single command.
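Stopping works the same way - for example, taking the whole stack from the previous commands back down:

docker-compose --profile=tms --profile=lms down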
Running docker-compose using latest images
We use AWS ECR to store the Docker images built as part of our CI/CD process. We only tag trunk builds as latest.
Since we are running on an EC2 instance, we can pull latest images from our ECR repositories by setting up the IAM role for the Cloudboot instances with permission to pull them.
We also configured our Cloudboot instances to run docker login when a new session is created, so no action is required on the engineer's part to pull the latest image for any of our services.
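Under the hood, that login step amounts to something like the following, using the account ID and region from the compose example above:

# Authenticate the Docker client against the ECR registry.
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin 0123456789.dkr.ecr.eu-west-1.amazonaws.com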
This would be a bit trickier to do locally: we would need to ensure each engineer had the correct credentials and IAM role, and they would have to remember to run docker login each time.
Additionally, the engineer would need a good internet connection to pull the required Docker images - it is much faster on the EC2 instance, as everything stays within AWS.
Using docker-compose overrides for dev containers
With docker-compose it’s possible to use multiple docker-compose files which can override services.
Our main compose file, docker-compose.prod.yml, uses the latest Docker image for all our services. We can also have compose files for dev containers. This is an example of a dev container compose file for our graphql service:
version: "3"
services:
graphql:
image: graphql:dev
build: ./cloud-boot/graphql
working_dir: /project
volumes:
- ./projects/neoteric:/project:delegated
- graphql_venv:/project/.venv
- graphql_vscode_extensions:/root/.vscode-server/extensions
- graphql_vscode_insider_extensions:/root/.vscode-server-insiders/extensions
- graphql_histdata:/root/.histdata
entrypoint: /usr/bin/make -f graphql_app/Makefile.dev.mk
command: prepare-and-wait
environment:
- HISTFILE=/root/.histdata/.zsh_history
- TEST_DATABASE_URI=mysql://user:password@database:3306/ci_test_common
- FLASK_ENV=development
volumes:
graphql_venv:
graphql_vscode_extensions:
graphql_vscode_insider_extensions:
graphql_histdata:
Here you can see it uses a local Dockerfile instead of the latest trunk image. We use volumes for the code, the container setup and the command-line history for usability.
In the following example we specify two compose files using the -f parameter: docker-compose.prod.yml is our main compose file, and we override the graphql service with the dev compose file.
docker-compose -f docker-compose.prod.yml -f cloudboot/graphql/docker-compose.dev.yml --profile=tms up -d
The graphql container started by this command will therefore be the one specified in the dev compose file.
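To check exactly what the merged configuration looks like before starting anything, docker-compose can print it:

docker-compose -f docker-compose.prod.yml -f cloudboot/graphql/docker-compose.dev.yml config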
Now that we have a dev container running, we can connect to it using the Container view within VSCode. For more information on dev containers, see Daniel's previous post on our Engineering blog.
Enabling a fully integrated FT environment
Because we are running our dev environment on AWS, our app is no longer only available on the engineer's own machine - it is publicly accessible. This means we can easily share a link with our colleagues for testing and sanity checking before merging.
We have taken this one step further and added a script that can create a fully integrated FT environment that can test the changes made for a Jira ticket.
Our script connects to the GitHub GraphQL API and works out which Docker images have been built for all pull requests linked to the Jira ticket. It then creates a special compose file for FT that swaps out latest for the relevant image, and runs docker-compose with all profiles specified so that every service starts.
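The generated file is just another override layer. A hypothetical one for a ticket that only touches the graphql service might look like this (the image tag is invented for illustration):

version: "3"
services:
  graphql:
    # pr-456 stands in for whatever tag our CI gave the pull request build.
    image: 0123456789.dkr.ecr.eu-west-1.amazonaws.com/graphql:pr-456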
This allows us to test our changes alongside other apps which gives us more confidence that our changes will be successful before we merge them.
What’s next?
We are #alwaysImproving here at Administrate. Here are some of the features that will be coming soon:
- Using dotfiles for our developer environment
- Using dotfiles for our dev containers
- Tooling for provisioning on-demand FT instances
Expect to see future blog posts on how we are trying to make our developer experience the best for our engineers.
If you like the sound of this then see our current job posts here - come join us!