Building a CI/CD environment with Bitbucket pipelines && Docker && AWS ECS for an Angular app

A good developer workflow gives your team a huge advantage over competitors: it lets your developers focus on creating new features and tests. If your team has a bad workflow without automated tests and automated deployment, developers have to wait for the guy with the SSH key to the Linux box to deploy the code, only to find out after trying it that it doesn’t work properly, because there aren’t sufficient tests or someone forgot to run them. When this happens every day it drives the best developers away and reduces their output significantly.

Tests and deployment are automated

Setting up continuous integration

The core of continuous integration and continuous delivery is automation. Software development is messy and mistakes happen, so for CI to be effective you need tests that protect your developers from breaking functionality. Most developers are afraid of change; a good CI setup with tests solves this problem. Your tests are going to run when you create a pull request to the master / production branch (or whenever you push code).

So first let’s look at a very simple workflow, which is very common for new projects without many developers. In this workflow there are no branches yet: all code gets pushed to master, and after a green test run it gets deployed to the int or prod server automatically.


A simple workflow, for small teams / new projects.

Let’s create the CI part, which runs the tests after each commit to master. We create a new project on Bitbucket, go to the Pipelines section and click on JavaScript, which creates a sample bitbucket-pipelines.yml file.

Sample bitbucket-pipelines.yml

After enabling the pipeline we pull our repo, and we can edit the file describing our pipeline. I use Karma for running tests and Chrome as the browser environment, so I looked for a Docker image that has both Node and Chrome. The image I chose to go with is weboaks/node-karma-protractor-chrome.

With this bitbucket-pipelines.yml, we run the tests.

#The docker image that we are using
image: weboaks/node-karma-protractor-chrome

pipelines:
  default:
    - step:
        caches:
          - node
        script:
          #First step, we get the dependencies.
          - npm install
          #We install karma globally so it's accessible from the cli.
          - npm install karma -g
          #We run the karma tests.
          - karma start --single-run --browsers ChromeHeadless karma.conf.js

After uploading this file, your tests will run on a remote machine after every commit, and a green or red light will show up beside your commit depending on whether the tests passed.
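
If you want to reproduce what the pipeline does on your own machine, you can run the exact same image locally. A rough sketch, assuming Docker is installed and you run it from the project root:

# Run the same commands as the pipeline inside the CI image,
# mounting the current project into the container
docker run --rm -v "$PWD":/app -w /app weboaks/node-karma-protractor-chrome \
  sh -c "npm install && npm install -g karma && karma start --single-run --browsers ChromeHeadless karma.conf.js"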


Setting up continuous delivery

The second part is the CD. There are multiple ways to deploy and scale an application in an automated way; in this tutorial we are going to go with Docker and AWS Elastic Container Service.

The thoughts behind the decision: there is a big question whether we should run our own orchestration layer (Mesos/Kubernetes/Docker Swarm) or a ready-made solution. If you decide to go with your own orchestration layer you’ll have more flexibility and it will probably be cheaper, but it is a lot more responsibility and many more things can go wrong than if you just go with a cloud provider’s solution. So in this tutorial we are going to use AWS Elastic Container Service (ECS from now on); it will manage our Docker image and restart/update/scale it as needed. The reason it is so great is that we don’t have to write much AWS-specific code, so we can avoid vendor lock-in and migrate our app pretty fast to anywhere if we have to. Docker also grants us a high level of abstraction, which gives us robustness: our app will be less prone to errors, and if something bad happens we will know where to look. So as the first step I have set up a Dockerfile in the root of the app which looks like:

#We will use the node image from dockerhub, we don't want to waste time setting up node.
FROM node:9.0.0

#We create a folder for our app
RUN mkdir /app

#We copy the content of our whole project into the container's app folder
ADD . /app

#We go to the app dir (similar to "cd /app")
WORKDIR /app

#We install the npm dependencies of the project (based on package.json)
RUN npm install

#We compile an optimized version of our app. (AOT && Treeshaking)
RUN npm run buildprod

#We install http-server, which is a lightweight solution for serving the compiled files
RUN npm install -g http-server

#We expose port 80 (which is more like documentation)
EXPOSE 80

#We go to /app/dist where the compiled files are.
WORKDIR /app/dist

#When we run the docker image it will spin up the http-server on port 80
CMD ["http-server", "-p 80"]
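
One assumption worth calling out: buildprod is a custom script that has to exist in your package.json. In this setup it is expected to wrap the Angular CLI production build (that is what the "AOT && Treeshaking" comment refers to), and its output has to land in dist/, which is why the Dockerfile switches to /app/dist before starting http-server. Roughly:

# Assumed equivalent of "npm run buildprod": an Angular CLI production build
# (ahead-of-time compilation + tree shaking), emitting the compiled app into dist/
ng build --prod --aot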

Now we can test our app locally. First we have to build our image; let’s go to the project folder where the Dockerfile is.

docker build -t firebase-test-in .

With this command we execute everything in the Dockerfile except for the last instruction (the CMD one). It takes a short while to build the image; you can follow the output in the terminal. If everything is okay, you can list your images.

Andrass-Mac:Docker-stuff andras$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
firebase-test-in    latest              bfa1f44e3c1b        30 minutes ago      876MB
firebase-test       latest              d6a81f9720f1        44 hours ago        940MB
node                9.0.0               cbea0ebe4f3c        4 days ago          674MB
alpine              latest              37eec16f1872        11 days ago         3.97MB
hello-world         latest              05a3bd381fc2        7 weeks ago         1.84kB

As you can see the image is there, so let’s run it.

Andrass-Mac:firebase-test janosandrasnemeth$ docker run -p 8080:80 firebase-test-in
Starting up http-server, serving ./
Available on:
  http://127.0.0.1:80
Hit CTRL-C to stop the server

-p 8080:80 means we want to forward port 80 of the Docker container to port 8080 on our host machine. (The exact IP of the host machine depends on whether you’re using Mac/Linux/Windows.)

firebase-test-in is the name of our image.
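
A quick way to confirm the forwarding works is to hit the host port from another terminal tab (assuming curl is available; a browser works just as well):

# Port 80 inside the container should answer on the host's port 8080
curl -I http://localhost:8080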

In another tab let’s list the running containers, to check that everything is running fine.

Andrass-Mac:Docker-stuff janosandrasnemeth$ docker ps
CONTAINER ID        IMAGE               COMMAND                 CREATED             STATUS              PORTS                  NAMES
88b4589c426a        firebase-test-in    "http-server '-p 80'"   2 minutes ago       Up 2 minutes>80/tcp   focused_newton

As we can see, we have a container running and port 80 of the container is forwarded to So let’s check it out: we should see our running app on localhost:8080. The next step after successfully Dockerizing our application is to integrate Docker Hub, AWS ECS and Bitbucket Pipelines. Luckily there is official documentation for it. (If you get stuck somewhere, check it out.)

We are going to create a hook for Docker Hub so that whenever there is a new commit on Bitbucket it can download the repo and build a new image based on the Dockerfile. When you log in to Docker Hub, under settings there is a Linked Accounts & Services section where you can connect your Bitbucket account. After connecting them you have to set up build rules.
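
Once the automated build has run at least once, you can verify the hook by pulling the image Docker Hub built and running it locally. A sketch, using yodeah/firebase-test, the repository name that also appears later in task_definition.json:

# Pull the image built by Docker Hub and run it the same way as the local build
docker pull yodeah/firebase-test:latest
docker run --rm -p 8080:80 yodeah/firebase-test:latest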

Currently in my setup, whenever there is a new commit or merge on the production branch, Docker Hub creates a new build with the tag latest. This is the image we are going to use on ECS. You can also pull your images to your local machine to test them, as shown above. I have also modified the bitbucket-pipelines.yml so that whenever there is a change on the production branch a script is called which deploys our app to ECS.

options:
  max-time: 5 # 5 minutes in case something hangs up

pipelines:
  branches:
    production:
      - step:
          image: weboaks/node-karma-protractor-chrome
          caches:
            - node
          script:
            - npm install
            - npm install karma -g
            - karma start --single-run --browsers ChromeHeadless karma.conf.js
      - step:
          image: python:3.5.1
          script:
            - pip install boto3==1.3.0
            - export TAG=`git describe --abbrev=0 --tags`
            # invoke the ecs_deploy python script
            # the first argument is a template for the task definition
            # the second argument is the docker image we want to deploy,
            #   composed of our environment variables
            # the third argument is the number of tasks to be run on our cluster
            # the fourth argument is the minimum number of healthy containers
            #   that should be running on the cluster
            #   zero is used for the purposes of a demo running a cluster with
            #   one host
            #   in production this number should be greater than zero
            # the fifth argument is the maximum number of healthy containers
            #   that should be running on the cluster
            - python ecs_deploy.py task_definition.json $DOCKER_IMAGE:latest 1 0 200

I have also added a “task_definition.json” to the root of our project, which gets used by the pipeline.

    "memory": 256,
    "portMappings": [
        "hostPort": 80,
        "containerPort": 80,
        "protocol": "tcp"
    "name": "firebase-angular",
    "image": "yodeah/firebase-test",
    "cpu": 10

The next part is creating a cluster in ECS and deploying our image. You can start with this link; it will guide you through creating your first cluster. Use the username/image_name from Docker Hub for the image; your image on Docker Hub has to be public in this simple scenario.
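
If you prefer the command line over the console wizard, the cluster itself can also be created and checked with the AWS CLI. A minimal sketch (the cluster name below is just a placeholder, use whatever you picked in the setup):

# Create an (empty) ECS cluster and verify that it exists
aws ecs create-cluster --cluster-name firebase-angular-cluster
aws ecs describe-clusters --clusters firebase-angular-cluster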

It is advised to create a new user in AWS IAM for handling ECS-related tasks. After creating a user with sufficient privileges, add the following environment variables to your Bitbucket project:

  • AWS_SECRET_ACCESS_KEY: Secret key for a user with the required permissions.
  • AWS_ACCESS_KEY_ID: Access key for a user with the required permissions.
  • AWS_DEFAULT_REGION: Region where the target AWS ECS cluster is. (you can find region codes here)
  • ECS_SERVICE_NAME: Name of the ECS Service to update.
  • ECS_CLUSTER_NAME: Name of the ECS Cluster the service should run on.
  • ECS_TASK_FAMILY_NAME: Family name for the Task Definition.
  • DOCKER_IMAGE: Location of the Docker Image to be run. The tag/version is passed in bitbucket-pipelines.yml. (username/image_name on dockerhub, same as used in the ECS cluster setup.)
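
With the IAM credentials configured, the same variables can also be used from your own machine to check what a deployment did. A small sketch using the AWS CLI:

# Inspect the service and the tasks running on the cluster after a deployment
aws ecs describe-services --cluster "$ECS_CLUSTER_NAME" --services "$ECS_SERVICE_NAME"
aws ecs list-tasks --cluster "$ECS_CLUSTER_NAME"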

So by now our pipeline should be working. Whenever we merge master into the production branch, a new image is built on Docker Hub and deployed onto ECS. If you go to the cluster and click on the EC2 instance you can see its public address and access your application. YEY!!!

Our final workflow

(Actually, right now Docker Hub seems to be a huge bottleneck; sometimes it hangs for an hour before building the image. In this case you have to re-run the pipeline on the production branch by hand.)


After creating a cool pipeline it would also be advisable to set up a logging service for our pipeline and application, so we could monitor the application more easily. In the end, we have created a pipeline which saves a lot of time for developers and makes development much more convenient, and our product will probably contain fewer bugs.
