
Building a CI/CD environment with Bitbucket pipelines && Docker && AWS ECS for an Angular app

A good developer workflow gives your team a huge advantage over competitors: it lets your developers focus on creating new features and tests. If your team has a bad workflow and doesn’t have automated tests and automated deployment, developers will have to wait for the person with the SSH key to the Linux box to deploy the code, only to find out after trying it out that it doesn’t work properly because there aren’t sufficient tests or someone forgot to run them. When this happens every day, it drives the best developers away and reduces their output significantly.

Tests and deployment are automated

Setting up continuous integration

The core of continuous integration and continuous delivery is automation. Software development is messy and mistakes happen; for CI to be effective, you have to have tests which protect your developers from breaking functionality. Most developers are afraid of change, and a good CI setup with tests solves this problem. Your tests are going to run when you create a pull request to the master / production branch (or whenever you push code).

So first let’s look at a very simple workflow; this is very common for new projects where there aren’t a lot of developers. In this workflow there are no branches yet, all code gets pushed to master, and after a green test run it gets deployed to the int or prod server automatically.

 

A simple workflow for small teams / new projects.

Let’s create the CI part which runs the tests after each commit to master. We create a new project on Bitbucket, go to the Pipelines section and click on JavaScript; it will create a sample bitbucket-pipelines.yml file.

Sample bitbucket-pipelines.yml

After enabling the pipeline we pull our repo, and we can edit the file describing our pipeline. I use Karma for running tests and Chrome as an environment, so I looked for a Docker image which has Node and Chrome. The image I chose to go with is this.

With this bitbucket-pipelines.yml, we run the tests.

#The Docker image that we are using
image: weboaks/node-karma-protractor-chrome

pipelines:
  default:
    - step:
        caches:
          - node
        script:
          #First step, we get the dependencies.
          - npm install
          #We install karma so it's accessible from the CLI.
          - npm install karma -g
          #We run the karma tests.
          - karma start --single-run --browsers ChromeHeadless karma.conf.js

After uploading this file, it will run your tests on a remote machine after every commit and will show a green or red light beside your commit, depending on whether the tests have passed.

 

Setting up continuous delivery

The second part is the CD. There are multiple ways to deploy and scale an application in an automated way; in this tutorial we are going to go with Docker and AWS Elastic Container Service.

The thoughts behind the decision: there is a big question of whether we should run our own orchestration layer (Mesos/Kubernetes/Docker Swarm) or use a ready-made solution. If you decide to go with your own orchestration layer you’ll have more flexibility and it will probably be cheaper, but it’s a lot more responsibility and many more things can go wrong than if you just go with a cloud provider’s solution. So in this tutorial we are going to use AWS Elastic Container Service (ECS later on); it will manage our Docker image and restart/update/scale it if needed. The reason it’s so great is that we don’t have to create much AWS-specific code, so we can avoid vendor lock-in and migrate our app pretty fast to anywhere if we have to. Also, Docker grants us a high level of abstraction, which gives us robustness: our app will be less prone to errors, and if something bad happens we will know where to look. So as the first step I have set up a Dockerfile in the root of the app which looks like:

#We will use the node image from Docker Hub, we don't want to waste time setting up Node.
FROM node:9.0.0

#We create a folder for our app
RUN mkdir /app

#We copy the content of our whole project into the container's app folder
ADD . /app

#We go to the app dir (similar to "cd /app")
WORKDIR /app

#We install the npm dependencies of the project (based on package.json)
RUN npm install

#We compile an optimized version of our app. (AOT && Treeshaking)
RUN npm run buildprod

#We install http-server, which is a lightweight solution for serving the compiled files
RUN npm install -g http-server

#We expose port 80 (which is more like documentation)
EXPOSE 80

#We go to /app/dist where the compiled files are.
WORKDIR /app/dist

#When we run the Docker image it will spin up http-server on port 80
CMD ["http-server", "-p", "80"]

Now we can test our app locally. First, we have to build our image; let’s go to the project folder where the Dockerfile is.

docker build -t firebase-test-in .

With this command we run everything in the Dockerfile except for the last instruction (the CMD one). It takes a short while to build the image; you can see the output in the terminal. If everything is okay, you can list your images.

Andrass-Mac:Docker-stuff andras$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
firebase-test-in    latest              bfa1f44e3c1b        30 minutes ago      876MB
firebase-test       latest              d6a81f9720f1        44 hours ago        940MB
node                9.0.0               cbea0ebe4f3c        4 days ago          674MB
alpine              latest              37eec16f1872        11 days ago         3.97MB
hello-world         latest              05a3bd381fc2        7 weeks ago         1.84kB

As you can see, the image is there, so let’s run it.

Andrass-Mac:firebase-test janosandrasnemeth$ docker run -p 8080:80 firebase-test-in
Starting up http-server, serving ./
Available on:
  http://127.0.0.1: 80
  http://172.17.0.2: 80
Hit CTRL-C to stop the server

-p 8080:80 means we want to forward port 80 from the Docker container to port 8080 on our host machine. (The IP of the host machine depends on whether you’re using Mac/Linux/Windows.)

firebase-test-in

is the name of our image.

In another tab, let’s list the running containers to get some more info and check that everything is running fine.

Andrass-Mac:Docker-stuff janosandrasnemeth$ docker ps
CONTAINER ID        IMAGE               COMMAND                 CREATED             STATUS              PORTS                  NAMES
88b4589c426a        firebase-test-in    "http-server '-p 80'"   2 minutes ago       Up 2 minutes        0.0.0.0:8080->80/tcp   focused_newton

As we can see, we have a container running and port 80 is forwarded to 0.0.0.0:8080. So let’s check it out; we should see our running app on localhost:8080. The next step after successfully Dockerizing our application is to integrate Docker Hub, AWS ECS and Bitbucket Pipelines. Luckily there is official documentation for it. (If you get stuck somewhere, check it out.)

We are going to create a hook for Docker Hub so that whenever there is a new commit on Bitbucket, it can download the repo and build a new image based on the Dockerfile. When you log in to Docker Hub, under settings there is a linked accounts & services section where you can connect your Bitbucket account. After connecting them you have to set up build rules.

Currently in my setup, whenever there is a new commit or merge on the production branch, Docker Hub creates a new build with the tag latest. This is the image we are going to use on ECS. You can also pull your images to your local machine to test them. I have also modified the bitbucket-pipelines.yml so that whenever there is a change on the production branch, a script is called which will deploy our app to ECS.

options:
  max-time: 5 # 5 minutes in case something hangs
pipelines:
  branches:
    master:
      - step:
          image: weboaks/node-karma-protractor-chrome
          caches:
            - node
          script:
            - npm install
            - npm install karma -g
            - karma start --single-run --browsers ChromeHeadless karma.conf.js
    production:
      - step:
          image: python:3.5.1
          script:
            - pip install boto3==1.3.0
            - export TAG=`git describe --abbrev=0 --tags`
            # invoke the ecs_deploy python script
            # the first argument is a template for the task definition
            # the second argument is the docker image we want to deploy
            #   composed of our environment variables
            # the third argument is the number of tasks to be run on our cluster
            # the fourth argument is the minimum number of healthy containers
            #   that should be running on the cluster
            #   zero is used for the purposes of a demo running a cluster with
            #   one host
            #   in production this number should be greater than zero
            # the fifth argument is the maximum number of healthy containers
            #   that should be running on the cluster
            - python ecs_deploy.py task_definition.json $DOCKER_IMAGE:latest 1 0 200

I have also added a “task_definition.json” to the root of our project, which gets used by the pipeline.

  {
    "memory": 256,
    "portMappings": [
      {
        "hostPort": 80,
        "containerPort": 80,
        "protocol": "tcp"
      }
    ],
    "name": "firebase-angular",
    "image": "yodeah/firebase-test",
    "cpu": 10
  }

The next part is creating a cluster in ECS and deploying our image. You can start with this link; it will guide you through creating your first cluster. Use the username/image_name from Docker Hub for the image; your image on Docker Hub has to be public in this simple scenario.

It is advised to create a new user in AWS IAM for handling ECS related tasks. After creating a user with sufficient privileges add the following environment variables to your bitbucket project:

  • AWS_SECRET_ACCESS_KEY: Secret key for a user with the required permissions.
  • AWS_ACCESS_KEY_ID: Access key for a user with the required permissions.
  • AWS_DEFAULT_REGION: Region where the target AWS ECS cluster is. (you can find region codes here)
  • ECS_SERVICE_NAME: Name of the ECS Service to update.
  • ECS_CLUSTER_NAME: Name of the ECS Cluster the service should run on.
  • ECS_TASK_FAMILY_NAME: Family name for the Task Definition.
  • DOCKER_IMAGE: Location of the Docker Image to be run. The tag/version is passed in bitbucket-pipelines.yml. (username/image_name on dockerhub, same as used in the ECS cluster setup.)

So by now, our pipeline should be working. Whenever we merge master into the production branch, a new image will be built on Docker Hub and it will be deployed onto ECS. If you go to the cluster and click on the EC2 instance, you can see the public address and access your application. YEY!!!

Our final workflow

(Actually, right now Docker Hub seems to be a huge bottleneck; sometimes it hangs for an hour before building the image. In this case, you have to rerun the pipeline script on the production branch by hand.)

 

After creating a cool pipeline, it would also be advisable to set up a logging service for our pipeline and application, so we can monitor the application more easily. In the end, we have created a pipeline which saves a lot of time for developers and makes development much more convenient, and our product will probably contain fewer bugs.

5 Most important new features of ES6

Whether you’re working or preparing for an interview where you might get some tricky questions about JS, the following 5 features can prove to be useful. The ECMAScript 6 (ES6) final specification was released in July 2015; since then it has been supported in Node and newer browsers, and you can transpile the code to ES5 with Babel to stay compatible with older browsers. Let’s get to it.

1. Let and const, block-scoped variables.

On a high level, using const and let will result in a more robust codebase which is easier to understand, because your code will be more restrictive.

The const keyword prevents a binding from being reassigned: a primitive value can’t be changed, and an existing object can’t be overwritten by a new one (although the object’s contents can still be mutated). Using const over let/var has one huge advantage in my opinion:

It makes the intent of your code much clearer, leading to a quicker understanding of it whenever anyone takes a look at it, a readable clear code is nowadays one of the most important attributes of good code.
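A minimal sketch of what this means in practice (the names are just for illustration):

const limit = 10;
// limit = 20;          // TypeError: Assignment to constant variable.

const config = { retries: 3 };
config.retries = 5;     // allowed – const only protects the binding,
                        // the object's contents can still change
// config = {};         // TypeError: the binding cannot be re-pointed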

The let keyword is a nice improvement after var: var is function-scoped, while let is block-scoped. As you can see in the examples, a variable created with var in a loop can be seen outside of it, while the same cannot be said for let.

Example for var:

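A minimal sketch of the var behaviour (the variable names are mine):

for (var i = 0; i < 3; i++) {
    var message = "iteration " + i;
}

console.log(i);       // 3 – the counter is visible outside the loop
console.log(message); // "iteration 2" – and so is the body variable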

Example for let:

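And the same loop with let (again, just a sketch):

for (let i = 0; i < 3; i++) {
    let message = "iteration " + i;
}

// console.log(i);       // ReferenceError: i is not defined
// console.log(message); // ReferenceError: message is not defined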

2. Arrow functions

This shouldn’t be new to anyone who is using the language on a daily basis.

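A minimal sketch of the syntax (the function is my own illustration):

const double = function (x) { return x * 2; }; // classic function expression
const doubleArrow = (x) => x * 2;              // the same thing as an arrow function

console.log(double(4));      // 8
console.log(doubleArrow(4)); // 8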

Basically, it looks like syntactic sugar for a simpler function declaration, although arrow functions in ES6 have at least two limitations:

  • They don’t work with new.
  • ‘this’ doesn’t get bound, so you cannot use the instance variables.

So, for example, in the next scenario it wouldn’t work, because ‘this’ wouldn’t be bound to the object the function is called on. (You could make it work with a regular anonymous function, but it wouldn’t be as nice as this.)

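A minimal sketch of such a scenario (the counter object is my own illustration):

const counter = {
    count: 0,
    increment: function () {
        this.count++;          // regular function: 'this' is counter
    },
    incrementArrow: () => {
        this.count++;          // arrow function: 'this' comes from the
                               // surrounding scope, never from counter
    }
};

counter.increment();
console.log(counter.count);    // 1
// counter.incrementArrow();   // would not touch counter.count (and may
                               // throw in strict mode, where 'this' is undefined)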

3. Classes

So ES6 finally introduced classes to the language, so people can understand and write code with less invested time; finally you can write code which is somewhat similar to OO languages like Java, although JS has neither interfaces nor abstract classes. Classes are syntactic sugar over JavaScript’s hard-to-understand prototypal inheritance. The main difference between classes and constructor functions (which were used to create instances pre-ES6) is hoisting: you must declare your classes first and you can only use them after that. (In JavaScript, function and ‘var’ declarations get hoisted, which means you can use them in your code before you have declared them.)

Let’s see an example of a class with ES5:

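A minimal sketch of the ES5 way, a constructor function with a method on its prototype (the Person example is mine):

function Person(name) {
    this.name = name;
}

Person.prototype.greet = function () {
    return "Hello, I am " + this.name;
};

var alice = new Person("Alice");
console.log(alice.greet()); // "Hello, I am Alice"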

With ES6:

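And the same Person written as an ES6 class (again, a sketch):

class Person {
    constructor(name) {
        this.name = name;
    }

    greet() {
        return "Hello, I am " + this.name;
    }
}

const alice = new Person("Alice");
console.log(alice.greet()); // "Hello, I am Alice"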

The inheritance is where it gets tricky with the prototype syntax; as you can see, it’s pretty complicated with ES5:

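A sketch of what ES5 prototypal inheritance involves (the Person/Student names are mine):

function Person(name) {
    this.name = name;
}
Person.prototype.greet = function () {
    return "Hello, I am " + this.name;
};

function Student(name, school) {
    Person.call(this, name);                // run the "parent constructor"
    this.school = school;
}
Student.prototype = Object.create(Person.prototype); // inherit the methods
Student.prototype.constructor = Student;
Student.prototype.introduce = function () {
    return this.greet() + " and I study at " + this.school;
};

var bob = new Student("Bob", "BME");
console.log(bob.introduce()); // "Hello, I am Bob and I study at BME"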

The same thing with the new version is much easier to understand and takes less code:

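And the same inheritance with ES6 classes (a sketch):

class Person {
    constructor(name) {
        this.name = name;
    }
    greet() {
        return "Hello, I am " + this.name;
    }
}

class Student extends Person {
    constructor(name, school) {
        super(name);            // call the parent constructor
        this.school = school;
    }
    introduce() {
        return this.greet() + " and I study at " + this.school;
    }
}

console.log(new Student("Bob", "BME").introduce());
// "Hello, I am Bob and I study at BME"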

4. Spread operator

The spread operator is a really great addition to the language; it has multiple use cases:

  • as function parameters
  • copying arrays
  • copying objects

So let’s see the first use case, as function parameters: we want to pass multiple parameters to our function and we don’t know how many to expect. After the first one, the rest of the parameters end up in an array (strictly speaking, this is the rest-parameter syntax). This way we can create really versatile functions.

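A minimal sketch (the function and names are mine): everything after the first parameter is collected into an array.

function logOrder(customer, ...items) {
    console.log(customer + " ordered " + items.length + " items: " + items.join(", "));
}

logOrder("Anna", "phone", "charger", "case");
// "Anna ordered 3 items: phone, charger, case"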

The second use case is copying arrays, so let’s see an example for this. It’s pretty straightforward: we copy the first array into the middle of the second one.

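For example (a sketch):

const inner = [3, 4];
const combined = [1, 2, ...inner, 5]; // the first array is spread into the second

console.log(combined); // [ 1, 2, 3, 4, 5 ]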

Copying objects is pretty similar to arrays:

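A sketch; note that spread inside object literals was only standardized after ES6, so in a strictly ES6 environment Object.assign does the same job:

const defaults = { theme: "dark", fontSize: 14 };

const settings = { ...defaults, fontSize: 16 };                   // object spread
const settings2 = Object.assign({}, defaults, { fontSize: 16 });  // ES6 equivalent

console.log(settings);  // { theme: 'dark', fontSize: 16 }
console.log(settings2); // { theme: 'dark', fontSize: 16 }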

5. Default function parameters

Default parameters are a nice add-on for the language; they help in writing denser and easier to understand code, and in my opinion they are nicer than function overloading (what we have in Java and C++). In many cases we want to create a function where most of the time we pass only one parameter, but in some edge cases other parameters are necessary. Let’s see an example.

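A minimal sketch (the greet function is mine): the second parameter has a default, so most callers only pass the first one.

function greet(name, greeting = "Hello") {
    return greeting + ", " + name + "!";
}

console.log(greet("Anna"));       // "Hello, Anna!"
console.log(greet("Anna", "Hi")); // "Hi, Anna!"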

I hope these 5 features will help you prepare for your next interview and become a better developer. Be sure to experiment with these examples; it is the best way to make sure you won’t forget them.

A practical introduction into functional programming with JavaScript.

Many articles talk about advanced functional programming topics, but I want to show you simple and useful code that you can use in day-to-day developer life. I’ve chosen JavaScript because you can run it almost everywhere and it’s well suited for functional programming. Two of the reasons why it’s so great are that functions are first-class citizens and you can create higher-order functions with it.

Update: You can also read this post on DZone.

Higher order functions are functions that can take a function as an argument or return a function as a result. Such as createAdd below:

function createAdd(a){
    return function(x){
        return a + x;
    }
}

add3 = createAdd(3);

console.log( add3(5) );
//output is 8

Note how you can store a function in a variable and call it later. Functions in variables are treated as just any other variable.

typeof add3
"function"

But why is it great that you can return a function as a result? Because you are able to return behaviour, not just values, and because of this the level of abstraction will be higher and the code will be more readable and elegant.

Let’s take an example: you want to print the double of every number in an array, something that every one of us does once in a while. You would probably do something similar:

nums = [1, 2, 3, 4, 5];

for (x = 0; x < nums.length; x++) {
    console.log(nums[x] * 2);
}

You can see a common pattern here: looping through an array is a behaviour that you can extract into a function, so you don’t have to write it again.

How to do it?

nums = [1, 2, 3, 4, 5];

printDouble = function (k) {
    console.log(k * 2);
}

function forEach(array, functionToRunOnEveryElement) {
    for (x = 0; x < array.length; x++) {
        functionToRunOnEveryElement(array[x]);
    }
}

forEach(nums, printDouble);

// output: 
// 2 
// 4 
// 6 
// 8 
// 10

The forEach function gets an array of numbers and a function printDouble, and calls printDouble on every element of the array. Actually, this is a very useful function and it’s implemented on the Array prototype, so you don’t have to write the previous code in every codebase that you work on.

(forEach is a higher-order function too because it takes a function as a parameter.)

nums = [1, 2, 3, 4, 5];

printDouble = function(k){
    console.log(k * 2);
};

nums.forEach(printDouble);

// output:
// 2
// 4
// 6
// 8
// 10

Welcome to a life without having to write loops again to do something with an array.

You can also write the previous code this way:

[1, 2, 3, 4, 5].forEach((x) => console.log(x * 2))

JavaScript has abstractions for similar common patterns, such as:

  • Reduce can be used to produce a single output value from an array.
nums = [1, 2, 3, 4, 5];

add = function (a, b) {
    return a + b;
}

nums.reduce(add, 0);

//returns 15

What it does is:

0 + 1 = 1
1 + 2 = 3
3 + 3 = 6
6 + 4 = 10
10 + 5 = 15
  • Map is similar to forEach, but it returns a new array built from the values the given function returns for each element:
nums = [1, 2, 3, 4, 5];

function isEven(x) {
    if (x % 2 == 0) {
        return x + " is an even number";
    } else {
        return x + " is an odd number";
    }
}

nums.map(isEven)

// returns an array:
// [ '1 is an odd number',
// '2 is an even number',
// '3 is an odd number',
// '4 is an even number',
// '5 is an odd number' ]
  • filter is used for removing the elements that do not match a criterion:
nums = [1, 2, 3, 4, 5];

isEven = function(x){
    return x % 2==0;
};

nums.filter(isEven);

//returns an array with even numbers [ 2, 4 ]

or with using the fat arrow operator:

[1,2,3,4,5].filter((x) => {return x%2==0});

A similarly common example for a web application:

function addAMonthOfSubscriptionToUser(username) {
    user = db.getUserByUsername(username);
    user = addAMonthOfSubscription(user);
    db.saveUser(user);
}

function addAYearOfSubscriptionToUser(username) {
    user = db.getUserByUsername(username);
    user = addAYearOfSubscription(user);
    db.saveUser(user);
}

function cancelSubscriptionForUser(username) {
    user = db.getUserByUsername(username);
    user = cancelSubscription(user);
    db.saveUser(user);
}

addAMonthOfSubscriptionToUser("Jay");

Don’t repeat yourself – as every good programmer knows

In this scenario we can see a pattern, but it cannot be abstracted as neatly with OOP structures, so let’s do our functional magic.

modifyUser = function(modification){
   return function(username) {
       user = db.getUserByUsername(username);
       user = modification(user);
       db.saveUser(user)
   }
}

addAMonthOfSubscriptionToUser = modifyUser(addAMonthOfSubscription);
addAYearOfSubscriptionToUser = modifyUser(addAYearOfSubscription);
cancelSubscriptionForUser = modifyUser(cancelSubscription);

addAYearOfSubscriptionToUser("Bob");

In the end, we have the same functions in a more elegant way.

I think in the future functional programming will creep into our everyday life and will be used alongside OOP, so we can have a more abstract and readable codebase.

The cost of all this nice and abstract code is efficiency and more expensive developers.

Current state of our home project

Quick overview

Last summer my cousin and I started to work on our idea to crawl the comments of the internet and harness the data in them. Our current goal is to help marketers get useful insights from our data about the effects of their campaigns, releases, and presence in the digital world.

For example, a Chinese brand releases new phones. How do they get info about their users’ feedback? Besides looking at the sales numbers, returned handsets, and perhaps emails from customers about features not working.

Our quest is to solve this problem by having a huge dataset of comments, and by analysing this we can give useful insight about:

  • How positively their brand is perceived.
  • What people think is positive/negative about certain products.
  • Yearly/monthly/weekly breakdown of the buzz around them on the internet, with channel distribution. (On which sites the comments appeared.)
  • Did they share their own ideas or someone else’s content about your product.
  • Conversational clouds: what did people mention in relation to your product (screen, charger, packaging).
  • Performance comparison with similar products. (Number of mentions, Samsung vs Xiaomi.)

Current state of the solution

Our application’s backend is based on ‘templates’/scripts which instruct the crawler service to get the comments of various forums, currently a few tech sites (Ars Technica, XDA Developers); we have 12M documents at the moment. Then the comments are analysed in microservices by their language, their sentiment, and the structure of the sentence. This info is saved into a database, then the data is indexed in an Elasticsearch cluster so we can quickly query it.

The frontend (you can access it here) currently enables you to search the data and create some really basic pie charts based on keywords.

The architecture without the language detection and the sentiment analysis services looks like this:

The tech stack is:

  • Spring boot for microservices
  • Angular 1 + Bootstrap for the frontend
  • Mongo for storage
  • Elastic for indexing & querying

MVP

Things to add so we can demo it:

  • Create views which show where keywords were mentioned (time/site/language), the sentiment of keywords, and conversational clouds. Basically any statistics that deliver value with as little development time as possible.
  • Add user/group/organisation management so only people with certain rights can access the data and generate reports. (We planned to use Stormpath but their future is kinda shady.)

Our current goal is to deliver the MVP in 2-3 months and get feedback from customers as fast as possible so we can make sure we are heading in the right direction.

 

Thanks for reading; we appreciate any feedback/ideas in the comments or in an email.

Best lightweight GIT service for your Raspberry/SOHO server

Not a paid ad.

I finally found what I’ve been looking for: GITEA. It’s love at first sight, not only because of the beautiful UI, but because it doesn’t use SO MUCH GODDAMN MEMORY, which is expensive in the cloud for such a mundane thing as version management. For the past years I have had GitLab/Bitbucket/Stash servers for my personal projects, but they used too much memory, considering that the server was used by 2 people tops (GitLab recommends 4 gigs and runs with 1 GB RAM + 3 GB swap). The problem with them is that they are heavyweight applications designed for massive scalability; Gitea, on the other hand, is a lightweight Go service forked from Gogs, consuming ~30 MB of memory with light usage. It is also blazing fast, has great ticket management and a built-in wiki. It’s almost as good as GitHub.

You can get it here: https://gitea.io/

It’s also fairly quick to set up:

  • clone it
  • create a user for it
  • register it as a service (it’s only a single binary so not necessary)
  • edit the config file

GOD BLESS the great guys who designed Golang, so people can write efficient applications and don’t have to bother with C and memory allocation anymore.

How to offload cpu heavy code to lambda with Nodejs


The title says it all, but basically: if you have a small service (in this case written with Node.js) running on a server with limited capabilities, you might run into problems if you want to do processor/memory-heavy computations. A solution for this can be to offload the work to Lambda, which scales automatically and where you only pay for computation; so if you only rarely need the computation, you can do this instead of renting a bigger server and save heaps of money.

Talk is cheap, let’s build it.

First of all, if you don’t already have them, you need the following:

Amazon AWS account

We will create the lambda function with this, and our extension of the application will run here.

Nodejs

Node.js is an open-source, cross-platform JavaScript runtime environment for developing a diverse variety of tools and applications.

You can get it here: download nodejs

NPM

It’s for managing JavaScript packages; if you install Node.js, you will have it.

 

You have to log in to AWS and create a new lambda function:


You can choose the “Blank template” blueprint.

Name your function, give it a description, and choose the Node.js environment:


Insert the following code: it gives back a JSON response when it receives a request, and if the request JSON has an attribute named “data_to_transform” it gives back its value squared.

compute_heavy_task = function (data) {
    return data * data;
};

exports.handler = (event, context, callback) => {
    context.succeed({
        "lambda_function_name": "convert_stuff",
        "original_data": event.data_to_transform,
        "transformed_data": compute_heavy_task(event.data_to_transform)
    });
};

Create a role from a policy template and name it:


Review your settings:


Now you can test it with the test settings, and you’re supposed to get back a JSON response like:

{ 
 lambda_function_name: 'convert_stuff',
 original_data: 500,
 transformed_data: 250000 
}

if your test request looks like:

{
        "data_to_transform":500
}

So in the second part we are going to create a Node.js app which has AWS auth credentials and can call the previously created function.

But first we have to create the credentials. Go to AWS console / IAM -> Users -> Add user.


Save the credentials as a CSV file. Then clone the following git repository, run npm install, and run it with Node.js.

git clone https://github.com/yodeah/offload_to_lambda
cd offload_to_lambda
npm install

!! edit the conf file with the necessary data !!

node app.js
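For reference, here is a minimal sketch of what the call inside such an app roughly looks like with the AWS SDK for Node.js (v2). The region is an assumption and would normally come from your conf file; the function name and payload match the Lambda created above:

// calling the "convert_stuff" Lambda from Node.js (aws-sdk v2)
const AWS = require('aws-sdk');

AWS.config.update({ region: 'eu-west-1' }); // assumption – use the region of your function

const lambda = new AWS.Lambda();

const params = {
    FunctionName: 'convert_stuff',
    Payload: JSON.stringify({ data_to_transform: 500 })
};

lambda.invoke(params, (err, data) => {
    if (err) {
        console.error(err);
    } else {
        console.log(JSON.parse(data.Payload.toString()));
        // { lambda_function_name: 'convert_stuff',
        //   original_data: 500,
        //   transformed_data: 250000 }
    }
});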

 

Nginx https load balancer with lets encrypt cert (On AWS)


Part 1: Create a working http load balancer

I’ve decided to use Amazon for hosting my Ubuntu 14.04 (Trusty) server on a t2.nano (still overkill; anything with 256 MB of RAM is more than sufficient).

  1. You have to create a security group which opens port 22 for SSH, 80 for HTTP, and 443 for HTTPS.
  2. SSH into your server.
  3. Fetch the updates and install nginx: apt-get update & upgrade, then sudo apt-get install nginx
  4. Back up the config file; it is always considered good practice. cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup
  5. Modify the http part of the config file to the code below; the 2 servers are the ones you’re sending the load to (the connection to those is plain HTTP). sudo nano /etc/nginx/nginx.conf
    http {
        upstream myapp1 {
            server google.com;
            server yahoo.com;
        }
    
        server {
            listen 80;
    
            location / {
                proxy_pass http://myapp1;
            }
        }  
    }
  6. Restart nginx (sudo service nginx restart). If everything is alright, you should have a working load balancer which responds with either something from Google or Yahoo. Congrats.

Part 2: Generating a cert & assigning it to nginx.

  1. Install the certbot script which helps you to get the cert quickly
    wget https://dl.eff.org/certbot-auto
    chmod a+x certbot-auto
    ./certbot-auto
    
  2. The prompt will guide you through it, though it is recommended to turn off nginx while you do this, so you don’t have anything listening on ports 80 and 443. After you have finished, you’ll have your cert files in /etc/letsencrypt/live/yoururl
  3. Modify the nginx conf to use the cert files.
    http {
        upstream myapp1 {
            server google.com;
        }
    
            server{
                    listen 443 ssl;
                    server_name beta.daggersandsorcery.com www.daggersandsorcery$
    
                    ssl on;
                    ssl_certificate /etc/letsencrypt/live/beta.daggersandsorcery$
                    ssl_certificate_key /etc/letsencrypt/live/beta.daggersandsor$
    
    
                    location / {
                        proxy_pass http://myapp1;
                    }
            }
    }
    
  4. Restart nginx. Well done, it should be working for you.

Update:

The following script does the same thing that I’ve shown you in this article, with the addition of an automatic renewal process via crontab.
