
Milestone 3

Roja Raman edited this page Apr 3, 2019 · 9 revisions

Revised architecture diagram and user stories


Tasks accomplished in this milestone

  • Containerized each service using Docker
  • Set up Kubernetes master-slave integration
  • Set up CI/CD using Jenkins to build, test, and deploy containers on the Kubernetes master
  • Performed load testing on each microservice using JMeter
  • Modernized the look and feel of the website

Improvements from Milestone 2

  • We have utilized Kubernetes and Docker to provide fault tolerance, high availability, and scaling for the individual services, and hence for the entire project.
  • A single PostgreSQL instance now stores user profile information and search details (for analytics), replacing the two separate H2 database instances previously used by the individual microservices (red and blue).
  • The purple microservice has improved functionality and can now retrieve detailed information about a provider's practice(s) and location coordinates.
  • We have also integrated maps for practice locations in the front-end.
  • The project no longer needs to be installed manually on a single server and can be accessed from anywhere.
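The H2-to-PostgreSQL consolidation mostly amounts to swapping each service's datasource settings. A minimal sketch, assuming the microservices are Spring Boot applications (the host, database name, and credentials below are placeholders, not our real values):

```properties
# application.properties - replace the per-service H2 settings
# with the shared PostgreSQL datasource
spring.datasource.url=jdbc:postgresql://<postgres-host>:5432/<database>
spring.datasource.username=<user>
spring.datasource.password=<password>
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
```

The PostgreSQL JDBC driver also has to replace the H2 dependency in each service's build file.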

Docker

The three microservices (profile management, search analytics, and API gateway) and the front-end have all been dockerized.
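A minimal Dockerfile sketch for one of the services, assuming a Spring Boot jar (the base image and jar name are illustrative; the actual Dockerfiles live in each service's branch):

```dockerfile
# Hypothetical Dockerfile for a microservice
FROM openjdk:8-jdk-alpine
# Copy the built jar into the image (name is a placeholder)
COPY target/service.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```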

DockerHub

Docker Hub credentials: sanjeevni/sanjeevni@1234

Whenever a change is pushed to a development branch, Jenkins builds the Docker image for the corresponding microservice and uploads it to Docker Hub.
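The build-and-push step boils down to commands like the following (a sketch; the image tag is illustrative, and the Jenkins job is assumed to be already logged in to Docker Hub):

```shell
# Build the image for the service whose branch received the push
docker build -t sanjeevni/red:latest .
# Upload it to Docker Hub so the Kubernetes cluster can pull it
docker push sanjeevni/red:latest
```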

Kubernetes

The Kubernetes cluster has one master and three worker nodes. After a Docker image is built, the master node follows the configuration in the microservice_deployment.yml file (one per microservice and for the front-end) to deploy 5 replicas of each, and uses the service_discovery.yml file to expose the service endpoints on the three worker nodes. The Kubernetes cluster pulls the Docker images from Docker Hub.
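A sketch of what a microservice_deployment.yml and service_discovery.yml pair looks like for one service (names, image tag, and ports are illustrative, not our exact files):

```yaml
# microservice_deployment.yml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red-deployment
spec:
  replicas: 5                 # five replicas per service, as described above
  selector:
    matchLabels:
      app: red
  template:
    metadata:
      labels:
        app: red
    spec:
      containers:
      - name: red
        image: sanjeevni/red:latest   # pulled from Docker Hub
        ports:
        - containerPort: 8080
---
# service_discovery.yml (sketch) - exposes the pods on a NodePort
apiVersion: v1
kind: Service
metadata:
  name: red-service
spec:
  type: NodePort
  selector:
    app: red
  ports:
  - port: 8080
    nodePort: 30007
```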

To see the details of the Kubernetes configuration, visit the URL below:

Kubernetes Dashboard

To log in, select the token option and paste the following:

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRhc2hib2FyZC10b2tlbi1rNWs3ciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkYXNoYm9hcmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMDAzMmQwNy00ZjkwLTExZTktOTQwZC1mYTE2M2UzZDgyYjciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkYXNoYm9hcmQifQ.jxfW9Ck6HxJhxteDCAgr8f36oC110P6uPIlZ4H1pMOS_B-uznC2tW20psRngMUZfHBmO2307KSmKJVnOvPiaMZB7KYlKqVHqvle-grXq-kCqpiJzGKrMENJa02HDhk4eBmzPPfirCXWgLTC-uVP0axPPRv7K0djxfyCHxOfVE5mf3nUoDJcObsy2hQxjpsvfEeLjkCJmLU4B7Doprcv4gz76eUyChztCuKOAsohe-KTdoj5NCqc_7n-QdqQ26OkeJr5_z9RkQ727Ty9UzsjGBT5-a_IFVFbnL2GWwXqm102yJFIyCmc0mNLTkDq_lxvBhVE-wIOZpjah9l3LQfkhvg

Jenkins

Jenkins - 149.165.156.79

jenkins_restricted/jenkins_restricted

Created 2 new slaves:

  1. The first slave uses a GitHub webhook to build the Docker image for the branch on which the change was pushed and upload it to Docker Hub.
  2. After this job finishes, a second slave triggers the Kubernetes master to deploy the new Docker image.
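The second slave's deployment step amounts to re-applying the configuration so the cluster rolls out the freshly pushed image (a sketch; the deployment name is illustrative):

```shell
# Re-apply the service's configuration files described above
kubectl apply -f microservice_deployment.yml -f service_discovery.yml
# Wait for the rollout to complete (deployment name is a placeholder)
kubectl rollout status deployment/red-deployment
```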

Steps to test Fault Tolerance

  1. Log on to our Kubernetes dashboard (details provided under the Kubernetes section).

  2. From the menu on the left side, select Pods. Here you can see all the pods that Kubernetes has deployed.

  3. Click on any pod of your choice, then click Delete in the top right corner of the screen.

  4. You will see a new container being created automatically, and a replacement pod will soon be running on it.
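The same check can be run from the command line on the master node (a sketch; pod names differ on every run):

```shell
# List the running pods and pick one to kill
kubectl get pods
# Delete it; the Deployment's replica count forces a replacement
kubectl delete pod <pod-name>
# Watch the replacement pod come up
kubectl get pods --watch
```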

Accessing front end and microservices

  • Rhino (front-end)

http://js-156-182.jetstream-cloud.org:30006

  • Red (Profile Management)

http://149.165.156.182:30007

  • Blue (Search Analytics)

http://149.165.156.182:30008/test

  • Purple (API Gateway)

http://149.165.156.182:30009/test
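The endpoints above can be smoke-tested from the command line (the URLs are the ones listed above; responses depend on the instances being up):

```shell
curl http://js-156-182.jetstream-cloud.org:30006   # Rhino front-end
curl http://149.165.156.182:30007                  # Red (profile management)
curl http://149.165.156.182:30008/test             # Blue (search analytics)
curl http://149.165.156.182:30009/test             # Purple (API gateway)
```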

Development branches are as follows

Rhino front-end branch

Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-client-rhino

Jenkins dev job: http://149.165.156.79:8080/job/rhino/

Test URL: http://js-168-167.jetstream-cloud.org:4200

Purple Gateway microservice branch

Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-microservice-purple

Jenkins job: http://149.165.156.79:8080/job/purple/

Test URL: http://149.165.169.31:3000/test

Red Profile management microservice branch

Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-microservice-red

Jenkins job: http://149.165.156.79:8080/job/red/

Test URL: http://149.165.169.159:8080/

Blue Search Analytics microservice branch

Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-microservice-blue

Jenkins job: http://149.165.156.79:8080/job/blue/

Test URL: http://149.165.168.77:7000/test

Docker and Kubernetes related jobs

Jenkins docker image builder job: http://149.165.156.79:8080/job/docker_image_builder/

Jenkins Kubernetes deployer job: http://149.165.156.79:8080/job/kubernetes_deployer/

Load testing on microservices using JMeter

  • Created test plans for 3 and 5 instances of the service.
  • Measured throughput for 3000, 5000, 7000, 10000, and 15000 requests with a 100-second ramp-up time.

Graph Results

5 instances

Graphs of results for 3000, 5000, 7000, 10000, and 15000 requests.

Summary Report

5 instances

Summary report screenshots for 3000, 5000, 7000, 10000, and 15000 requests.

Summary Results for 5 instances:

Requests   Error Rate (%)   Throughput (per second)
3000       2.75             87.3
5000       7.36             143.6
7000       22.96            201.0
10000      26.14            281.4
15000      34.24            444.8
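As a sanity check, the measured error rates translate into absolute failure counts as follows (a small script over the values from the table above):

```python
# Measured error rates per request count, from the 5-instance summary above.
results = {3000: 2.75, 5000: 7.36, 7000: 22.96, 10000: 26.14, 15000: 34.24}

for requests, error_rate in results.items():
    # Approximate number of failed requests at this load level.
    failed = round(requests * error_rate / 100)
    print(f"{requests} requests -> ~{failed} failed ({error_rate}% error rate)")
```

The failure count grows faster than the request count, which matches the throughput flattening visible in the graphs.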

Known Issues

  • Only a single instance of the database is available at all times

  • UI form validation does not yet catch all invalid inputs

Challenges Faced

  • Moving from H2 to PostgreSQL took quite some time. We spent considerable effort getting PostgreSQL running on the Jetstream instance and changing the database connector in each of the microservices.

  • There were issues while configuring the Kubernetes cluster, as the required network configuration was difficult to get right.

  • Kube Monkey did not work on our instances, as it did not pick up the config.toml from the /var/etc/ directory.

  • We had issues while testing our portal with JMeter, where higher request counts led to memory problems. We had to reconfigure the PostgreSQL server to accommodate this higher volume of incoming requests.

Future Scope

  • Install Chaos Monkey or Kube Monkey.
  • Fix form validations.
  • Maintain replicas of the database instance.

Contacts
