Milestone 3

Tasks accomplished in this milestone

  • Containerized each service using Docker
  • Set up Kubernetes master-slave integration
  • CI/CD using Jenkins to build, test, and deploy containers on the Kubernetes master
  • Performed load testing on each microservice using JMeter
  • Revamped the look and feel of the website

Improvements from Project 2

  • We have used Kubernetes and Docker to provide fault tolerance, high availability, and advanced scaling for the individual services, and hence for the entire project.
  • A single PostgreSQL instance has been deployed to store user profile information and search details (for analytics), replacing the 2 separate H2 database instances previously used by the individual microservices (red and blue).
  • The purple microservice has improved functionality and can now retrieve detailed information about a provider’s practice(s) and location coordinates.
  • We have also integrated maps for practice locations in the front end.
  • The project no longer needs to be installed manually on a single server and can be accessed from anywhere.

Docker

The 3 microservices (profile management, search analytics, and the API gateway) along with the front end have been dockerized.
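For reference, a minimal sketch of what one service’s Dockerfile might look like (the base image, artifact name, and port are assumptions for illustration; each service differs):

```dockerfile
# Sketch of a service Dockerfile (base image and jar name assumed)
FROM openjdk:8-jre-alpine

# Copy the built service artifact into the image
COPY target/service.jar /app/service.jar

# Port the service listens on (e.g. 8080 for the red microservice)
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "/app/service.jar"]
```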

DockerHub

Docker Hub credentials: sanjeevni/sanjeevni@1234

Whenever a change is pushed to a development branch, Jenkins builds the Docker image for the corresponding microservice and uploads it to Docker Hub.
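In effect, the build step of the job boils down to something like the following (the image name sanjeevni/red is an assumption for illustration):

```sh
# Build and push the image for the branch that changed (image name assumed)
docker build -t sanjeevni/red:latest .
docker login -u sanjeevni -p <password>
docker push sanjeevni/red:latest
```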

Kubernetes

The Kubernetes cluster has 1 master and 3 worker nodes. After a Docker image is built, the master node follows the configuration in the microservice_deployment.yml file (one per microservice and the front end) to deploy 5 replicas of each, and uses the service_discovery.yml file to expose each service’s endpoint on the 3 worker nodes. The cluster pulls the Docker images from Docker Hub.
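A rough sketch of what the two files look like for one service (names, labels, and ports are illustrative; 30007 matches the red service’s NodePort listed below):

```yaml
# microservice_deployment.yml (sketch): 5 replicas of one microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red
spec:
  replicas: 5
  selector:
    matchLabels:
      app: red
  template:
    metadata:
      labels:
        app: red
    spec:
      containers:
        - name: red
          image: sanjeevni/red:latest   # pulled from Docker Hub
          ports:
            - containerPort: 8080
---
# service_discovery.yml (sketch): expose the pods on a fixed NodePort
apiVersion: v1
kind: Service
metadata:
  name: red
spec:
  type: NodePort
  selector:
    app: red
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30007
```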

To see the details of the Kubernetes configuration, follow the URL below:

Kubernetes Dashboard

To log in, select the token option and paste the following:

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRhc2hib2FyZC10b2tlbi1rNWs3ciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkYXNoYm9hcmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMDAzMmQwNy00ZjkwLTExZTktOTQwZC1mYTE2M2UzZDgyYjciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkYXNoYm9hcmQifQ.jxfW9Ck6HxJhxteDCAgr8f36oC110P6uPIlZ4H1pMOS_B-uznC2tW20psRngMUZfHBmO2307KSmKJVnOvPiaMZB7KYlKqVHqvle-grXq-kCqpiJzGKrMENJa02HDhk4eBmzPPfirCXWgLTC-uVP0axPPRv7K0djxfyCHxOfVE5mf3nUoDJcObsy2hQxjpsvfEeLjkCJmLU4B7Doprcv4gz76eUyChztCuKOAsohe-KTdoj5NCqc_7n-QdqQ26OkeJr5_z9RkQ727Ty9UzsjGBT5-a_IFVFbnL2GWwXqm102yJFIyCmc0mNLTkDq_lxvBhVE-wIOZpjah9l3LQfkhvg
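If the token above has expired, a fresh one can be pulled from the dashboard service account’s secret (the secret name will vary per cluster):

```sh
# List secrets in the default namespace, then print the dashboard token
kubectl -n default get secrets
kubectl -n default describe secret <dashboard-token-secret>
```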

Jenkins

Jenkins: 149.165.156.79

Credentials: jenkins_restricted/jenkins_restricted

Created 2 new slaves:

  1. The first slave uses a GitHub webhook to build the Docker image corresponding to the branch on which the change was pushed, and uploads it to Docker Hub.
  2. After this job finishes, the second slave triggers the Kubernetes master to deploy the new Docker image (sketched below).
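The deploy step amounts to applying the manifests on the master; since the images are tagged :latest, recreating the pods forces a fresh pull. This is a sketch of the approach, not the exact job script:

```sh
# Apply (or re-apply) the deployment and service manifests
kubectl apply -f microservice_deployment.yml
kubectl apply -f service_discovery.yml

# Recreate the pods so they pull the freshly pushed :latest image
kubectl delete pods -l app=red
```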

Steps to test Fault Tolerance

  1. Log in to our Kubernetes dashboard (details provided under the Kubernetes section).

  2. From the menu on the left, select Pods. Here you can see all the pods that Kubernetes has deployed.

  3. Click on any pod of your choice, then click Delete in the top right corner of the screen.

  4. Kubernetes will automatically create a replacement, and a new pod will soon be running (a command-line equivalent is sketched below).
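The same experiment can be run from the command line, assuming kubectl access to the cluster:

```sh
kubectl get pods                # list the running pods
kubectl delete pod <pod-name>   # kill one of them
kubectl get pods -w             # watch a replacement pod come up
```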

Accessing front end and microservices

  • Rhino (front-end)

http://js-156-182.jetstream-cloud.org:30006

  • Red (Profile Management)

http://149.165.156.182:30007

  • Blue (Search Analytics)

http://149.165.156.182:30008/test

  • Purple (API Gateway)

http://149.165.156.182:30009/test
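A quick smoke test of the three microservice endpoints from any machine:

```sh
curl http://149.165.156.182:30007
curl http://149.165.156.182:30008/test
curl http://149.165.156.182:30009/test
```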

Development Branches are as follows

Rhino front-end branch

Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-client-rhino

Jenkins dev job: http://149.165.156.79:8080/job/rhino/

Test URL: http://js-168-167.jetstream-cloud.org:4200

Purple Gateway microservice branch

Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-microservice-purple

Jenkins job: http://149.165.156.79:8080/job/purple/

Test URL: http://149.165.169.31:3000/test

Red Profile management microservice branch

Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-microservice-red

Jenkins job: http://149.165.156.79:8080/job/red/

Test URL: http://149.165.169.159:8080/

Blue Search Analytics microservice branch

Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-microservice-blue

Jenkins job: http://149.165.156.79:8080/job/blue/

Test URL: http://149.165.168.77:7000/test

Docker and Kubernetes related jobs

Jenkins docker image builder job: http://149.165.156.79:8080/job/docker_image_builder/

Jenkins Kubernetes deployer job: http://149.165.156.79:8080/job/kubernetes_deployer/

Known issues

  • The database has only 1 instance.
  • UI form validation does not work perfectly.

Challenges faced

Moving from H2 to Postgres took some time. We spent a lot of effort getting Postgres up on the Jetstream instance and changing the connector code for each of the microservices.
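Assuming the services use Spring Boot (the property names below are illustrative), the connector change per microservice is roughly:

```properties
# Switch the datasource from embedded H2 to the shared Postgres instance
# (host, database, and credentials are placeholders)
spring.datasource.url=jdbc:postgresql://<jetstream-host>:5432/sanjeevni
spring.datasource.username=<db-user>
spring.datasource.password=<db-password>
spring.datasource.driver-class-name=org.postgresql.Driver
```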

We ran into issues while configuring the Kubernetes cluster, as the network configuration was difficult to get right.

Kube Monkey did not work on our instances, as it did not seem to pick up config.toml from the /var/etc/ directory.
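For reference, the config.toml that Kube Monkey expects looks roughly like this (values are illustrative):

```toml
[kubemonkey]
dry_run = false            # set true to log kills without performing them
run_hour = 8               # hour at which the kill schedule is generated
start_hour = 10            # earliest hour pods may be killed
end_hour = 16              # latest hour pods may be killed
time_zone = "America/New_York"
```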

We had issues while testing our portal with JMeter, where a higher number of requests would lead to memory issues. We had to reconfigure the Postgres server to accommodate the higher volume of incoming requests.
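The exact values we ended up with were not recorded, but the reconfiguration was along these lines in postgresql.conf:

```conf
# Raise the connection ceiling and working memory for JMeter load
# (values illustrative)
max_connections = 300
shared_buffers = 512MB
```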

Future Scope

  • Install Chaos Monkey or Kube Monkey.
  • Fix form validations.
  • Maintain replicas of the database instance.
