Milestone 3
- Containerization of each service using Docker
- Setup of Kubernetes master-slave integration
- CI/CD using Jenkins to build, test, and deploy containers on the Kubernetes master
- Load-balancing tests on each microservice using JMeter
- Modernized look and feel of the website
- We have utilized Kubernetes and Docker to provide fault tolerance, high availability, and scaling for the individual services, and hence for the entire project.
- A single PostgreSQL instance now stores user profile information and search details (for analytics), replacing the 2 separate H2 databases previously embedded in the individual microservices (red and blue).
- The purple microservice has improved functionality and can now retrieve detailed information about a provider's practice(s) and their location coordinates.
- We have also integrated maps for practice locations in the front-end.
- The project no longer needs to be installed manually on a single server and can be accessed ubiquitously.
The 3 microservices (profile management, search analytics, and API gateway) and the front-end have all been dockerized.
Docker Hub credentials:
sanjeevni/sanjeevni@1234
Whenever a change is pushed to a development branch, Jenkins builds the Docker image for the corresponding microservice and uploads it to Docker Hub.
The Kubernetes master has 3 worker nodes. After a Docker image is built, the master node applies the microservice_deployment.yml file (one per microservice and for the front end) to deploy 5 replicas of each, and uses the service_discovery.yml file to expose the service endpoints on the 3 worker nodes. The Kubernetes cluster pulls the Docker images from Docker Hub.
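For illustration, a minimal sketch of what one service's microservice_deployment.yml and service_discovery.yml might contain is shown below; the image name, labels, and ports are assumptions for the purple service, not the project's actual manifests.

```yaml
# Hypothetical sketch of a microservice_deployment.yml; the image name,
# labels, and ports are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: purple-deployment
spec:
  replicas: 5                     # 5 replicas of the microservice
  selector:
    matchLabels:
      app: purple
  template:
    metadata:
      labels:
        app: purple
    spec:
      containers:
        - name: purple
          image: sanjeevni/purple:latest   # pulled from Docker Hub
          ports:
            - containerPort: 3000
---
# Hypothetical sketch of a service_discovery.yml exposing the service
# endpoint on the worker nodes through a NodePort.
apiVersion: v1
kind: Service
metadata:
  name: purple-service
spec:
  type: NodePort
  selector:
    app: purple
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30009             # externally visible port on the workers
```

If a pod from such a Deployment is deleted, Kubernetes recreates it to restore the replica count; that is the behavior exercised in the fault-tolerance steps later on this page.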
To see the details of the Kubernetes configuration, follow the URL below:
To log in, select the token option and paste the following:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRhc2hib2FyZC10b2tlbi1rNWs3ciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkYXNoYm9hcmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMDAzMmQwNy00ZjkwLTExZTktOTQwZC1mYTE2M2UzZDgyYjciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkYXNoYm9hcmQifQ.jxfW9Ck6HxJhxteDCAgr8f36oC110P6uPIlZ4H1pMOS_B-uznC2tW20psRngMUZfHBmO2307KSmKJVnOvPiaMZB7KYlKqVHqvle-grXq-kCqpiJzGKrMENJa02HDhk4eBmzPPfirCXWgLTC-uVP0axPPRv7K0djxfyCHxOfVE5mf3nUoDJcObsy2hQxjpsvfEeLjkCJmLU4B7Doprcv4gz76eUyChztCuKOAsohe-KTdoj5NCqc_7n-QdqQ26OkeJr5_z9RkQ727Ty9UzsjGBT5-a_IFVFbnL2GWwXqm102yJFIyCmc0mNLTkDq_lxvBhVE-wIOZpjah9l3LQfkhvg
Jenkins - 149.165.156.79
jenkins_restricted/jenkins_restricted
Created 2 new slaves:
- The first slave uses a GitHub webhook to build the Docker image for the branch on which the change was pushed and uploads it to Docker Hub.
- After this job finishes, the second slave triggers the Kubernetes master to deploy the new Docker image.
To test fault tolerance:
- Log on to our Kubernetes dashboard (details provided under the Kubernetes section).
- From the menu on the left side, select Pods. Here you can see all the pods deployed by Kubernetes.
- Click on any pod of your choice, then click Delete in the top right corner of the screen.
- You will see a new container being created automatically, and a new pod will soon be running on it.
- Rhino (front-end)
http://js-156-182.jetstream-cloud.org:30006
- Red (Profile Management)
- Blue (Search Analytics)
http://149.165.156.182:30008/test
- Purple (API Gateway)
http://149.165.156.182:30009/test
Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-client-rhino
Jenkins dev job: http://149.165.156.79:8080/job/rhino/
Test URL: http://js-168-167.jetstream-cloud.org:4200
Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-microservice-purple
Jenkins job: http://149.165.156.79:8080/job/purple/
Test URL: http://149.165.169.31:3000/test
Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-microservice-red
Jenkins job: http://149.165.156.79:8080/job/red/
Test URL: http://149.165.169.159:8080/
Branch URL: https://github.com/airavata-courses/MayDay/tree/develop-microservice-blue
Jenkins job: http://149.165.156.79:8080/job/blue/
Test URL: http://149.165.168.77:7000/test
Jenkins docker image builder job: http://149.165.156.79:8080/job/docker_image_builder/
Jenkins Kubernetes deployer job: http://149.165.156.79:8080/job/kubernetes_deployer/
- Created test plans for 3 and 5 instances of the service.
- Measured throughput for 3000, 5000, 7000, 10000, and 15000 requests with a 100-second ramp-up time.
[JMeter result screenshots for 3 instances: 3000, 5000, 7000, 10000, and 15000 requests]
Results with 3 instances:
Requests | Error Rate (%) | Throughput (req/s)
---|---|---
3000 | 11.46 | 36.0
5000 | 52.92 | 46.1
7000 | 28.85 | 109.9
10000 | 24.89 | 283.3
15000 | 35.34 | 444.7
[JMeter result screenshots for 5 instances: 3000, 5000, 7000, 10000, and 15000 requests]
Results with 5 instances:
Requests | Error Rate (%) | Throughput (req/s)
---|---|---
3000 | 2.75 | 87.3
5000 | 7.36 | 143.6
7000 | 22.96 | 201.0
10000 | 26.14 | 281.4
15000 | 34.24 | 444.8
Improvements which can be made:
- Resolve the 'connection reset' errors that appear in most of our logs and keep throughput low; these stem from database access, since 2 of our services need connections to the shared database.
- Only a single instance of the database is available at all times.
- UI form validation does not work perfectly.

Challenges we faced:
- Moving from H2 to Postgres took quite some time; we spent a lot of effort getting Postgres up on the Jetstream instance and changing the connector for each microservice (a sketch of this change follows the list).
- Configuring the Kubernetes cluster was difficult because of the network configuration it required.
- Kube Monkey didn't seem to work on our instances, as it didn't pick up the config.toml from the /var/etc/ directory.
- While testing our portal with JMeter, higher request counts led to memory issues; we had to reconfigure the PostgreSQL server to accommodate this volume of incoming requests.
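The connector change might look roughly like the following application.yml fragment, assuming the services are Spring Boot applications (an assumption; the host, database name, credentials, and pool size are placeholders) and that each service adds the org.postgresql:postgresql driver dependency:

```yaml
# Hypothetical application.yml fragment, assuming a Spring Boot service;
# host, database name, and credentials are placeholders.
spring:
  datasource:
    # Previously: embedded H2
    #   url: jdbc:h2:mem:profiledb
    #   driver-class-name: org.h2.Driver
    url: jdbc:postgresql://postgres-host:5432/sanjeevni
    driver-class-name: org.postgresql.Driver
    username: db_user
    password: db_password
    hikari:
      maximum-pool-size: 10   # capping the pool can ease 'connection reset'
                              # pressure when many replicas share one database
```

Note that each of the 5 replicas of a service opens its own connection pool, so the single PostgreSQL instance has to be provisioned for roughly replicas × pool-size connections, which is consistent with the reconfiguration we had to do during the JMeter tests.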
Future work:
- Install Chaos Monkey or Kube Monkey.
- Fix form validations.
- Maintain replicas of the database instance.