-- ★★ Docker Kubernetes Commands ★★ --
legend
★ : new big topic
-- : smaller topics
# : command
= : command execution result
★★★Creating, Running, and Sharing a Container Image★★★
--Running simple container
# docker run <image>:<tag>
ex:
# docker run busybox:latest echo "Hello World"
= Docker pulls the busybox image (from the Docker registry or the local machine), creates a container from it, and runs echo "Hello World" inside it
--Creating a Dockerfile for the image
Dockerfile sample:
FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]
Directory tree:
.
├── Dockerfile
└── app.js
--Build container image
# docker build [OPTIONS] PATH
# docker build -t kubia .
-t, --tag list : Name and optionally a tag in the 'name:tag' format
--Check the built images
# docker image ls
# docker image ls | grep <image that has just been built>
= REPOSITORY TAG IMAGE ID CREATED SIZE
kubia latest 01741bed2b6c 2 weeks ago 660MB
luksa/kubia latest bf5bc565dc79 2 weeks ago 660MB
--Remove built image
# docker image rm <image-name>
# docker image rm -f <image-name>
--Running the container image
# docker run --name <container-name> -p <host-port>:<container-port> -d <image-name>
# docker run --name kubia-container -p 8080:8080 -d kubia
-p, --publish list : Publish a container's port(s) to the host
-d, --detach : Run container in background and print container ID
--Check whether the container was created and is running
# docker container ls
# docker ps -a
--Accessing http docker app
# curl localhost:8080
or open your browser and access localhost:8080
--Listing all running containers
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5844d8fa5c50 kubia "node app.js" 18 minutes ago Up 18 minutes 0.0.0.0:8080->8080/tcp kubia-container
--List all running and stopped containers
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
796f0b2a7fca luksa/kubia "node app.js" 2 weeks ago Exited (137) 2 weeks ago ff-kubia-container
ca08efc6669e kubia "node app.js" 2 weeks ago Exited (137) 2 weeks ago kubia-container
--Getting additional information about a container
# docker inspect [OPTIONS] NAME|ID [NAME|ID...]
# docker inspect kubia-container
Docker will print out a long JSON containing low-level information about the container.
--Exploring the inside of running container
# docker exec -it <container-name> bash
# docker exec -it kubia-container bash
-i: makes sure STDIN is kept open
-t: allocates pseudo terminal (TTY)
--Stopping and Removing Container
# docker stop <container-name|id> --> # docker start <container-name|id>
# docker rm <container-name|id>
--Pushing the image to an image registry
--Log in to https://hub.docker.com
# docker login
Username: #famy
Password:
Login succeeded
--Tag image under additional tag
# docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
# docker tag kubia famy/kubia
# docker image ls | head
REPOSITORY TAG IMAGE ID CREATED SIZE
famy/kubia latest bf5bc565dc79 2 hours ago 660MB
kubia latest bf5bc565dc79 2 hours ago 660MB
--Pushing the image to Docker Hub
# docker push famy/kubia
!! make sure the repository prefix of the tag matches your Docker Hub ID
--Running the image on different machine
# docker run -p 8080:8080 -d famy/kubia
★★Kubernetes with Minikube★★
--Installing Minikube
MAC OSX
https://kubernetes.io/docs/tasks/tools/install-minikube/
# brew cask install minikube
--Starting Minikube
# minikube start
# minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.112
--Starting Minikube with NetworkPolicy
minikube start \
--extra-config=kubelet.authentication-token-webhook=true \
--network-plugin=cni \
--enable-default-cni \
--container-runtime=containerd \
--bootstrapper=kubeadm
# deploy the latest metrics-server
$ kubectl apply -f 99.metrics-server/deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
$ vim 99.metrics-server/deploy/1.8+/metrics-server-deployment.yaml
...
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.6
args:
- --kubelet-insecure-tls # add this
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname # add this
- --cert-dir=/tmp
- --secure-port=4443
...
$ kubectl apply -f 99.metrics-server/deploy/1.8+/metrics-server-deployment.yaml
$ kubectl get deployment -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 3d7h
metrics-server 1/1 1 1 17m
$ kubectl top pod --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default depl-nginx1-5fd4fbf9bb-rlpcr 0m 2Mi
default nginx 0m 2Mi
kube-system coredns-5644d7b6d9-p4jlm 5m 7Mi
kube-system coredns-5644d7b6d9-q5zvl 5m 22Mi
kube-system etcd-minikube 32m 45Mi
kube-system kube-addon-manager-minikube 138m 35Mi
kube-system kube-apiserver-minikube 81m 287Mi
kube-system kube-controller-manager-minikube 26m 60Mi
kube-system kube-proxy-mqxwn 2m 22Mi
kube-system kube-scheduler-minikube 2m 26Mi
kube-system metrics-server-5cd4f58f4d-c7qrf 2m 10Mi
kube-system storage-provisioner 1m 14Mi
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
minikube 359m 17% 1286Mi 66%
--Check cluster availability
# kubectl cluster-info
--Deploying Node.js app to Kubernetes
# kubectl run kubia --image=famy/kubia --port=8080 --generator=run/v1
# kubectl get pods
--Accessing your web application pod
To make the pod accessible from outside, you'll expose it through a Service object.
A special service of type LoadBalancer needs to be configured; it not only enables internal access but also provides a public IP through which the pod can be reached.
-Expose the ReplicationController (rc)
# kubectl get rc
NAME DESIRED CURRENT READY AGE
kubia 1 1 1 16h
# kubectl expose rc kubia --type=LoadBalancer --name kubia-http
= service/kubia-http exposed
-Listing services
# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
kubia-http LoadBalancer 10.108.140.71 <pending> 8080:30188/TCP 3m33s
!! Minikube doesn't support LoadBalancer services, so the external IP will never be assigned. The pod can still be reached through the service's node port.
-Run and get the IP and port (Minikube)
# minikube service <service-name>
# minikube service kubia-http
🎉 Opening kubernetes service default/kubia-http in default browser...
-Replication Controller Role
It makes sure there is always exactly one instance of your pod running. If the pod disappears for any reason, the ReplicationController creates a new pod to replace the missing one.
The number of instances can be configured in the ReplicationController.
--Scaling application
# kubectl get rc
# kubectl scale rc <rc-name> --replicas=<number-of-replicas>
# kubectl scale rc kubia --replicas=3
--Minikube dashboard
# kubectl cluster-info
# minikube dashboard
★★★Chap3 PODS: running containers in Kubernetes★★★
--Flat inter-pod network
All pods in a Kubernetes cluster reside in a single flat, shared, network-address space, which means every pod can access every other pod at that pod's IP address.
No NAT (Network Address Translation) gateway exists between them, and pods can communicate with each other like computers on a local area network (LAN).
--Creating pods from YAML or JSON descriptor
--Examining YAML descriptor of a POD
# kubectl get pod <pod-name> -o yaml
# kubectl get pod <pod-name> -o json
--A simple YAML descriptor for a pod
apiVersion: v1               #1
kind: Pod                    #2
metadata:
  name: kubia                #3
spec:
  containers:
  - image: luksa/kubia       #4
    name: kubia              #5
    ports:
    - containerPort: 8080    #6
      protocol: TCP
#1 Descriptor conforms to version v1 of Kubernetes API
#2 You’re describing a pod.
#3 The name of the pod
#4 Container image to create the container from
#5 Name of the container
#6 The port the app is listening on
--Pod detail attribute
# kubectl explain pods
# kubectl explain pod.spec
# kubectl explain pod.metadata
--Using kubectl create to create the pod
# kubectl create -f <yaml-file>
= "kubectl create -f" command is used to create any resource from YAML or JSON
--Viewing application logs
# kubectl logs <pod-name>
# kubectl logs <pod-name> -c <container-name>
-Container logs are automatically rotated daily and every time the log file reaches 10MB in size.
-The kubectl logs command only shows the log entries from the last rotation.
--Forwarding a local network port to a port in the pod
-When you want to talk to a specific pod without going through a service, Kubernetes allows you to configure port forwarding to the pod
# kubectl port-forward <pod-name> <local-port>:<pod-port>
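ex (the pod name comes from the manual pod created earlier; local port 8888 is an arbitrary, illustrative choice):
# kubectl port-forward kubia-manual 8888:8080
Then, from a different terminal:
# curl localhost:8888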
★ORGANIZING PODS WITH LABELS★
With a microservices architecture, the number of pods can easily exceed 20 or more. Those components may be replicated and multiplied according to their release (stable, beta, canary, etc.) and will also run concurrently. This leads to hundreds of pods in the system. We need a way of organizing pods into smaller groups based on arbitrary criteria, so developers and administrators can easily see which pod is which.
Organizing pods and all other Kubernetes objects is done through LABELs.
A label is an arbitrary key-value pair attached to a resource, and utilized when selecting resources using 'label selector'.
-app, specifies which app, component, or microservice the pod belongs to.
-rel, shows whether the application running in the pod is a stable, beta, or canary release.
※Canary release = when you deploy a new version of an application next to the stable version, but only a fraction of users can access the new version (to see how it behaves before rolling it out to all users).
--Specify label to a pod
apiVersion: v1
kind: Pod
metadata:
  name: kubia-manual-v2
  labels:
    creation_method: manual  #1
    env: prod                #1
spec:
  containers:
  - image: famy/kubia
    name: kubia
    ports:
    - containerPort: 8080
      protocol: TCP
#1 two labels attached to the pod
filename = kubia-manual-v2.yaml
--Create pod with label
# kubectl create -f <pod-yaml>
# kubectl create -f kubia-manual-v2.yaml
= pod "kubia-manual-v2" created
# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-2q9hb 1/1 Running 3 17d run=kubia
kubia-manual-v2 1/1 Running 0 10m creation_method=manual,env=prod
--Add a label to an existing pod
# kubectl label pod <pod-name> <labelname>=<label-value>
# kubectl label pod kubia-manual creation_method=manual
--Modify/overwrite labels
# kubectl label pod <pod-name> <existing-label-name>=<new-value> --overwrite
# kubectl label pod kubia-manual creation_method=auto --overwrite
--List pod with labels
# kubectl get pod -L creation_method,app
NAME READY STATUS RESTARTS AGE CREATION_METHOD APP
kubia-2q9hb 1/1 Running 3 17d
kubia-manual 1/1 Running 0 84m test
kubia-manual-v2 1/1 Running 0 111m manual
kubia-r7t7b 1/1 Running 1 16h
--List subset with 'label selector'
A label selector can select pods by:
-a label with a certain key only
-a label with a specific key and value
# kubectl get pod -l creation_method --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-manual-v2 1/1 Running 0 118m creation_method=manual,env=prod
# kubectl get pod -l creation_method=manual
NAME READY STATUS RESTARTS AGE
kubia-manual-v2 1/1 Running 0 116m
# kubectl get pod -l '!creation_method' --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-2q9hb 1/1 Running 3 17d run=kubia
kubia-manual 1/1 Running 0 91m app=test,rel=test
kubia-r7t7b 1/1 Running 1 16h run=kubia
# kubectl get pod -l app,rel --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-manual 1/1 Running 0 91m app=test,rel=test
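A label selector can also match sets of values or negate a match (illustrative examples, reusing the labels shown above):
# kubectl get pod -l 'env in (prod,devel)'   --> pods whose env label is prod or devel
# kubectl get pod -l 'env notin (prod)'      --> pods whose env label is not prod (including pods without an env label)
# kubectl get pod -l app=test,rel=test       --> pods carrying both labels (the comma acts as AND)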
--Using labels for categorizing worker nodes
# kubectl get nodes --show-labels
# kubectl label node <node-name> <label-name>=<label-value>
# kubectl get nodes -l <label-name>=<label-value>
--Scheduling pods to specific nodes
apiVersion: v1
kind: Pod
metadata:
  name: kubia-gpu
spec:
  nodeSelector:   # 1
    gpu: "true"   # 1
  containers:
  - image: luksa/kubia
    name: kubia
# 1 = nodeSelector tells Kubernetes to deploy this pod only to nodes labeled with gpu=true
! Don't forget to label the node first, as mentioned above.
# kubectl label node <node-name> gpu=true
--Adding and modifying annotations
# kubectl annotate pod <pod-name> <annotation>=<value>
# kubectl annotate pod kubia-manual mycompany.com/someannotation="foo-bar"
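-To check an annotation afterwards, inspect the pod's metadata (illustrative commands):
# kubectl describe pod kubia-manual | grep Annotations
# kubectl get pod kubia-manual -o jsonpath='{.metadata.annotations}'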
★USING NAMESPACES TO GROUP RESOURCES★
# kubectl get namespaces
NAME STATUS AGE
default Active 18d
kube-node-lease Active 18d
kube-public Active 18d
kube-system Active 18d
kubernetes-dashboard Active 18d
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5644d7b6d9-lxf7p 1/1 Running 3 18d
coredns-5644d7b6d9-rggml 1/1 Running 3 18d
etcd-minikube 1/1 Running 3 18d
--Creating namespaces
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
# kubectl create -f custom-namespace.yaml
-OR-
# kubectl create namespace custom-namespace
--Creating pods to specific namespaces
# kubectl create -f pods-yaml.yaml -n custom-namespace
!! When listing, describing, modifying, or deleting objects in other namespaces, you need to pass the --namespace (or -n) flag to kubectl. If you don't, kubectl performs the action in the default namespace configured in the current kubectl context.
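!! To avoid passing -n on every command, you can also switch the namespace of the current kubectl context (a convenience sketch, not required for the exercises above):
# kubectl config set-context $(kubectl config current-context) --namespace=custom-namespace
# kubectl config view --minify | grep namespace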
★STOPPING AND REMOVING PODS★
--Delete pod by name
# kubectl delete pod <pod-name>
--Delete pod by label selectors
# kubectl delete pod -l <label-name>=<value>
# kubectl delete pod -l creation_method=manual
--Delete pod and namespace
# kubectl delete namespace <namespace-name>
# kubectl delete ns custom-namespace
--Delete all pods in a namespace
# kubectl delete pod --all
--Delete almost all resources in a namespace
# kubectl delete all --all
★★★Chap4 REPLICATION AND OTHER CONTROLLERS: deploying managed pods★★★
--Keeping pods healthy
Kubernetes will restart a pod's container automatically when a bug in the application causes it to crash. This automatically gives the application the ability to heal itself.
--Liveness probes
a. HTTP GET probe
An HTTP GET request is performed on the container's IP address, on a port and path specified in advance. If the HTTP response code is 2xx or 3xx, the probe is considered successful.
b. TCP Socket probe
Opens a TCP connection to the specified port of the container. If the connection is established successfully, the probe succeeds.
c. Exec probe
Executes an arbitrary command inside the container and checks the command's exit code. If the exit code is 0, the probe succeeds.
!!For pods running in production, you should always define a liveness probe.
!!Keep probes light: liveness probes should not use too many computational resources and should not take too long to complete.
Example yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
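To try the probe out, create the pod and watch Kubernetes restart its container once the probe starts failing (the file name kubia-liveness-probe.yaml is an assumption; the RESTARTS column increases on every restart):
# kubectl create -f kubia-liveness-probe.yaml
# kubectl get pod kubia-liveness
# kubectl describe pod kubia-liveness   --> shows the probe configuration and the restart events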
--Obtaining logs from the crashed pods
# kubectl logs <pod-name> --previous
--Configuring additional properties of the liveness probe
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 15 #1
#1 : Kubernetes will wait 15 seconds before executing the first probe
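Other probe-tuning fields can be set next to initialDelaySeconds (a sketch; the values are illustrative, not from the original notes):
    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 15  # wait 15 seconds before the first probe
      periodSeconds: 10        # probe every 10 seconds
      timeoutSeconds: 1        # each probe must get a response within 1 second
      failureThreshold: 3      # restart the container after 3 consecutive failures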
★REPLICATION CONTROLLER★
-A ReplicationController is a Kubernetes resource that ensures its pods keep running. If a pod disappears for any reason, the ReplicationController notices the missing pod and creates a replacement pod.
-Three essential parts of ReplicationController:
a. label selector, which determines what pods are in the RC scope
b. replica count, which specifies the desired number of pod should be running
c. pod template, which is used when creating new pod replicas
-Advantages using Replication Controller:
a. It makes sure a pod is always running by starting a new pod when an existing one goes down
b. When a cluster node fails, it creates replacement replicas for all the pods that were running on the failed node
c. It enables easy horizontal scaling of pods
--Creating a ReplicationController
apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        ports:
        - containerPort: 8080
--Moving pods in and out of the scope of a ReplicationController
A ReplicationController(RC) manages pods that match its label selector. By changing a pod's labels, it can be removed from or added to the scope of RC.
# kubectl get rc
NAME DESIRED CURRENT READY AGE
kubia 3 3 3 12d
# kubectl describe rc kubia | egrep "Selector|Labels"
Selector: app=kubia
Labels: app=kubia
Labels: app=kubia
# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-2qcwd 1/1 Running 1 12d app=kubia
kubia-hs4ml 1/1 Running 1 12d app=kubia
kubia-wzwxs 1/1 Running 1 12d app=kubia
kubia-manual 1/1 Running 2 14d app=test,rel=test
kubia-manual-v2 1/1 Running 2 15d creation_method=manual,env=prod
# kubectl label pod kubia-wzwxs app=foo --overwrite
pod/kubia-wzwxs labeled
# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-2qcwd 1/1 Running 1 12d app=kubia
kubia-hs4ml 1/1 Running 1 12d app=kubia
kubia-6qdc4 1/1 Running 0 20s app=kubia # newly created pod (controlled by RC)
kubia-manual 1/1 Running 2 14d app=test,rel=test
kubia-manual-v2 1/1 Running 2 15d creation_method=manual,env=prod
kubia-wzwxs 1/1 Running 1 12d app=foo # label-changed pod
--Delete a ReplicationController
When you delete an RC with kubectl delete, the corresponding pods are also deleted.
# kubectl delete rc <RC-name>
To delete only the RC and keep its pods running, use:
# kubectl delete rc <RC-name> --cascade=false
★REPLICASET★
A ReplicaSet (RS) is the new generation of the ReplicationController. It works just like an RC but has more expressive pod selectors: an RS can match pods based on a label key alone or on specific key-value pairs (see the matchExpressions sketch at the end of this section).
RS sample yaml (kubia-replicaset.yaml)
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
# kubectl create -f kubia-replicaset.yaml
# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-rs-8lvxg 1/1 Running 0 4m51s app=kubia
kubia-rs-jzbqk 1/1 Running 0 4m51s app=kubia
kubia-rs-tkvqr 1/1 Running 0 4m51s app=kubia
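The more expressive selector mentioned above can be written with matchExpressions instead of matchLabels (a sketch; valid operators are In, NotIn, Exists, and DoesNotExist):
  selector:
    matchExpressions:
    - key: app          # the label key to match
      operator: In      # the label value must be one of the listed values
      values:
      - kubia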
★DAEMONSET -Running exactly one pod on each node-★
A DaemonSet creates exactly one pod on each node.
When a new node is added to the cluster, the DaemonSet immediately deploys a new pod instance to it. Likewise, when one of its pods is deleted from a node, the DaemonSet makes sure a replacement is created, so each node keeps exactly one pod.
--Using a DaemonSet to run pods only on certain nodes
A DaemonSet deploys pods on all nodes in the cluster, unless you specify that the pods should only run on a subset of the nodes. This is done by specifying the nodeSelector property in the pod template.
!!A DaemonSet deploys pods even to nodes that have been made unschedulable (cordoned, preventing pods from being scheduled to them), because the unschedulable attribute is only honored by the Scheduler, and pods managed by a DaemonSet bypass the Scheduler completely.
DaemonSet (ssd-monitor-daemonset.yaml)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: main
        image: luksa/ssd-monitor
# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ssd-monitor 0 0 0 0 0 disk=ssd 11s
!!The desired number of pods is 0. Check whether any node carries a label matching the nodeSelector "disk=ssd".
# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
minikube Ready master 32d v1.16.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=minikube,kubernetes.io/os=linux,node-role.kubernetes.io/master=
!!"disk=ssd" is not available at the ready node. Need to add manually the label to the node.
# kubectl label node minikube disk=ssd
node/minikube labeled
# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
minikube Ready master 32d v1.16.0 disk=ssd,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=minikube,kubernetes.io/os=linux,node-role.kubernetes.io/master=
# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ssd-monitor 1 1 1 1 1 disk=ssd 3m43s
# kubectl get pods -l app=ssd-monitor -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ssd-monitor-d8hsj 1/1 Running 0 2m32s 172.17.0.15 minikube <none> <none>
★JOB, resource that perform a single completable task★
Job resource allows you to run a pod whose container isn't restarted when the process running inside finishes successfully. Once it does, the pod is considered COMPLETE.
--Defining Job resource (batch-job.yaml)
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: luksa/batch-job
# kubectl create -f batch-job.yaml
# kubectl get pods
NAME READY STATUS RESTARTS AGE
batch-job-zbqd6 1/1 Running 0 13s
# kubectl logs batch-job-zbqd6
Mon Nov 4 05:50:39 UTC 2019 Batch job starting
!! The Dockerfile for the image used by the above Job resource is as follows:
FROM busybox
ENTRYPOINT echo "$(date) Batch job starting"; sleep 120; echo "$(date) Finished succesfully"
After 120 seconds, the pod will finish its job and exit successfully:
# kubectl get pods
NAME READY STATUS RESTARTS AGE
batch-job-zbqd6 0/1 Completed 0 3m23s
# kubectl logs batch-job-zbqd6
Mon Nov 4 05:50:39 UTC 2019 Batch job starting
Mon Nov 4 05:52:39 UTC 2019 Finished succesfully
--Running multiple pod instance in a Job
If you need a Job to run its pod more than once, set the "completions" attribute in the Job yaml definition.
apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-batch-job
spec:
  completions: 5
  template: ...
--Running job pods in parallel
...
spec:
  completions: 5
  parallelism: 2
...
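A complete manifest combining both attributes might look like this (a sketch, reusing the luksa/batch-job image from above):
apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-batch-job
spec:
  completions: 5     # the pod must run to completion five times
  parallelism: 2     # at most two pods may run at the same time
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: luksa/batch-job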
--Scaling a Job
# kubectl scale job <job-name> --replicas=3
--Limiting the time allowed for a Job to complete
You can limit the time a Job's pod is allowed to run by setting the spec.activeDeadlineSeconds field in the manifest. You can also configure how many times a Job is retried before it is marked as failed by setting the spec.backoffLimit field; the default is 6.
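Both fields go under the Job spec (a sketch; the values are illustrative):
spec:
  activeDeadlineSeconds: 300   # mark the Job as failed if it runs longer than 5 minutes
  backoffLimit: 6              # retry a failing pod at most 6 times (the default)
  template: ...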
★CronJob★
CronJob allows you to schedule jobs to run periodically or once in the future.
A cron job in Kubernetes is configured by creating a CronJob resource.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: batch-job-every-five-minutes
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: periodic-batch-job
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: luksa/batch-job
--Check the executed jobs
# kubectl get jobs
NAME COMPLETIONS DURATION AGE
batch-job-every-five-minutes-1572854460 1/1 2m4s 3m12s
batch-job-every-five-minutes-1572854580 0/1 72s 72s
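!! If it matters how late a job may start after its scheduled time, you can set spec.startingDeadlineSeconds on the CronJob (a sketch; 15 seconds is illustrative):
spec:
  schedule: "*/5 * * * *"
  startingDeadlineSeconds: 15  # the job must start at most 15 seconds after the scheduled time, otherwise it is marked as failed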
★★★Chap5 SERVICES: enabling clients to discover and talk to pods★★★
In the case of microservices, pods will respond to HTTP requests either coming from other pods inside the cluster or from clients outside the cluster.
In the non-Kubernetes world, a sysadmin configures each client app by specifying the exact IP or hostname of the server providing the service. Doing the same in Kubernetes won't work, because:
1. Pods are ephemeral
Pods may come and go at any time, because they are removed, scaled down, or a cluster node fails.
2. Kubernetes assigns an IP to a pod after the pod has been scheduled to a node and before it's started.
3. Horizontal scaling means multiple pods may provide the same service.
Clients shouldn't care how many pods are backing the service or what their IPs are. Instead, all those pods should be accessible through a single IP address.
To solve all the above problems, Kubernetes provides the Service resource type.
★INTRODUCING SERVICES★
-A Kubernetes Service is a resource you create to make a single, constant point of entry to a group of pods providing the same service. Each service has an IP address and port that never change while the service exists.
-Label selectors determine which pods belong to the Service.
Service sample yaml (kubia-svc.yaml)
apiVersion: v1
kind: Service
metadata:
  name: kubia-svc
spec:
  ports:
  - port: 80           #1
    targetPort: 8080   #2
  selector:
    app: kubia         #3
#1 the port the service will be available on
#2 the container port the service will forward to
#3 All pods with the "app=kubia" label will be part of this service
# kubectl create -f kubia-svc.yaml
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubia-svc ClusterIP 10.97.237.121 <none> 80/TCP 4m26s
-From the above list, IP address 10.97.237.121 was assigned to the kubia-svc service. This is the cluster IP, which is only accessible from inside the cluster. The primary purpose of a service is to expose a group of pods to other pods in the cluster, but of course the service can be exposed outside the cluster too.
-We can send requests to the above service by several ways:
1. Create a pod that will send a request to the service's cluster IP and log the response.
2. ssh into one of the Kubernetes nodes and use the curl command.
3. Execute the curl command inside one of the existing pods (through kubectl exec).
# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-2qcwd 1/1 Running 1 12d app=kubia
kubia-rs-8lvxg 1/1 Running 0 7h22m app=kubia
kubia-wzwxs 1/1 Running 1 12d app=foo
# kubectl exec -it kubia-wzwxs bash
root@kubia-wzwxs:/# curl http://10.97.237.121:80
You've hit kubia-2qcwd
root@kubia-wzwxs:/# curl http://10.97.237.121
You've hit kubia-rs-8lvxg
root@kubia-wzwxs:/# exit
# kubectl exec kubia-wzwxs -- curl -s http://10.97.237.121:80
You've hit kubia-rs-8lvxg
!! The double dash (--) in the command signals the end of command options for kubectl. Everything after the double dash is the command that should be executed inside the pod.
★CONNECTING TO SERVICES LIVING OUTSIDE THE CLUSTER★
★EXPOSING SERVICES TO EXTERNAL CLUSTER★
★EXPOSING SERVICES EXTERNALLY THROUGH AN INGRESS RESOURCE★
★SIGNALING WHEN A POD IS READY TO ACCEPT CONNECTIONS★
★USING A HEADLESS SERVICE FOR DISCOVERING INDIVIDUAL PODS★
QUESTIONS:
-How to expose service to external IP by LoadBalancer
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.1/html/logging/configuring-your-cluster-logging-deployment#efk-logging-elasticsearch-limits_efk-logging-elasticsearch
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/logging