K8s and CI/CD Lab
[TOC]

01. Installation
01.1.A Setup K8S Cluster (< V1.24)
Preparation
Lab servers
OS image: CentOS 8 stream
K8S Master: 192.168.22.91
K8S Node-1: 192.168.22.92
K8S Node-2: 192.168.22.93
Rancher: 192.168.22.84 admin/novirus
GitLab: 192.168.22.85 root/P@ssw0rd
Harbor: 192.168.22.86 admin/novirus
(Jenkins on Kubernetes: 192.168.22.61:32001)
Jenkins on CentOS8: 192.168.22.87:8080
Grafana: 192.168.22.61:32000
Initial environment setup for master and node servers
Environment_ini.sh
1 | |
Set proxy for Lab environment
Proxy_set.sh
1 | |
Install Docker for master and node servers
Docker_install.sh
1 | |
Install Kubernetes
Installing kubeadm, kubelet and kubectl on Master and node servers
K8s_install.sh
1 | |
Init Kubernetes with network and configure kubectl as a regular user on Master server
Network_kubectl_user.sh
1 | |
Install network plugin and weave tool on Master server
Network_plugin.sh
1 | |
Post-Installation
Join node server to master
Node_join.sh
1 | |
Install helm3 on K8SMaster
Helm3_install.sh
1 | |
More about Helm
1 | |
Common Troubleshooting Tips
1 | |
01.1.B Setup K8S Cluster (>= V1.24)
Since Kubernetes 1.24, dockershim has been removed, so a CRI-compliant runtime such as containerd is required.
Refer to Install latest K8S version
Refer to containerd introduction

Preparation
Lab servers
OS image: CentOS 8 stream
K8S Master: 192.168.22.91
K8S Node-1: 192.168.22.92
K8S Node-2: 192.168.22.93
Rancher: 192.168.22.84 admin/novirus
GitLab: 192.168.22.85 root/P@ssw0rd
Harbor: 192.168.22.86 admin/novirus
(Jenkins on Kubernetes: 192.168.22.61:32001)
Jenkins on CentOS8: 192.168.22.87:8080
Grafana: 192.168.22.61:32000
Initial environment setup for master and node servers
Environment_ini.sh
1 | |
Set proxy for Lab environment
Proxy_set.sh
1 | |
Install Docker for master and node servers
Docker_install.sh
1 | |
Install Kubernetes
Installing kubeadm, kubelet and kubectl on Master and node servers
K8s_install.sh
1 | |
Init Kubernetes with network and configure kubectl as a regular user on Master server
Network_kubectl_user.sh
1 | |
Install network plugin and weave tool on Master server
Network_plugin.sh
Note: without a network plugin installed, the master node and kubelet will not reach Ready status
1 | |


Post-Installation
Join node server to master
Node_join.sh
1 | |
Install helm3 on K8SMaster
Helm3_install.sh
1 | |
More about Helm
1 | |
Common Troubleshooting Tips
1 | |
Check Cgroup drivers for kubelet, containerd
1 | |
cgroup setting highlights
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cgroup-drivers
cgroupfs driver
The cgroupfs driver is the default cgroup driver in the kubelet. When the cgroupfs driver is used, the kubelet and the container runtime directly interface with the cgroup filesystem to configure cgroups.

The cgroupfs driver is not recommended when systemd is the init system, because systemd expects a single cgroup manager on the system. Additionally, if you use cgroup v2, use the systemd cgroup driver instead of cgroupfs.

systemd cgroup driver
When systemd is chosen as the init system for a Linux distribution, the init process generates and consumes a root control group (cgroup) and acts as a cgroup manager. systemd has a tight integration with cgroups and allocates a cgroup per systemd unit. As a result, if you use systemd as the init system together with the cgroupfs driver, the system gets two different cgroup managers.

Two cgroup managers result in two views of the available and in-use resources in the system. In some cases, nodes that are configured to use cgroupfs for the kubelet and container runtime, but use systemd for the rest of the processes, become unstable under resource pressure. The way to mitigate this instability is to use systemd as the cgroup driver for both the kubelet and the container runtime when systemd is the selected init system.

To set systemd as the cgroup driver, set the cgroupDriver option of the KubeletConfiguration to systemd. For example:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
...
cgroupDriver: systemd
```

If you configure systemd as the cgroup driver for the kubelet, you must also configure systemd as the cgroup driver for the container runtime. Refer to the documentation for your container runtime for instructions. For example, to use the systemd cgroup driver for containerd with runc, set in /etc/containerd/config.toml:

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```

The systemd cgroup driver is recommended if you use cgroup v2.

Note: if you installed containerd from a package (for example, RPM or .deb), you may find that the CRI integration plugin is disabled by default. You need CRI support enabled to use containerd with Kubernetes. Make sure that cri is not included in the disabled_plugins list within /etc/containerd/config.toml; if you made changes to that file, also restart containerd.

If you experience container crash loops after the initial cluster installation or after installing a CNI, the containerd configuration provided with the package might contain incompatible configuration parameters. Consider resetting the containerd configuration with `containerd config default > /etc/containerd/config.toml` as specified in getting-started.md and then set the configuration parameters specified above accordingly.
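A quick way to inspect these settings on a node is sketched below; the config paths are the common kubeadm/containerd defaults and may differ on your system:

```shell
# Sketch: detect the init system and cgroup version (common default paths assumed)
ps -p 1 -o comm=                      # prints "systemd" when systemd is PID 1
stat -fc %T /sys/fs/cgroup            # "cgroup2fs" indicates cgroup v2
# Show the configured drivers, if these files exist on this node
grep -i cgroupDriver /var/lib/kubelet/config.yaml 2>/dev/null || true
grep -n SystemdCgroup /etc/containerd/config.toml 2>/dev/null || true
```

If the two greps disagree (kubelet on systemd, containerd on cgroupfs), pods typically start but the node can become unstable under resource pressure, as described above.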
Accessing Cluster via kubeconfig
https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
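For illustration, a minimal kubeconfig ties a cluster, a user, and a context together. All names below are hypothetical placeholders, not taken from this lab:

```yaml
# Hypothetical minimal kubeconfig (~/.kube/config); names are placeholders
apiVersion: v1
kind: Config
clusters:
- name: lab-cluster
  cluster:
    server: https://192.168.22.91:6443
    certificate-authority-data: <base64 CA certificate>
contexts:
- name: admin@lab-cluster
  context:
    cluster: lab-cluster
    user: lab-admin
current-context: admin@lab-cluster
users:
- name: lab-admin
  user:
    client-certificate-data: <base64 client certificate>
    client-key-data: <base64 client key>
```

Switch between clusters with `kubectl config use-context <name>`, and point kubectl at an alternative file via the KUBECONFIG environment variable or the --kubeconfig flag.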
01.2. Setup Rancher
Prerequisites
Refer to the Rancher support matrix
| Rancher Server | OS | Docker | K8S |
|---|---|---|---|
| v2.4.9 | CentOS8.2 | 20.10.18 | v1.18.20 |
Install rancher on docker
1 | |
Import existing K8S cluster on rancher console



Run commands on K8S master
1 | |
01.3. Setup Gitlab
Prerequisites
https://packages.gitlab.com/gitlab/gitlab-ce/packages/el/8/gitlab-ce-14.2.3-ce.0.el8.x86_64.rpm
Install gitlab
1 | |
Output:
1 | |
1 | |
Useful commands
1 | |
01.4. Setup Jenkins
Option-1: Set up a Jenkins server on Kubernetes
How To Setup Jenkins On Kubernetes Cluster – Beginners Guide
Access Jenkins:
http://192.168.22.61:32001/login?from=%2F
Get the initial password of Jenkins
1 | |
Errors and resolution
1 | |
Resolution is to restart Jenkins: http://localhost:8080/restart
Option-2: Set up a Jenkins server on CentOS8
Reference:
https://sysadminxpert.com/install-openjdk-11-on-centos-7/
https://www.jianshu.com/p/85ab4db26857
Install OpenJDK 11

```shell
yum -y install java-11-openjdk java-11-openjdk-devel
echo "export JAVA_HOME=$(dirname $(dirname $(readlink $(readlink $(which javac)))))" >> ~/.bash_profile
source ~/.bash_profile
cat ~/.bash_profile
java -version
echo $JAVA_HOME
```

Set or configure the default Java version if another/older version is already installed

```shell
alternatives --config java
alternatives --config javac
# Set the JAVA_HOME environment variable
vim ~/.bash_profile
export JAVA_HOME="/usr/lib/jvm/java-11-openjdk-11.0.7.10-4.el7_8.x86_64"
source ~/.bash_profile
echo $JAVA_HOME
```

Test the Java installation

```shell
cat > helloworld.java <<EOF
public class helloworld {
    public static void main(String[] args) {
        System.out.println("Hello Java World!");
    }
}
EOF
java helloworld.java
```

Add the Jenkins repository and import its GPG key

```shell
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
```

Update packages

```shell
yum update
```

Install Jenkins

```shell
yum install jenkins -y
```

Configure and start the Jenkins service

```shell
systemctl daemon-reload
systemctl start jenkins
```

Get the initial admin password

```shell
cat /var/lib/jenkins/secrets/initialAdminPassword
```
01.5. Setup Harbor
Prerequisites
Install docker
Install docker compose
1 | |
Download harbor offline package:
https://github.com/goharbor/harbor/releases/tag/v2.6.0
Install harbor
1 | |
Modify harbor.yml to disable HTTPS and set the IP address as the hostname
1 | |
Install harbor
1 | |
Optional - HTTPS access enablement
1 | |
Access the harbor console

01.6. Setup InfluxDB on Kubernetes
Step by step
Reference articles:
https://www.cnblogs.com/zhangsi-lzq/p/14457707.html
https://opensource.com/article/19/2/deploy-influxdb-grafana-kubernetes
Create namespace: influxdb
```shell
kubectl create namespace influxdb
```
Create a secret using the kubectl create secret command and some basic credentials
```shell
kubectl create secret generic influxdb-creds -n influxdb \
  --from-literal=INFLUXDB_DATABASE=twittergraph \
  --from-literal=INFLUXDB_USERNAME=root \
  --from-literal=INFLUXDB_PASSWORD=root \
  --from-literal=INFLUXDB_HOST=influxdb
kubectl get secret influxdb-creds -n influxdb
kubectl describe secret influxdb-creds -n influxdb
```
Create and configure persistent storage for InfluxDB
Set up the NFS server: 192.168.22.60

```shell
# Install the NFS service on the nfs server
[root@nfs ~]# yum install nfs-utils -y
# Prepare a shared directory
[root@nfs ~]# mkdir /root/data/influxdbpv -pv
# Expose the shared directory read-write to all hosts in the 192.168.22.0/24 subnet
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# more /etc/exports
/root/data/influxdbpv 192.168.22.0/24(rw,no_root_squash)
# Start the NFS service
[root@nfs ~]# systemctl restart nfs-server
# Install nfs-utils on every node as well; no need to start the service there
[~]# yum install nfs-utils -y
```
Create the PV and PVC
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: influxdbpv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /root/data/influxdbpv
    server: 192.168.22.60
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb
  namespace: influxdb
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
```
create configmap from default conf file
```shell
docker pull influxdb:1.6.4
# --rm removes the container after the command runs
docker run --rm influxdb:1.6.4 influxd config > influxdb.conf
kubectl create configmap influxdb-config --from-file influxdb.conf -n influxdb
kubectl get cm influxdb-config -n influxdb
```
Create the deployment YAML

```shell
kubectl get deploy influxdb -n influxdb -o yaml > influxdb_deploy.yaml
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  labels:
    app: influxdb
  name: influxdb
  namespace: influxdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: influxdb-creds
        image: docker.io/influxdb:1.6.4
        imagePullPolicy: IfNotPresent
        name: influxdb
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/influxdb
          name: var-lib-influxdb
        - mountPath: /etc/influxdb
          name: influxdb-config
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: var-lib-influxdb
        persistentVolumeClaim:
          claimName: influxdb
      - configMap:
          defaultMode: 420
          name: influxdb-config
        name: influxdb-config
```
Expose the service for external access
```yaml
apiVersion: v1
kind: Service
metadata:
  name: influxdb-svc
  namespace: influxdb
spec:
  type: NodePort
  ports:
  - port: 8086
    targetPort: 8086
    nodePort: 32002
    name: influxdb
  selector:
    app: influxdb
```
Check status
```shell
[root@k8smaster influxdb]# kubectl get pods,deployment,rs,svc,ep -n influxdb -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP          NODE       NOMINATED NODE   READINESS GATES
pod/influxdb-5c576d94b4-r8ln8   1/1     Running   0          3h26m   10.42.0.5   k8s-n301   <none>           <none>

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                     SELECTOR
deployment.apps/influxdb   1/1     1            1           5h55m   influxdb     docker.io/influxdb:1.6.4   app=influxdb

NAME                                  DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                     SELECTOR
replicaset.apps/influxdb-5c576d94b4   1         1         1       4h2m    influxdb     docker.io/influxdb:1.6.4   app=influxdb,pod-template-hash=5c576d94b4
replicaset.apps/influxdb-6564f56995   0         0         0       4h40m   influxdb     docker.io/influxdb:1.6.4   app=influxdb,pod-template-hash=6564f56995
replicaset.apps/influxdb-88c898c6f    0         0         0       4h30m   influxdb     docker.io/influxdb:1.6.4   app=influxdb,pod-template-hash=88c898c6f
replicaset.apps/influxdb-c4df979df    0         0         0       4h37m   influxdb     docker.io/influxdb:1.6.4   app=influxdb,pod-template-hash=c4df979df

NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE     SELECTOR
service/influxdb-svc   NodePort   10.106.242.32   <none>        8086:32002/TCP   3h32m   app=influxdb

NAME                     ENDPOINTS        AGE
endpoints/influxdb-svc   10.42.0.5:8086   3h32m
```
Enable authentication for InfluxDB
Create db user
```shell
kubectl -n influxdb exec influxdb-dp-6c8756cfcd-pc8gf -it -- /bin/bash
bash-5.0# influx
Connected to http://localhost:8086 version 1.6.4
InfluxDB shell version: 1.6.4
> CREATE USER root WITH PASSWORD '123456' WITH ALL PRIVILEGES
> show users
user admin
---- -----
root true
> exit
```
Modify the configmap to enable authentication
```shell
kubectl edit cm influxdb-config -n influxdb
#....skip....
[http]
  enabled = true
  bind-address = ":8086"
  auth-enabled = true   # change from false to true
#....skip....
```
Restart the pod to reload the configmap
```shell
kubectl -n influxdb delete pod influxdb-dp-6c8756cfcd-pc8gf
kubectl -n influxdb get pod
NAME                           READY   STATUS    RESTARTS   AGE
influxdb-dp-6c8756cfcd-fhvws   1/1     Running   0          30s
```
Test authentication
```shell
kubectl -n influxdb exec influxdb-dp-6c8756cfcd-fhvws -it -- /bin/bash
bash-5.0# influx
Connected to http://localhost:8086 version 1.6.4
InfluxDB shell version: 1.6.4
> show database;
ERR: unable to parse authentication credentials
Warning: It is possible this error is due to not setting a database.
Please set a database with the command "use <database>".
> auth
username: root
password:
> show databases
name: databases
name
----
_internal
> exit
```
Useful commands
1 | |
02. CICD configuration
02.1. Configure Gitlab
Add a new group: MyApp
Add a user: felix
Assign the user to the group
Add a project: test
Add ssh key for gitlab connection
At rancher server, create the ssh key
```shell
[root@CentOS004 ~]# ssh-keygen -t rsa -b 2048 -C "felix@demo.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:QFuC63n5VJBGTlKbG7oCzjXfnoR+azYAZKi490C918Q felix@demo.com
The key's randomart image is:
+---[RSA 2048]----+
|      . .+o=.    |
|     . +. B+o    |
|..    + .oo=.    |
|o .  + oEo.      |
| o..o+.+S.       |
|.ooo+o=+o        |
| .oo.o++o        |
|   .o o*.        |
|    .++o         |
+----[SHA256]-----+
[root@CentOS004 ~]# cd ~/.ssh/
[root@CentOS004 .ssh]# ll
total 8
-rw-------. 1 root root 1823 Sep 29 09:41 id_rsa
-rw-r--r--. 1 root root  396 Sep 29 09:41 id_rsa.pub
[root@CentOS004 .ssh]# cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCnB34PpjflNQhmHq3ZxJqRb6veoMRJe49ZQWNEqWFAnndDYlHPUw/q+6z14Ef2YWMLZqvNzLRPCYALTmsJNe05Ln/q/iQ2AM0wDrAUz8FegTPwTrTn9w7L+JezRDQh0Q46tPP8snJdUE+NjEqFNwUT+0ug42+LG9BaLho9HViPBeAQDI2UKNb33lg/y09JxsHqXqqviM+Bjo3G79DsP4p+przNGztReln8+fu6rax3czGfnIs6FvIH6GN8pRny3pVfM89sPxtyF3JqkhJGVgrbwN4v8zRUUI/bXJgKYBWpROt3K2GXf9aY2YSS1xsvvs1rl00xdmYL44YnrAMuv8Xd felix@demo.com
```
Copy the id_rsa.pub text into the public SSH key field

Test the connection and git clone from the rancher server
```shell
[root@CentOS004 .ssh]# ssh -T -p 22 git@192.168.22.85
The authenticity of host '192.168.22.85 (192.168.22.85)' can't be established.
ECDSA key fingerprint is SHA256:/QoEvOYhu/IIbXkiLmBAeTtIyEceztIMRGr8//HfcH0.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.22.85' (ECDSA) to the list of known hosts.
Welcome to GitLab, @root!
[root@CentOS004 ~]# git clone http://192.168.22.85/myapp/test.git
Cloning into 'test'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (3/3), done.
[root@CentOS004 ~]# cd test
[root@CentOS004 test]# ll
total 4
-rw-r--r--. 1 root root 8 Sep 29 09:56 README.md
```
02.2. Configure Harbor
- Create project
- project name: demo
- Access level: public
- Create user
- Name: felix
- Assign user into the project
- Role: project admin
02.3. Configure Jenkins
Create account
- Name: felix
Create jenkins credential with kind: SSH Username with private key
ID: demo
Username: git
Private Key: <pasted from rancher server: ~/.ssh/id_rsa>
Create a new Jenkins task: myapp, as a freestyle project
configure source code management with git
Repository URL: ssh://git@192.168.22.85/myapp/test.git
If there’s error “Jenkins Host key verification failed“ , refer to this solution.
```shell
sudo su -s /bin/bash jenkins
git ls-remote -h -- ssh://git@192.168.22.85/myapp/test.git HEAD
cat ~/.ssh/known_hosts
```
Branches to build: */main
Install the CloudBees Docker Build and Publish plugin
Add the jenkins user to the docker group, to ensure access to the Docker unix socket
```shell
usermod -a -G docker jenkins
systemctl restart jenkins
newgrp docker
```
Add Docker authentication; ensure Jenkins can access Harbor over HTTP
```shell
vi /etc/docker/daemon.json
{
  "insecure-registries": ["http://192.168.22.86"]
}
systemctl daemon-reload && systemctl restart docker
```
On the Jenkins console, add a new global credential (kind: username with password)
- username: felix
- password:
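A syntax error in /etc/docker/daemon.json stops the Docker daemon from starting, so validating the JSON before the restart can save a broken node. A sketch, assuming python3 is available; it writes the snippet to a temp file purely for illustration:

```shell
# Sketch: validate a daemon.json snippet before restarting Docker
# (python3 availability is an assumption; temp file path is illustrative)
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
{
  "insecure-registries": ["http://192.168.22.86"]
}
EOF
python3 -m json.tool "$tmpfile" > /dev/null && echo "daemon.json OK"
rm -f "$tmpfile"
```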
Update existing Jenkins task: myapp, add “docker build and publish”
Repository Name: <harbor repository: “demo/test”>
Docker Host URL: unix:///var/run/docker.sock
Docker registry URL: http://192.168.22.86
Registry credentials: felix/****
03. CICD Testing
CI Testing with Jenkins, Gitlab, Harbor
Push the Dockerfile and source code (start.sh) to GitLab

```shell
git clone http://192.168.22.85/myapp/test.git
cd test/
chmod +x start.sh
[root@rancher test]# more Dockerfile
FROM centos
USER root
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ADD ./ /root/
CMD ["/root/start.sh"]
[root@rancher test]# more start.sh
#!/bin/bash
while true
do
    processes=$(cat /proc/stat | awk '/processes/{print $2}')
    curl -i -XPOST 'http://192.168.22.61:32002/write?db=test' --data-binary "performance,type=processes value=$processes"
    sleep 60
done
[root@rancher test]# git config --global user.name "felix"
[root@rancher test]# git config --global user.email "felix@demo.com"
[root@rancher test]# git add .
[root@rancher test]# git commit -m "add docker file"
[root@rancher test]# git push origin main
```
For what a Dockerfile is, refer to here:
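The metric collected by start.sh comes from the `processes` line of /proc/stat (the cumulative number of forks since boot). A standalone sketch of the awk extraction and the InfluxDB line-protocol payload it produces; the sample /proc/stat snippet and its values are made up:

```shell
# Sketch: reproduce the extraction from start.sh on a sample /proc/stat snippet
stat_sample='cpu  1000 0 2000 30000
processes 123456
procs_running 2'
processes=$(printf '%s\n' "$stat_sample" | awk '/^processes/{print $2}')
# The same line-protocol payload start.sh POSTs to /write?db=test
payload="performance,type=processes value=$processes"
echo "$payload"    # performance,type=processes value=123456
```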


Go to Jenkins > myapp, build now
```shell
Started by user felix
Running as SYSTEM
Building in workspace /var/lib/jenkins/workspace/myapp
The recommended git tool is: NONE
using credential a9ec308d-cc1a-4d6e-bfd5-003325dd5944
> git rev-parse --resolve-git-dir /var/lib/jenkins/workspace/myapp/.git # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url ssh://git@192.168.22.85/myapp/test.git # timeout=10
Fetching upstream changes from ssh://git@192.168.22.85/myapp/test.git
> git --version # timeout=10
> git --version # 'git version 2.31.1'
using GIT_SSH to set credentials cicdtest
Verifying host key using known hosts file
> git fetch --tags --force --progress -- ssh://git@192.168.22.85/myapp/test.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse refs/remotes/origin/main^{commit} # timeout=10
Checking out Revision 6119d6d79088ba5cfa8fb6eba368f51dcf5e12f1 (refs/remotes/origin/main)
> git config core.sparsecheckout # timeout=10
> git checkout -f 6119d6d79088ba5cfa8fb6eba368f51dcf5e12f1 # timeout=10
Commit message: "add start.sh"
> git rev-list --no-walk e73b7cbe817d832caef117e9adcd400906517626 # timeout=10
[myapp] $ docker build -t 192.168.22.86/demo/test --pull=true /var/lib/jenkins/workspace/myapp
WARNING: Support for the legacy ~/.dockercfg configuration file and file-format is deprecated and will be removed in an upcoming release
Sending build context to Docker daemon 69.63kB
Step 1/5 : From centos
latest: Pulling from library/centos
Digest: sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177
Status: Image is up to date for centos:latest
---> 5d0da3dc9764
Step 2/5 : USER root
---> Using cache
---> 566a6bb50798
Step 3/5 : ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
---> Using cache
---> 976874dd9ea6
Step 4/5 : ADD ./ /root/
---> 89f3bef76911
Step 5/5 : CMD ["bash /root/start.sh"]
---> Running in 27a4eec3ac71
Removing intermediate container 27a4eec3ac71
---> d3821c155ee9
Successfully built d3821c155ee9
Successfully tagged 192.168.22.86/demo/test:latest
[myapp] $ docker inspect d3821c155ee9
WARNING: Support for the legacy ~/.dockercfg configuration file and file-format is deprecated and will be removed in an upcoming release
[myapp] $ docker push 192.168.22.86/demo/test
WARNING: Support for the legacy ~/.dockercfg configuration file and file-format is deprecated and will be removed in an upcoming release
Using default tag: latest
The push refers to repository [192.168.22.86/demo/test]
064ab7092af0: Preparing
74ddd0ec08fa: Preparing
74ddd0ec08fa: Layer already exists
064ab7092af0: Pushed
latest: digest: sha256:486091b0f6dab11302a808a99988b26ed52ed093beaa830c9a104c988b808805 size: 738
Finished: SUCCESS
```
Check the Harbor repository and confirm the image is created
CD Testing with Rancher
Add the insecure-registry setting below on all nodes of the K8s cluster, so that each node can access the insecure Harbor server
```shell
cat > /etc/docker/daemon.json <<EOF
{
  "insecure-registries": ["http://192.168.22.86"]
}
EOF
systemctl daemon-reload && systemctl restart docker
```
Create a deployment to deploy the workload via Rancher
```shell
[root@cicdlab001 ~]# kubectl get pods -n cicdtest -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP          NODE   GATES
test-58b7787d9d-9r2kn   1/1     Running   0          44s   10.44.0.1   cicd
test-58b7787d9d-sb6x5   1/1     Running   0          44s   10.36.0.2   cicd
```
Verify InfluxDB: confirm the latest data has been written to the database
```shell
[root@rancher test]# curl -G 'http://192.168.22.61:32002/query?pretty=true' --data-urlencode "db=test" --data-urlencode "q=select * from performance order by time desc"
{
    "results": [
        {
            "statement_id": 0,
            "series": [
                {
                    "name": "performance",
                    "columns": [
                        "time",
                        "type",
                        "value"
                    ],
                    "values": [
                        ["2022-10-01T16:37:15.037646251Z", "processes", 1530626],
                        ["2022-10-01T16:37:15.000946467Z", "processes", 1529608],
                        ["2022-10-01T16:36:15.012783232Z", "processes", 1530322],
                        ["2022-10-01T16:36:14.979504738Z", "processes", 1529306],
                        ["2022-10-01T16:35:14.980991216Z", "processes", 1530004],
                        ["2022-10-01T16:35:14.944116291Z", "processes", 1528989],
                        ["2022-09-29T09:59:36.859605683Z", "processes", 7109601],
                        ["2022-09-29T09:58:36.723087558Z", "processes", 7109596]
                    ]
                }
            ]
        }
    ]
}
```
Configure Grafana to display the data
Configure grafana data source with influxdb


Create dashboard
