Building testing lab with Terraform and Ansible

[TOC]

Quick Start

Provisioning Linux VMs in batch with Terraform

Step 1: Create the Linux VM template

Step 2: Use Terraform to provision the VMs in batch; a sample main.tf file is shown below:

[root@centos-0 ansible]# cat ../terraform/sre-vsphere-lab/main.tf 
# Configure the VMware vSphere Provider
provider "vsphere" {
  user           = "administrator@vsphere.local"
  password       = "xxxx"
  vsphere_server = "xx.xx.xx.xx"

  # if you have a self-signed cert
  allow_unverified_ssl = true
}

# Deploy 3 linux VMs
module "centos-server-linuxvm" {
  source = "Terraform-VMWare-Modules/vm/vsphere"
  #source = ".terraform/modules/centos-server-linuxvm"
  version      = "3.5.0"
  vmtemp       = "centos7"
  instances    = 3
  vmname       = "CentOS"
  vmnameformat = "%03d" # Three digits with leading zeros; VM names become CentOS001, CentOS002, CentOS003
  vmrp         = "USSRE/Resources/felix"
  network = {
    "VM Network" = ["10.52.184.221", "10.52.184.222", "10.52.184.223"] # For DHCP, use an empty list ["", ""]; CIDR notation is also supported
  }
  vmgateway        = "xx.xx.xx.xx"
  dc               = "USRELAB"
  datastore        = "data"
  ipv4submask      = ["16", "16", "16"]
  network_type     = ["vmxnet3", "vmxnet3", "vmxnet3"]
  dns_server_list  = ["10.52.191.10", "10.52.191.10", "10.52.191.10"]
  is_windows_image = false
}

output "vmnames" {
  value = module.centos-server-linuxvm.VM
}

output "vmnameswip" {
  value = module.centos-server-linuxvm.ip
}
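The vmnameformat value works like a printf-style suffix: each instance index is rendered with %03d and appended to vmname, so the three instances above come out as CentOS001 through CentOS003. A small Python sketch of that convention (make_vm_names is a hypothetical helper for illustration, not part of the Terraform module):

```python
# Sketch of how the module's vmname + vmnameformat settings combine.
# make_vm_names is a hypothetical helper, not part of the module's API.
def make_vm_names(vmname, vmnameformat, instances):
    # Indexing starts at 1, matching names like CentOS001 seen in the lab.
    return [vmname + (vmnameformat % i) for i in range(1, instances + 1)]

print(make_vm_names("CentOS", "%03d", 3))  # ['CentOS001', 'CentOS002', 'CentOS003']
```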

Step 3: Run Terraform as below:

terraform init
terraform plan
terraform apply

Use Ansible to manage the Linux VM and set up the k8s cluster

Topology:

  • 1 CentOS 7 VM for the Kubernetes Master (e.g., 192.168.0.10)
  • 2 CentOS 7 VMs for the Kubernetes Nodes (e.g., 192.168.0.11, 192.168.0.12)

Step 1: Prepare the Control Node:

  • Set up a separate CentOS 7 VM as your Ansible control node.

  • Install Ansible on the control node using the following command:

    sudo yum install ansible
  • Create an ansible.cfg file in the working folder (the same folder as the playbook YAML files) and set host_key_checking=false:

    [root@centos-0 ansible]# cat ansible.cfg 
    [defaults]
    host_key_checking=false

    This disables SSH host key checking, so the playbook run is not interrupted by prompts like the following:

    # GATHERING FACTS ***************************************************************
    # The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
    # RSA key fingerprint is xx:yy:zz:....
    # Are you sure you want to continue connecting (yes/no)?

Step 2: Set up Inventory and Hosts File:

  • Create an inventory file on the control node, e.g., inventory.ini with the following content:

    [masters]
    192.168.0.10

    [workers]
    192.168.0.11
    192.168.0.12

    [all:vars]
    ansible_user=root
    ansible_ssh_pass=your_ssh_password
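    Since the inventory is plain INI, it can be sanity-checked with a few lines of Python before running Ansible. This is purely illustrative, not part of the workflow; allow_no_value=True is needed because the bare host lines carry no = sign:

    ```python
    import configparser

    # The inventory.ini content shown above, inlined for the example.
    INVENTORY = """\
    [masters]
    192.168.0.10

    [workers]
    192.168.0.11
    192.168.0.12

    [all:vars]
    ansible_user=root
    ansible_ssh_pass=your_ssh_password
    """

    # Bare host lines have no '=', so allow_no_value=True is required.
    parser = configparser.ConfigParser(allow_no_value=True)
    parser.read_string(INVENTORY)

    masters = list(parser["masters"])
    workers = list(parser["workers"])
    print(masters, workers)  # ['192.168.0.10'] ['192.168.0.11', '192.168.0.12']
    ```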
  • Test the SSH connection to the Linux VMs:

    [root@centos-0 ansible]# ansible -i inventory.ini all -m ping
    10.52.184.222 | SUCCESS => {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python"
        },
        "changed": false,
        "ping": "pong"
    }
    10.52.184.223 | SUCCESS => {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python"
        },
        "changed": false,
        "ping": "pong"
    }
    10.52.184.221 | SUCCESS => {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python"
        },
        "changed": false,
        "ping": "pong"
    }

Step 3: Create Ansible Playbook:

  • Create a playbook file, e.g., kubernetes-setup.yml with the following content:

    ---
    - hosts: all
      become: true
      vars:
        k8s_version: "1.28.2"
        kubelet_version: "1.28.2"
        container_runtime: "containerd"
        master_ip: "10.52.184.221"
        pod_subnet: "10.244.0.0/16"
        service_subnet: "your_service_subnet"

      tasks:
        - name: Configure Kubernetes Yum repository
          copy:
            dest: "/etc/yum.repos.d/kubernetes.repo"
            content: |
              [kubernetes]
              name=Kubernetes
              baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
              enabled=1
              gpgcheck=0
              repo_gpgcheck=0
              gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
          notify: Reload Repository

        - name: Set SELinux to permissive mode
          selinux:
            state: permissive
            policy: targeted

        - name: Disable firewalld
          systemd:
            name: firewalld
            state: stopped
            enabled: no

        - name: Allow ports 6443 and 10250 in firewall using iptables
          iptables:
            chain: INPUT
            protocol: tcp
            destination_port: "{{ item }}"
            jump: ACCEPT
          with_items:
            - 6443
            - 10250

        - name: Disable swap
          command: swapoff -a
          ignore_errors: yes

        - name: Comment out swap entry in /etc/fstab
          lineinfile:
            path: /etc/fstab
            regexp: '^/dev/mapper/.*swap.*$'
            line: '# {{ item }}'
          with_items:
            - "UUID=xxxxxxxxxx none swap sw 0 0"

        - name: Enable bridge and IP forwarding
          command: "{{ item }}"
          with_items:
            - "modprobe br_netfilter"
            - "sysctl net.bridge.bridge-nf-call-iptables=1"
            - "sysctl net.ipv4.ip_forward=1"

        - name: Update system packages
          yum:
            name: "*"
            state: latest
            update_cache: yes

        - name: Add Containerd repository
          yum_repository:
            name: containerd
            description: Containerd Repository
            baseurl: https://download.docker.com/linux/centos/7/$basearch/stable
            gpgkey: https://download.docker.com/linux/centos/gpg
            enabled: yes

        - name: Install Containerd
          yum:
            name: containerd.io
            state: present

        - name: Configure Containerd
          copy:
            content: |
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
                runtime_type = "io.containerd.runc.v2"

              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.containerd]
                runtime_type = "io.containerd.runtime.v1.linux"
            dest: /etc/containerd/config.toml
            owner: root
            group: root
            mode: 0644

        - name: Start and enable Containerd service
          service:
            name: containerd
            state: started
            enabled: true

        - name: Install Kubernetes components
          yum:
            name:
              - kubelet-{{ k8s_version }}
              - kubeadm-{{ k8s_version }}
              - kubectl-{{ k8s_version }}
            state: present

        - name: Enable and start kubelet service
          service:
            name: kubelet
            state: started
            enabled: true

      handlers:
        - name: Reload Repository
          yum:
            name: "*"
            state: present
            enablerepo: kubernetes
            disablerepo: "*"
            update_cache: yes

    - hosts: masters
      become: true
      vars:
        k8s_version: "1.28.2"
        kubelet_version: "1.28.2"
        container_runtime: "containerd"
        master_ip: "10.52.184.221"
        pod_subnet: "10.244.0.0/16"
        service_subnet: "your_service_subnet"

      tasks:
        - name: Set bridge-nf-call-iptables to 1
          shell: sysctl net.bridge.bridge-nf-call-iptables=1

        - name: Initialize Kubernetes master
          shell: kubeadm init --kubernetes-version={{ k8s_version }} --pod-network-cidr={{ pod_subnet }} --apiserver-advertise-address={{ master_ip }}
          register: kubeadm_output

        - name: Set kubeadm output on master node
          set_fact:
            kubeadm_output: "{{ kubeadm_output.stdout_lines[-2:] | join(' ') | regex_replace('\t', '') | regex_replace('\\\\', '') }}"
          when: kubeadm_output.stdout_lines is defined

        - name: Remove existing kubeconfig file
          shell: rm -f $HOME/.kube/config
          ignore_errors: yes

        - name: Create kubeconfig directory
          file:
            path: $HOME/.kube
            state: directory
            owner: "{{ ansible_user }}"
            group: "{{ ansible_user }}"
            mode: '0755'

        - name: Copy kubeconfig to user's home directory
          copy:
            src: /etc/kubernetes/admin.conf
            dest: $HOME/.kube/config
            remote_src: true
            owner: "{{ ansible_user }}"
            group: "{{ ansible_user }}"
            mode: '0644'

        - name: Install Flannel pod network addon
          shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    - hosts: workers
      become: true
      vars:
        master_ip: "10.52.184.221"

      tasks:
        - name: Set net.ipv4.ip_forward to 1
          shell: sysctl net.ipv4.ip_forward=1

        - name: Join worker nodes to the cluster
          shell: "{{ hostvars[groups['masters'][0]].kubeadm_output }}"
          register: join_output

        - name: Debug join output
          debug:
            var: join_output.stdout
          when: join_output.stdout is defined

        - name: Debug join error
          debug:
            var: join_output.stderr
          when: join_output.stderr is defined
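    The only subtle step in this playbook is the "Set kubeadm output" task on the master: it keeps the last two lines of the kubeadm init stdout (the kubeadm join command, which kubeadm prints wrapped across two lines with a trailing backslash), joins them, and strips tabs and backslashes so the workers can execute the result as a single shell command. The same transformation in plain Python, on made-up sample output (the token and hash below are placeholders, not real values):

    ```python
    # Mirror of the playbook's filter chain:
    #   stdout_lines[-2:] | join(' ') | regex_replace('\t', '') | regex_replace('\\\\', '')
    # The sample lines are illustrative, not real kubeadm output.
    stdout_lines = [
        "You can now join any number of worker nodes by running the following on each as root:",
        "kubeadm join 10.52.184.221:6443 --token abcdef.0123456789abcdef \\",
        "\t--discovery-token-ca-cert-hash sha256:0123456789abcdef",
    ]

    # Take the last two lines, join with a space, drop tabs and backslashes.
    join_cmd = " ".join(stdout_lines[-2:]).replace("\t", "").replace("\\", "")
    print(join_cmd)
    ```

    The cleaned-up string is what each worker runs in the "Join worker nodes to the cluster" task via hostvars.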

Step 4: Run the Ansible Playbook:

  • Run the Ansible playbook using the following command:

    ansible-playbook -i inventory.ini kubernetes-setup.yml

    ## run the playbook in debug mode as needed
    # ansible-playbook -i inventory.ini kubernetes-setup.yml -vvv
  • The output is as below:

    [root@centos-0 ansible]# ansible-playbook -i inventory.ini  k8s-cluster.yml

    PLAY [all] *************************************************************

    TASK [Gathering Facts] *************************************************
    ok: [10.52.184.221]
    ok: [10.52.184.222]
    ok: [10.52.184.223]

    TASK [Configure Kubernetes Yum repository] *****************************
    changed: [10.52.184.222]
    changed: [10.52.184.223]
    changed: [10.52.184.221]

    TASK [Set SELinux to permissive mode] **********************************
    changed: [10.52.184.223]
    changed: [10.52.184.222]
    changed: [10.52.184.221]

    TASK [Disable firewalld] ***********************************************
    changed: [10.52.184.222]
    changed: [10.52.184.221]
    changed: [10.52.184.223]

    TASK [Allow ports 6443 and 10250 in firewall using iptables] ***********
    changed: [10.52.184.221] => (item=6443)
    changed: [10.52.184.223] => (item=6443)
    changed: [10.52.184.222] => (item=6443)
    changed: [10.52.184.221] => (item=10250)
    [WARNING]: The value 6443 (type int) in a string field was converted to u'6443' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
    [WARNING]: The value 10250 (type int) in a string field was converted to u'10250' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
    changed: [10.52.184.223] => (item=10250)
    changed: [10.52.184.222] => (item=10250)

    TASK [Disable swap] ****************************************************
    changed: [10.52.184.221]
    changed: [10.52.184.222]
    changed: [10.52.184.223]

    TASK [Comment out swap entry in /etc/fstab] ****************************
    changed: [10.52.184.222] => (item=UUID=xxxxxxxxxx none swap sw 0 0)
    changed: [10.52.184.223] => (item=UUID=xxxxxxxxxx none swap sw 0 0)
    changed: [10.52.184.221] => (item=UUID=xxxxxxxxxx none swap sw 0 0)

    TASK [Enable bridge and IP forwarding] *********************************
    changed: [10.52.184.222] => (item=modprobe br_netfilter)
    changed: [10.52.184.223] => (item=modprobe br_netfilter)
    changed: [10.52.184.221] => (item=modprobe br_netfilter)
    changed: [10.52.184.222] => (item=sysctl net.bridge.bridge-nf-call-iptables=1)
    changed: [10.52.184.223] => (item=sysctl net.bridge.bridge-nf-call-iptables=1)
    changed: [10.52.184.221] => (item=sysctl net.bridge.bridge-nf-call-iptables=1)
    changed: [10.52.184.222] => (item=sysctl net.ipv4.ip_forward=1)
    changed: [10.52.184.223] => (item=sysctl net.ipv4.ip_forward=1)
    changed: [10.52.184.221] => (item=sysctl net.ipv4.ip_forward=1)

    TASK [Update system packages] ******************************************
    ok: [10.52.184.223]
    ok: [10.52.184.222]
    ok: [10.52.184.221]

    TASK [Add Containerd repository] ***************************************
    changed: [10.52.184.221]
    changed: [10.52.184.223]
    changed: [10.52.184.222]

    TASK [Install Containerd] **********************************************
    changed: [10.52.184.222]
    changed: [10.52.184.223]
    changed: [10.52.184.221]

    TASK [Configure Containerd] ********************************************
    changed: [10.52.184.223]
    changed: [10.52.184.222]
    changed: [10.52.184.221]

    TASK [Start and enable Containerd service] *****************************
    changed: [10.52.184.223]
    changed: [10.52.184.222]
    changed: [10.52.184.221]

    TASK [Install Kubernetes components] ***********************************
    changed: [10.52.184.221]
    changed: [10.52.184.223]
    changed: [10.52.184.222]

    TASK [Enable and start kubelet service] ********************************
    changed: [10.52.184.222]
    changed: [10.52.184.221]
    changed: [10.52.184.223]

    RUNNING HANDLER [Reload Repository] ************************************
    ok: [10.52.184.222]
    ok: [10.52.184.221]
    ok: [10.52.184.223]

    PLAY [masters] *********************************************************

    TASK [Gathering Facts] *************************************************
    ok: [10.52.184.221]

    TASK [Set bridge-nf-call-iptables to 1] ********************************
    changed: [10.52.184.221]

    TASK [Initialize Kubernetes master] ************************************
    changed: [10.52.184.221]

    TASK [Set kubeadm output on master node] *******************************
    ok: [10.52.184.221]

    TASK [Print join command on master node] *******************************
    skipping: [10.52.184.221]

    TASK [Remove existing kubeconfig file] *********************************
    [WARNING]: Consider using the file module with state=absent rather than running 'rm'. If you need to use command because file is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
    changed: [10.52.184.221]

    TASK [Create kubeconfig directory] *************************************
    changed: [10.52.184.221]

    TASK [Copy kubeconfig to user's home directory] ************************
    changed: [10.52.184.221]

    TASK [Install Flannel pod network addon] *******************************
    changed: [10.52.184.221]

    PLAY [workers] *********************************************************

    TASK [Gathering Facts] *************************************************
    ok: [10.52.184.222]
    ok: [10.52.184.223]

    TASK [Set net.ipv4.ip_forward to 1] ************************************
    changed: [10.52.184.222]
    changed: [10.52.184.223]

    TASK [Join worker nodes to the cluster] ********************************
    changed: [10.52.184.223]
    changed: [10.52.184.222]

    TASK [Debug join output] ***********************************************
    ok: [10.52.184.222] => {
        "join_output.stdout": "[preflight] Running pre-flight checks\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Starting the kubelet\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster."
    }
    ok: [10.52.184.223] => {
        "join_output.stdout": "[preflight] Running pre-flight checks\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Starting the kubelet\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster."
    }

    TASK [Debug join error] ************************************************
    ok: [10.52.184.222] => {
        "join_output.stderr": ""
    }
    ok: [10.52.184.223] => {
        "join_output.stderr": ""
    }

    PLAY RECAP *************************************************************
    10.52.184.221 : ok=24 changed=19 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
    10.52.184.222 : ok=21 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    10.52.184.223 : ok=21 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

  • Verify the k8s cluster status from the master node:

    [root@CentOS001 ~]# kubectl get nodes
    NAME        STATUS   ROLES           AGE     VERSION
    centos001   Ready    control-plane   4h18m   v1.28.2
    centos002   Ready    <none>          4h17m   v1.28.2
    centos003   Ready    <none>          4h17m   v1.28.2

    [root@CentOS001 ~]# kubectl get pods -A
    NAMESPACE      NAME                                READY   STATUS    RESTARTS   AGE
    kube-flannel   kube-flannel-ds-58ztz               1/1     Running   0          4h18m
    kube-flannel   kube-flannel-ds-d7ktx               1/1     Running   0          4h18m
    kube-flannel   kube-flannel-ds-jtdtf               1/1     Running   0          4h18m
    kube-system    coredns-5dd5756b68-fqspp            1/1     Running   0          4h18m
    kube-system    coredns-5dd5756b68-pvqkw            1/1     Running   0          4h18m
    kube-system    etcd-centos001                      1/1     Running   0          4h18m
    kube-system    kube-apiserver-centos001            1/1     Running   0          4h18m
    kube-system    kube-controller-manager-centos001   1/1     Running   0          4h18m
    kube-system    kube-proxy-btpvq                    1/1     Running   0          4h18m
    kube-system    kube-proxy-j6fj4                    1/1     Running   0          4h18m
    kube-system    kube-proxy-qjkcn                    1/1     Running   0          4h18m
    kube-system    kube-scheduler-centos001            1/1     Running   0          4h18m

Use Ansible to install Helm 3

Step 1: Create the playbook for the Helm 3 installation

---
- hosts: masters
  become: true
  tasks:
    - name: Download Helm installation script
      get_url:
        url: https://get.helm.sh/helm-v3.7.0-linux-amd64.tar.gz
        dest: /tmp/helm.tar.gz

    - name: Extract Helm package
      unarchive:
        src: /tmp/helm.tar.gz
        dest: /tmp
        remote_src: yes

    - name: Move Helm binary to /usr/local/bin
      command: chdir=/tmp/linux-amd64/ mv helm /usr/local/bin/

    - name: Set Helm repository cache
      command: helm repo cache rebuild

    - name: Initialize Helm
      command: helm repo add stable https://charts.helm.sh/stable

    - name: Install Helm completion for Bash
      shell: echo "source <(helm completion bash)" >> ~/.bashrc

Step 2: Run the playbook

ansible-playbook -i inventory.ini helm3.yml

Tips: more about Helm

# add helm repo
[root@CentOS001 ~]# helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories
[root@CentOS001 ~]# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"aliyun" has been added to your repositories
[root@CentOS001 ~]# helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories

# update helm repo
[root@CentOS001 ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "aliyun" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

# list helm repo
[root@CentOS001 ~]# helm repo list
NAME       URL
stable     http://mirror.azure.cn/kubernetes/charts
aliyun     https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
jetstack   https://charts.jetstack.io

# search chart
[root@CentOS001 ~]# helm search repo nginx
NAME                          CHART VERSION   APP VERSION   DESCRIPTION
aliyun/nginx-ingress          0.9.5           0.10.2        An nginx Ingress controller that uses ConfigMap...
aliyun/nginx-lego             0.3.1                         Chart for nginx-ingress-controller and kube-lego
stable/nginx-ingress          1.41.3          v0.34.1       DEPRECATED! An nginx Ingress controller that us...
stable/nginx-ldapauth-proxy   0.1.6           1.13.5        DEPRECATED - nginx proxy with ldapauth
stable/nginx-lego             0.3.1                         Chart for nginx-ingress-controller and kube-lego
aliyun/gcloud-endpoints       0.1.0                         Develop, deploy, protect and monitor your APIs ...
stable/gcloud-endpoints       0.1.2           1             DEPRECATED Develop, deploy, protect and monitor..

Original post: https://blog.excelsre.com/2023/12/10/build-testing-lab-in-local-dc-with-terraform-and-ansible/
Author: Felix Yang
Published: December 10, 2023