Creating a k8s Cluster with Kind

0. Preface

k8s official site
kind official site
The previous article covered building a k8s cluster on three virtual machines. The drawback of that approach is its resource cost, which makes it a poor fit for a learning environment. kind is the tool the k8s project offers learners for creating clusters, and it suits exactly this situation of limited resources. In my testing, kind on a 2-core / 4 GB server can comfortably support a three-node k8s cluster.

  • Since downloading kind and kubectl may require getting around network restrictions, I have also placed both binaries in a cloud-drive link so that readers without such access can download them. Please follow for more!

1. Hardware Requirements

  • Linux: Ubuntu, CentOS, or any other distribution will do
  • Memory: there is no detailed official requirement; I am using 4 cores and 5 GB of RAM here
  • Prerequisite: Docker must already be installed on the Linux host

2. Installing Docker

Install Docker online

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
systemctl enable docker
# Start the Docker service:
systemctl start docker
# Verify the installation:
docker version
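
Optionally, a quick smoke test to confirm the daemon can actually run containers (hello-world is the standard test image):

docker run --rm hello-world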

Configure Docker registry mirrors

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [
    "https://ustc-edu-cn.mirror.aliyuncs.com/",
    "https://ccr.ccs.tencentyun.com/",
    "https://docker.m.daocloud.io/",
  ]
}
EOF
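
The daemon only reads daemon.json at startup, so restart Docker after writing the file for the mirror settings to take effect:

systemctl restart docker
# The configured mirrors should now appear under "Registry Mirrors"
docker info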

I have personally verified that these mirrors work.

3. Installing Kind

The Kind official site has installation instructions for Linux, macOS, and Windows.

On Linux:

# For AMD64 / x86_64
# The download is very slow and may well fail; a proxy helps
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64
# Make the downloaded binary executable
chmod +x ./kind
# Move it onto the PATH so kind can be run without the ./ prefix
sudo mv ./kind /usr/local/bin/kind
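
To confirm the install, kind should now run from anywhere:

kind version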

4. Installing kubectl

With kind in place, we also need kubectl, the k8s command-line tool; the download instructions can be found on the k8s official site.

# Also very slow and likely to fail; a proxy helps
curl -LO https://dl.k8s.io/release/v1.24.0/bin/linux/amd64/kubectl
# Make the downloaded binary executable
chmod +x ./kubectl
# Move it onto the PATH so kubectl can be run without the ./ prefix
sudo mv ./kubectl /usr/local/bin/kubectl
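
Likewise, the kubectl client should now respond:

kubectl version --client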

With that, the preparation work is done.

5. Creating a Cluster with kind

Create a single-node k8s cluster

kind create cluster --image=<node image> --name=<cluster name>

If the image pull fails at this step, the domestic mirror you configured most likely does not carry that image; try a different mirror, or pull it directly from Docker Hub through a proxy.
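
Another workaround, since kind reuses images already present in the local Docker cache: pull the node image with docker first (through a proxy or an alternative registry), then point kind at it. A sketch, assuming the upstream kindest/node image:

# Pull the node image into the local Docker image cache
docker pull kindest/node:v1.25.3
# kind will use the locally cached image instead of pulling it again
kind create cluster --image=kindest/node:v1.25.3 --name=my-cluster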

Create a multi-node cluster

For a multi-node cluster, the official approach is to describe the cluster in a YAML file and create everything in one go.

  1. Create kind-cluster-config.yaml

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    containerdConfigPatches:
    - |-
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://ustc-edu-cn.mirror.aliyuncs.com", "https://ccr.ccs.tencentyun.com", "https://docker.m.daocloud.io"]
    nodes:
    - role: control-plane
      extraPortMappings:
      - containerPort: 30080
        hostPort: 30080
        protocol: TCP
      - containerPort: 30081
        hostPort: 30081
        protocol: TCP
      - containerPort: 30082
        hostPort: 30082
        protocol: TCP
      - containerPort: 30083
        hostPort: 30083
        protocol: TCP
      - containerPort: 30084
        hostPort: 30084
        protocol: TCP
      - containerPort: 30085
        hostPort: 30085
        protocol: TCP
      - containerPort: 31272
        hostPort: 31272
        protocol: TCP
    - role: worker
    - role: worker
    

    A note on why ports 30080-30085 and 31272 are exposed here: kind builds its nodes out of Docker containers, so if these host ports are not mapped when the cluster is created, services you start later cannot be reached from outside the host. The ports above are simply the ones I chose for convenience; you can pick your own, as long as they fall within the NodePort range of 30000-32767.

  2. Create the cluster with kind

    kind create cluster --image=registry.cn-hangzhou.aliyuncs.com/morris-share/kindest-node:v1.25.3 --name=my-cluster --config kind-cluster-config.yaml
    
  3. Use kubectl with the kind-my-cluster context to verify the cluster

    kubectl cluster-info --context kind-my-cluster
    
  4. Check the cluster nodes

    kubectl get nodes
    # Output like the following means the cluster was created successfully
    NAME                       STATUS   ROLES           AGE   VERSION
    my-cluster-control-plane   Ready    control-plane   10h   v1.25.3
    my-cluster-worker          Ready    <none>          10h   v1.25.3
    my-cluster-worker2         Ready    <none>          10h   v1.25.3
    
  5. (Optional) containerd configuration: modify /etc/containerd/config.toml. Doing or skipping this step makes little difference, so it can be ignored; a note on applying it inside a kind node follows the config below.

    disabled_plugins = []
    imports = []
    oom_score = 0
    plugin_dir = ""
    required_plugins = []
    root = "/var/lib/containerd"
    state = "/run/containerd"
    version = 2
    
    [cgroup]
      path = ""
    
    [debug]
      address = ""
      format = ""
      gid = 0
      level = ""
      uid = 0
    
    [grpc]
      address = "/run/containerd/containerd.sock"
      gid = 0
      max_recv_message_size = 16777216
      max_send_message_size = 16777216
      tcp_address = ""
      tcp_tls_cert = ""
      tcp_tls_key = ""
      uid = 0
    
    [metrics]
      address = ""
      grpc_histogram = false
    
    [plugins]
    
      [plugins."io.containerd.gc.v1.scheduler"]
        deletion_threshold = 0
        mutation_threshold = 100
        pause_threshold = 0.02
        schedule_delay = "0s"
        startup_delay = "100ms"
    
      [plugins."io.containerd.grpc.v1.cri"]
        disable_apparmor = false
        disable_cgroup = false
        disable_hugetlb_controller = true
        disable_proc_mount = false
        disable_tcp_service = true
        enable_selinux = false
        enable_tls_streaming = false
        ignore_image_defined_volumes = false
        max_concurrent_downloads = 3
        max_container_log_line_size = 16384
        netns_mounts_under_state_dir = false
        restrict_oom_score_adj = false
        sandbox_image = "registry.k8s.io/pause:3.5"
        selinux_category_range = 1024
        stats_collect_period = 10
        stream_idle_timeout = "4h0m0s"
        stream_server_address = "127.0.0.1"
        stream_server_port = "0"
        systemd_cgroup = false
        tolerate_missing_hugetlb_controller = true
        unset_seccomp_profile = ""
    
        [plugins."io.containerd.grpc.v1.cri".cni]
          bin_dir = "/opt/cni/bin"
          conf_dir = "/etc/cni/net.d"
          conf_template = ""
          max_conf_num = 1
    
        [plugins."io.containerd.grpc.v1.cri".containerd]
          default_runtime_name = "runc"
          disable_snapshot_annotations = true
          discard_unpacked_layers = false
          no_pivot = false
          snapshotter = "overlayfs"
    
          [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
            base_runtime_spec = ""
            container_annotations = []
            pod_annotations = []
            privileged_without_host_devices = false
            runtime_engine = ""
            runtime_root = ""
            runtime_type = ""
    
            [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
    
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
    
            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
              base_runtime_spec = ""
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_root = ""
              runtime_type = "io.containerd.runc.v2"
    
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                BinaryName = ""
                CriuImagePath = ""
                CriuPath = ""
                CriuWorkPath = ""
                IoGid = 0
                IoUid = 0
                NoNewKeyring = false
                NoPivotRoot = false
                Root = ""
                ShimCgroup = ""
                SystemdCgroup = true
    
          [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
            base_runtime_spec = ""
            container_annotations = []
            pod_annotations = []
            privileged_without_host_devices = false
            runtime_engine = ""
            runtime_root = ""
            runtime_type = ""
    
            [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
    
        [plugins."io.containerd.grpc.v1.cri".image_decryption]
          key_model = "node"
    
        [plugins."io.containerd.grpc.v1.cri".registry]
          config_path = ""
    
          [plugins."io.containerd.grpc.v1.cri".registry.auths]
    
          [plugins."io.containerd.grpc.v1.cri".registry.configs]
            [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.killer.sh:5000".tls]
              insecure_skip_verify = true
    
          [plugins."io.containerd.grpc.v1.cri".registry.headers]
    
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
            [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
              endpoint = ["https://ustc-edu-cn.mirror.aliyuncs.com/","https://ccr.ccs.tencentyun.com/","https://docker.m.daocloud.io/","https://mirror.gcr.io", "https://docker-mirror.killercoda.com", "https://docker-mirror.killer.sh", "https://registry-1.docker.io"]
    
        [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
          tls_cert_file = ""
          tls_key_file = ""
    
      [plugins."io.containerd.internal.v1.opt"]
        path = "/opt/containerd"
    
      [plugins."io.containerd.internal.v1.restart"]
        interval = "10s"
    
      [plugins."io.containerd.metadata.v1.bolt"]
        content_sharing_policy = "shared"
    
      [plugins."io.containerd.monitor.v1.cgroups"]
        no_prometheus = false
    
      [plugins."io.containerd.runtime.v1.linux"]
        no_shim = false
        runtime = "runc"
        runtime_root = ""
        shim = "containerd-shim"
        shim_debug = false
    
      [plugins."io.containerd.runtime.v2.task"]
        platforms = ["linux/amd64"]
    
      [plugins."io.containerd.service.v1.diff-service"]
        default = ["walking"]
    
      [plugins."io.containerd.snapshotter.v1.aufs"]
        root_path = ""
    
      [plugins."io.containerd.snapshotter.v1.btrfs"]
        root_path = ""
    
      [plugins."io.containerd.snapshotter.v1.devmapper"]
        async_remove = false
        base_image_size = ""
        pool_name = ""
        root_path = ""
    
      [plugins."io.containerd.snapshotter.v1.native"]
        root_path = ""
    
      [plugins."io.containerd.snapshotter.v1.overlayfs"]
        root_path = ""
    
      [plugins."io.containerd.snapshotter.v1.zfs"]
        root_path = ""
    
    [proxy_plugins]
    
    [stream_processors]
    
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
        accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
        args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
        env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
        path = "ctd-decoder"
        returns = "application/vnd.oci.image.layer.v1.tar"
    
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
        accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
        args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
        env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
        path = "ctd-decoder"
        returns = "application/vnd.oci.image.layer.v1.tar+gzip"
    
    [timeouts]
      "io.containerd.timeout.shim.cleanup" = "5s"
      "io.containerd.timeout.shim.load" = "5s"
      "io.containerd.timeout.shim.shutdown" = "3s"
      "io.containerd.timeout.task.state" = "2s"
    
    [ttrpc]
      address = ""
      gid = 0
      uid = 0
    
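    Note that this file lives inside each kind node container (the nodes themselves are Docker containers), so containerd has to be restarted in the node for any change to take effect. A minimal sketch, assuming the control-plane container name from the cluster created above:

    # Open a shell inside the control-plane node container
    docker exec -it my-cluster-control-plane bash
    # ...edit /etc/containerd/config.toml as shown above, then restart containerd
    systemctl restart containerd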

6. Testing

  1. Create an nginx-deployment.yaml file

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deploy
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.22
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-nodeport
    spec:
      type: NodePort
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
        nodePort: 30080  # You can specify a port here; if omitted, one is assigned automatically
    
  2. Apply it

    kubectl apply -f nginx-deployment.yaml
    
  3. Check the results

    kubectl get pod
    
    kubectl get deploy
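
    Because nodePort 30080 was mapped to hostPort 30080 in kind-cluster-config.yaml, the nginx service should also be reachable directly from the host; a quick check:

    # nginx answers on the NodePort that kind forwarded to the host
    curl http://localhost:30080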
    