
Installing Kubernetes

connygpt · 2024-09-09 03:10


1. Set the hostnames (run on each of the three machines, as indicated)

# Set the hostname per your plan [run on the master node]

hostnamectl set-hostname k8s-master

# Set the hostname per your plan [run on node01]

hostnamectl set-hostname k8s-node-1

# Set the hostname per your plan [run on node02]

hostnamectl set-hostname k8s-node-2
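
A quick optional check: confirm the new hostname took effect on each machine.

# Should print the hostname you just set
hostnamectl status | grep "Static hostname"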

2. Add entries to the hosts file (all three machines)

vi /etc/hosts

192.168.50.201 k8s-master
192.168.50.202 k8s-node-1
192.168.50.203 k8s-node-2

Replace these IPs with your own servers' addresses.
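
Optionally verify that every name resolves and answers from each machine:

ping -c 1 k8s-master
ping -c 1 k8s-node-1
ping -c 1 k8s-node-2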

3. Disable firewalld, SELinux, and NetworkManager (all three machines)

systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld

systemctl status NetworkManager
systemctl stop NetworkManager
systemctl disable NetworkManager
## Check the current SELinux state
getenforce
vim /etc/sysconfig/selinux

## Change SELINUX=enforcing to SELINUX=disabled
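
The file edit only takes effect after a reboot. To stop enforcement immediately for the current boot as well:

# Switch SELinux to permissive mode right away (the config file covers later boots)
setenforce 0
getenforce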

4. Configure time synchronization (all three machines)

yum install chrony -y

systemctl start chronyd && systemctl enable chronyd && chronyc sources
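
To confirm the clock is actually synchronized, chrony provides a tracking subcommand; "Leap status : Normal" and a small system-time offset indicate a healthy sync:

chronyc tracking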

5. Enable kernel IP forwarding and bridge filtering (all three machines)

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
EOF

## Apply the settings
sysctl --system

# Verify the br_netfilter and overlay modules are loaded
lsmod |grep br_netfilter
lsmod | grep overlay

Confirm that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl configuration by running:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

6. Configure IPVS forwarding (all three machines)

yum -y install ipset ipvsadm

# Configure module loading for ipvsadm

# Add the modules that need to be loaded

mkdir -p /etc/sysconfig/ipvsadm

cat > /etc/sysconfig/ipvsadm/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod +x /etc/sysconfig/ipvsadm/ipvs.modules
bash /etc/sysconfig/ipvsadm/ipvs.modules
lsmod |grep -e ip_vs -e nf_conntrack
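
Note that loading these modules only makes IPVS available; kube-proxy still defaults to iptables mode. Assuming you want kube-proxy to actually use IPVS, one sketch (run on the master after step 12) is:

# Set mode: "ipvs" in the kube-proxy ConfigMap, then recreate the kube-proxy pods
kubectl -n kube-system edit configmap kube-proxy
kubectl -n kube-system delete pod -l k8s-app=kube-proxy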

7. Disable the swap partition (all three machines)

sed -ri 's/.*swap.*/#&/' /etc/fstab

swapoff -a

grep swap /etc/fstab
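
Both of the following should now report no active swap:

free -h          # the Swap row should read 0B
swapon --show    # should print nothing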

8. Install Docker (all three machines)

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh ./get-docker.sh

# Configure the cgroup driver and registry mirrors:

vi /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://rsbud4vc.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://dockerhub.azk8s.cn",
    "http://hub-mirror.c.163.com"
  ]
}
systemctl enable docker
systemctl start docker
systemctl status docker

docker info|grep systemd
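
If the daemon picked up the configuration, the grep output should include a line like:

 Cgroup Driver: systemd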

9. Install cri-dockerd (all three machines)

# Download and install cri-dockerd (v0.3.9 here; check the releases page for newer builds)
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd-0.3.9-3.el7.x86_64.rpm

rpm -ivh cri-dockerd-0.3.9-3.el7.x86_64.rpm
rm -rf cri-dockerd-0.3.9-3.el7.x86_64.rpm
systemctl daemon-reload
systemctl enable cri-docker
systemctl start cri-docker
systemctl status cri-docker
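
A quick check that the CRI socket the later steps depend on exists:

ls -l /run/cri-dockerd.sock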

10. Install the Kubernetes components (all three machines)

# Configure the Kubernetes yum repository
cat << EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Refresh the yum metadata
yum check-update

## Install kubelet, kubeadm, and kubectl
yum install -y kubelet kubeadm kubectl

# Note: do not start kubelet yet; only enable it to start on boot
systemctl enable kubelet

# Check the installed version of kubeadm and friends
kubeadm version
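
The init command in step 12 assumes v1.28.2. If the repository installed a different version, you can pin the packages instead; check what is available first (the exact version strings below are an assumption):

# List the builds the repo offers
yum list --showduplicates kubeadm
# Pin all three components to the version used in step 12
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2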

11. Integrate kubelet with cri-dockerd (all three machines)

vim /usr/lib/systemd/system/cri-docker.service

#ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d


# Notes:
# The value of each added parameter must match the actual paths of the CNI plugin deployed on the system:
#   --pod-infra-container-image  important: the registry for the pause image; use a mirror reachable from your network, or the pull may fail or time out
#   --network-plugin: the type of network plugin spec to use; CNI here
#   --cni-bin-dir: the search directory for CNI plugin binaries
#   --cni-cache-dir: the cache directory used by the CNI plugins
#   --cni-conf-dir: the directory the CNI plugins load configuration from

# After editing, reload systemd and restart cri-docker.service.
systemctl daemon-reload && systemctl restart cri-docker.service
systemctl status cri-docker


# Configure kubelet
# Run on all nodes:

# Point kubelet at the Unix socket cri-dockerd opens locally, which defaults to /run/cri-dockerd.sock
vim /etc/sysconfig/kubelet
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --container-runtime-endpoint=/run/cri-dockerd.sock"
cat /etc/sysconfig/kubelet
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --container-runtime-endpoint=/run/cri-dockerd.sock"

# Note: you can skip this step and instead pass --cri-socket unix:///run/cri-dockerd.sock to each kubeadm command later

12. Initialize the cluster (master node only)

kubeadm init \
--apiserver-advertise-address=192.168.50.201 \
--kubernetes-version=v1.28.2 \
--image-repository=registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--cri-socket=unix:///run/cri-dockerd.sock

--apiserver-advertise-address  the master node's IP address

--kubernetes-version  the Kubernetes version

--image-repository  the image registry (registry.aliyuncs.com/google_containers)

--pod-network-cidr  the Pod network CIDR

--service-cidr  the Service CIDR

Important: once the master finishes initializing, follow the instructions printed to the console and record the command for joining nodes to the cluster.
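
Part of that console output is the kubeconfig setup that kubectl needs on the master; it is typically:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config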

13. Join the other nodes to the cluster (worker nodes)

Console output after the master node finished initializing:

kubeadm join 192.168.50.201:6443 --token kx0q6a.uhg9et3ooqkaxwpd \
	--discovery-token-ca-cert-hash sha256:f6d1f54f4f887f42446684fd139b96b7021d6020cc79fef334535377107e232a

Append --cri-socket=unix:///run/cri-dockerd.sock:

kubeadm join 192.168.50.201:6443 --token kx0q6a.uhg9et3ooqkaxwpd \
	--discovery-token-ca-cert-hash sha256:f6d1f54f4f887f42446684fd139b96b7021d6020cc79fef334535377107e232a \
	--cri-socket=unix:///run/cri-dockerd.sock
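
Bootstrap tokens expire after 24 hours by default. If the token has lapsed before a node joins, generate a fresh join command on the master (and append the --cri-socket option to its output as above):

kubeadm token create --print-join-command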

14. Deploy the container network (master node)

##wget https://github.com/projectcalico/calico/blob/v3.27.0/manifests/calico.yaml

wget https://docs.projectcalico.org/manifests/calico.yaml

After downloading, edit the Pod network definition (CALICO_IPV4POOL_CIDR) inside so that it matches the --pod-network-cidr passed to kubeadm init earlier:
sed -i -e 's#192.168.0.0/16#10.244.0.0/16#g' calico.yaml

kubectl apply -f calico.yaml


## After a few minutes
kubectl get pods -A -o wide

NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-658d97c59c-nng82   1/1     Running   0          17m     10.244.140.66    k8s-node-2   <none>           <none>
calico-node-55tlg                          1/1     Running   0          17m     192.168.50.202   k8s-node-1   <none>           <none>
calico-node-tb8qs                          1/1     Running   0          17m     192.168.50.203   k8s-node-2   <none>           <none>
calico-node-xm2fh                          1/1     Running   0          9m24s   192.168.50.201   k8s-master   <none>           <none>
coredns-66f779496c-7sjs4                   1/1     Running   0          13h     10.244.235.193   k8s-master   <none>           <none>
coredns-66f779496c-9d67s                   1/1     Running   0          13h     10.244.235.194   k8s-master   <none>           <none>
etcd-k8s-master                            1/1     Running   0          13h     192.168.50.201   k8s-master   <none>           <none>
kube-apiserver-k8s-master                  1/1     Running   0          13h     192.168.50.201   k8s-master   <none>           <none>
kube-controller-manager-k8s-master         1/1     Running   0          13h     192.168.50.201   k8s-master   <none>           <none>
kube-proxy-2zqk5                           1/1     Running   0          13h     192.168.50.201   k8s-master   <none>           <none>
kube-proxy-4mgls                           1/1     Running   0          94m     192.168.50.202   k8s-node-1   <none>           <none>
kube-proxy-c2l4q                           1/1     Running   0          96m     192.168.50.203   k8s-node-2   <none>           <none>
kube-scheduler-k8s-master                  1/1     Running   0          13h     192.168.50.201   k8s-master   <none>           <none>

kubectl get nodes -o wide

NAME         STATUS   ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master   Ready    control-plane   14h    v1.28.2   192.168.50.201   <none>        CentOS Linux 7 (Core)   3.10.0-1160.71.1.el7.x86_64   docker://25.0.3
k8s-node-1   Ready    <none>          113m   v1.28.2   192.168.50.202   <none>        CentOS Linux 7 (Core)   3.10.0-1160.71.1.el7.x86_64   docker://25.0.3
k8s-node-2   Ready    <none>          115m   v1.28.2   192.168.50.203   <none>        CentOS Linux 7 (Core)   3.10.0-1160.71.1.el7.x86_64   docker://25.0.3
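
The <none> under ROLES for the workers is only cosmetic; if you want kubectl get nodes to show a worker role, you can label them:

kubectl label node k8s-node-1 node-role.kubernetes.io/worker=
kubectl label node k8s-node-2 node-role.kubernetes.io/worker=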

15. Install the Dashboard

15.1 Install the Dashboard with Helm

Install Helm

Option 1: script install

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Option 2: binary install

# Download
wget https://get.helm.sh/helm-v3.14.2-linux-amd64.tar.gz
# Extract
tar -zxvf helm-v3.14.2-linux-amd64.tar.gz

# Find the helm binary in the unpacked directory and move it to the desired destination
mv linux-amd64/helm /usr/local/bin/helm
# Add kubernetes-dashboard repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# Deploy a Helm Release named "kubernetes-dashboard" using the kubernetes-dashboard chart
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

To uninstall/delete the kubernetes-dashboard deployment:

helm delete kubernetes-dashboard --namespace kubernetes-dashboard

15.2 Install the Dashboard from YAML (recommended)

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

vim recommended.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

## Changes: add type: NodePort and nodePort: 30000 so the Dashboard can be reached at https://<host-ip>:30000

kubectl apply -f recommended.yaml
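
A quick check that the Dashboard came up, assuming the NodePort from the Service edit above:

kubectl get pods -n kubernetes-dashboard
# then browse to https://<any-node-ip>:30000 (the self-signed certificate triggers a browser warning)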

15.3 Create a Dashboard user

Creating sample user

In this guide, we will find out how to create a new user using the Service Account mechanism of Kubernetes, grant this user admin permissions, and log in to Dashboard using a bearer token tied to this user.

For each of the following snippets for ServiceAccount and ClusterRoleBinding, you should copy them to new manifest files like dashboard-adminuser.yaml and use kubectl apply -f dashboard-adminuser.yaml to create them.

Creating a Service Account

First we create a ServiceAccount named admin-user in the kubernetes-dashboard namespace.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

Creating a ClusterRoleBinding

In most cases after provisioning the cluster using kops, kubeadm or any other popular tool, the ClusterRole cluster-admin already exists in the cluster. We can use it and create only a ClusterRoleBinding for our ServiceAccount. If it does not exist, you need to create the role first and grant the required privileges manually.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Getting a Bearer Token for ServiceAccount

Now we need to find the token we can use to log in. Execute the following command:

kubectl -n kubernetes-dashboard create token admin-user

Check Kubernetes docs for more information about API tokens for a ServiceAccount.

Getting a long-lived Bearer Token for ServiceAccount

We can also create a Secret bound to the ServiceAccount; the token will then be stored in that Secret:

apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"   
type: kubernetes.io/service-account-token  

After the Secret is created, we can run the following command to get the token stored in the Secret:

kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d

Check Kubernetes docs for more information about long-lived API tokens for a ServiceAccount.

Combined manifest: dashboard-adminuser.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token

kubectl apply -f dashboard-adminuser.yaml

## Get the admin-user login token
kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6InhkblNiMEk1aTQzNXRVZjNWRFRBR0ZWUE1YODFwMC1OYjNqd2FWVzI1MW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmYWY1NGM1NC1hZTc5LTRkODgtYTk1ZC1jN2NlOGJhNzA4ZTIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.bYaz4ZUedkhWzEuBB__E_lKOLQZTFNWtc-8tU5NEDMUXRi6kpiS3eFfktF0eCyLrjCq45bHlpc0M-FG2J82WWD-DNZLkSDIXYAjTrCp2fT35ZS3ekPyktszNiA-xGyLTXs6haZiBlis6WVfrbmGMemlpm5sX3_PhpjKfC1KFC1OBo8UtwpGAcCUJEJUBPh361gfOypYN7AOiszv1LeI-aZzw9mlP4cGxt6M6nUDxDAeIunWm4IKdDmqxvgXd6sPxfk2N4JluM-eTlgEtVc_BAjbknsp1jCxzb8w707ijRd2odBisqLG6x6rMFvq4zman0PX3wngIdDrxHZf41pwvvg
