Velero Installation and Usage Guide
1. Introduction to Velero
1.1 Overview
Velero (formerly Heptio Ark) gives you the ability to back up and restore your Kubernetes cluster resources and persistent volumes. You can install Velero on a public cloud or in an on-premises private cloud, and it provides the following capabilities:
- Back up cluster data and restore it in case of cluster failure;
- Migrate cluster resources to other clusters;
- Replicate your production cluster to development and test clusters;
Velero consists of:
- A server side that runs on your cluster;
- A command-line client that runs locally;
1.2 How Velero works
Each Velero operation (on-demand backup, scheduled backup, restore) is a custom resource, defined with a Kubernetes Custom Resource Definition (CRD) and stored in etcd. Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations. You can back up or restore all objects in your cluster, or you can filter objects by type, namespace, or label.
Velero is an ideal choice for Kubernetes disaster recovery, and for taking snapshots of your application state prior to performing system operations on your cluster, such as upgrades.
1.2.1 On-demand backups
The backup operation:
- Uploads a tarball of copied Kubernetes objects into cloud object storage.
- Calls the cloud provider API to make disk snapshots of persistent volumes, if specified.
You can optionally specify backup hooks to be executed during the backup. For example, you might need to tell a database to flush its in-memory buffers to disk before taking a snapshot.
Note that cluster backups are not strictly atomic. If Kubernetes objects are being created or edited at the time of backup, they might not be included in the backup. The odds of capturing inconsistent information are low, but it is possible.
1.2.2 Scheduled backups
The schedule operation allows you to back up your data at recurring intervals. The first backup is performed when the schedule is first created, and subsequent backups happen at the interval specified by the schedule, which is given as a cron expression.
Scheduled backups are saved with the name <SCHEDULE NAME>-<TIMESTAMP>, where <TIMESTAMP> is formatted as YYYYMMDDhhmmss.
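For example (a minimal sketch; the schedule name is a placeholder, and the same command appears again in section 3.1), a daily backup at 07:00 can be created with:
velero schedule create daily-backup --schedule "0 7 * * *"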
1.2.3 Restores
The restore operation allows you to restore all of the objects and persistent volumes from a previously created backup, or only a filtered subset of them. Velero supports multiple namespace remappings; for example, in a single restore, objects from namespace "abc" can be recreated under namespace "def", and objects from namespace "123" under namespace "456".
The default name of a restore is <BACKUP NAME>-<TIMESTAMP>, where <TIMESTAMP> is formatted as YYYYMMDDhhmmss. You can also specify a custom name. A restored object also includes a label with key velero.io/restore-name and value <RESTORE NAME>.
By default, backup storage locations are created in read-write mode. However, during a restore you can configure a backup storage location to be in read-only mode, which disables backup creation and deletion for that storage location. This is useful to ensure that no backups are inadvertently created or deleted during a restore scenario.
You can optionally specify restore hooks to be executed during a restore or after resources are restored. For example, you might need to perform a custom database restore operation before the database application containers start.
1.2.4 Backup workflow
When you run velero backup create test-backup:
- The Velero client makes a call to the Kubernetes API server to create a Backup object;
- The BackupController is notified of the new Backup object and performs validation;
- The BackupController begins the backup process; it collects the data to back up by querying the API server for resources;
- The BackupController makes a call to the object storage service, for example AWS S3, to upload the backup file.
By default, velero backup create makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags; run velero backup create --help to see the available flags. Snapshots can be disabled with the option --snapshot-volumes=false.
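For example (a sketch; the backup name test-backup is taken from the workflow above), a backup without volume snapshots can be created with:
velero backup create test-backup --snapshot-volumes=false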
1.2.5 Backed-up API versions
Velero backs up resources using the Kubernetes API server's preferred version for each group/resource. When restoring a resource, the same API group/version must exist in the target cluster for the restore to succeed.
For example, if the cluster being backed up has a gizmos resource in the things API group, with API versions things/v1alpha1, things/v1beta1, and things/v1, and the server's preferred version is things/v1, then the gizmos backup data is retrieved from the things/v1 API endpoint. When restoring a backup from this cluster, the target cluster must have the things/v1 endpoint in order for the gizmos backup data to be restored. Note that things/v1 does not need to be the preferred version in the target cluster; it only needs to exist.
1.2.6 Setting a backup expiration time
When you create a backup, you can specify a TTL (time to live) by adding the --ttl flag. If Velero detects that an existing backup resource has expired, it deletes the corresponding backup data:
- The backup resource
- The backup file in cloud object storage
- All PersistentVolume snapshots
- All associated restores
The TTL flag allows the user to specify the backup retention period as a value given in hours, minutes, and seconds, in the form --ttl 24h0m0s. If not specified, a default TTL of 30 days is applied.
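For example (a sketch; the backup name is a placeholder), a backup that expires after 24 hours can be created with:
velero backup create my-backup --ttl 24h0m0s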
1.2.7 Object storage sync
Velero treats object storage as the source of truth, and continuously checks that the correct backup resources are always present. If there is a properly formatted backup file in the storage bucket but no corresponding backup resource in the Kubernetes API, Velero synchronizes the information from object storage into Kubernetes. This allows restore functionality to work in a cluster migration scenario, where the original backup objects do not exist in the new cluster. Likewise, if a backup object exists in Kubernetes but not in object storage, it is deleted from Kubernetes, since the backup tarball no longer exists.
1.3 Backup storage locations and volume snapshot locations
Velero has two custom resources, BackupStorageLocation and VolumeSnapshotLocation, that are used to configure where Velero backups and their associated persistent volume snapshots are stored.
BackupStorageLocation: defined as a bucket, a prefix within that bucket under which all Velero data is stored, and a set of additional provider-specific fields. The fields it contains are described in detail in a later section.
VolumeSnapshotLocation: defined entirely by provider-specific fields (for example AWS region, Azure resource group, Portworx snapshot type, etc.). The fields it contains are described in detail in a later section.
Users can pre-configure one or more possible BackupStorageLocations and one or more VolumeSnapshotLocations, and can select at backup creation time the location in which the backup and its associated snapshots should be stored.
This configuration design supports many different usage patterns, including:
- Taking snapshots of more than one kind of persistent volume in a single Velero backup, for example in a cluster that has both EBS volumes and Portworx volumes
- Backing up data to different storage in different regions
- For volume providers that support it (for example Portworx), keeping some snapshots locally on the cluster while sending others to the cloud
1.3.1 Limitations and caveats
- Velero supports only one set of credentials per provider; it is not possible to use different credentials for different locations if the backend storage uses the same provider;
- Volume snapshots are still limited by where your provider allows you to create snapshots, and backing up cluster data with volumes across public cloud providers is not supported. For example, AWS and Azure do not allow you to create a volume snapshot in a different availability zone from where the volume is located; if you try to take a Velero backup using a volume snapshot location that differs from where your cluster's volumes are, the backup will fail.
- Each Velero backup has only one BackupStorageLocation and one VolumeSnapshotLocation per provider; it is not possible (yet) to send a single Velero backup to multiple backup storage locations at the same time, or a single volume snapshot to multiple locations at the same time. However, if backup redundancy across locations is important, you can always set up multiple scheduled backups that differ only in the storage locations used.
- Cross-provider snapshots are not supported. If your cluster has more than one type of volume, for example EBS and Portworx, but a VolumeSnapshotLocation is configured only for EBS, Velero will snapshot only the EBS volumes.
- Restic data is stored under a prefix/subdirectory of the main Velero bucket, and goes into the bucket corresponding to the BackupStorageLocation selected by the user at backup creation time.
1.3.2 Example use cases
(1) Take snapshots of more than one kind of persistent volume in a single Velero backup
Create the snapshot locations:
velero snapshot-location create ebs-us-east-1 \
--provider aws \
--config region=us-east-1
velero snapshot-location create portworx-cloud \
--provider portworx \
--config type=cloud
Create the backup:
velero backup create full-cluster-backup \
--volume-snapshot-locations ebs-us-east-1,portworx-cloud
Since in this example each of the two providers backing our storage (ebs-us-east-1 for aws and portworx-cloud for portworx) has only one possible volume snapshot location configured, Velero does not require them to be specified explicitly when creating the backup:
velero backup create full-cluster-backup
(2) Store backups in different object storage buckets in different regions
Create the backup storage locations:
velero backup-location create default \
--provider aws \
--bucket velero-backups \
--config region=us-east-1
velero backup-location create s3-alt-region \
--provider aws \
--bucket velero-backups-alt \
--config region=us-west-1
Create the backup:
# The Velero server will automatically store backups in the backup storage location named "default" if
# one is not specified when creating the backup. You can alter which backup storage location is used
# by default by setting the --default-backup-storage-location flag on the `velero server` command (run
# by the Velero deployment) to the name of a different backup storage location.
velero backup create full-cluster-backup
Or:
velero backup create full-cluster-alternate-location-backup \
--storage-location s3-alt-region
(3) For volume providers that support it, keep some snapshots local to the cluster and store others in the public cloud
Create the snapshot locations:
velero snapshot-location create portworx-local \
--provider portworx \
--config type=local
velero snapshot-location create portworx-cloud \
--provider portworx \
--config type=cloud
Create the backup:
# Note that since in this example you have two possible volume snapshot locations for the Portworx
# provider, you need to explicitly specify which one to use when creating a backup. Alternately,
# you can set the --default-volume-snapshot-locations flag on the `velero server` command (run by
# the Velero deployment) to specify which location should be used for each provider by default, in
# which case you don't need to specify it when creating a backup.
velero backup create local-snapshot-backup \
--volume-snapshot-locations portworx-local
Or:
velero backup create cloud-snapshot-backup \
--volume-snapshot-locations portworx-cloud
(4) Use the default storage locations
Create the storage locations:
velero backup-location create default \
--provider aws \
--bucket velero-backups \
--config region=us-west-1
velero snapshot-location create ebs-us-west-1 \
--provider aws \
--config region=us-west-1
Create the backup:
# Velero will automatically use your configured backup storage location and volume snapshot location.
# Nothing needs to be specified when creating a backup.
velero backup create full-cluster-backup
2. Installing Velero
2.1 Prerequisites
- Access to a Kubernetes cluster, version 1.10 or later, with DNS and container networking enabled.
- kubectl installed locally.
Velero uses object storage to store backups and associated artifacts. It can also optionally integrate with supported block storage systems to snapshot your persistent volumes.
2.2 Installing Velero
2.2.1 Download the velero binary
Download the desired version of the velero client tarball from **GitHub**. This example uses v1.5.2:
wget https://github.com/vmware-tanzu/velero/releases/download/v1.5.2/velero-v1.5.2-linux-amd64.tar.gz
tar -zxvf velero-v1.5.2-linux-amd64.tar.gz
2.2.2 Install the MinIO object store
(1) Installing MinIO inside the Kubernetes cluster
The MinIO maintainers recommend installing it in a Kubernetes cluster. The tarball extracted in the previous step contains examples/minio/00-minio-deployment.yaml, a manifest for installing MinIO in Kubernetes. Its content is shown below; change the type of the MinIO service to NodePort as shown and then install it:
---
apiVersion: v1
kind: Namespace
metadata:
name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: velero
name: minio
labels:
component: minio
spec:
strategy:
type: Recreate
selector:
matchLabels:
component: minio
template:
metadata:
labels:
component: minio
spec:
volumes:
- name: storage
emptyDir: {}
- name: config
emptyDir: {}
containers:
- name: minio
image: minio/minio:latest
imagePullPolicy: IfNotPresent
args:
- server
- /storage
- --config-dir=/config
env:
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
ports:
- containerPort: 9000
volumeMounts:
- name: storage
mountPath: "/storage"
- name: config
mountPath: "/config"
---
apiVersion: v1
kind: Service
metadata:
namespace: velero
name: minio
labels:
component: minio
spec:
# ClusterIP is recommended for production environments.
# Change to NodePort if needed per documentation,
# but only if you run Minio in a test/trial environment, for example with Minikube.
type: NodePort
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
component: minio
---
apiVersion: batch/v1
kind: Job
metadata:
namespace: velero
name: minio-setup
labels:
component: minio
spec:
template:
metadata:
name: minio-setup
spec:
restartPolicy: OnFailure
volumes:
- name: config
emptyDir: {}
containers:
- name: mc
image: minio/mc:latest
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
volumeMounts:
- name: config
mountPath: "/config"
Run the following command to install MinIO:
kubectl apply -f examples/minio/00-minio-deployment.yaml
Check the pod status and wait until the minio pod is Running.
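For example (a minimal check, using the velero namespace created by the manifest above):
kubectl get pods -n velero -w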
(2) Installing MinIO outside the Kubernetes cluster
If you need to back up and restore data across different Kubernetes clusters and storage pools, the MinIO server should be installed outside the Kubernetes cluster, so that a catastrophic cluster failure cannot affect the backup data. The following installs MinIO outside the cluster from the binary release.
Download the binary on the server where MinIO is to be installed:
wget https://dl.minio.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo mv minio /usr/local/bin/
Check the version information:
minio --version
Prepare the disk for object storage (this step is skipped here).
Manage the MinIO service with systemd. For systems running systemd as the init system, create the user and group that will run the MinIO service:
sudo groupadd --system minio
sudo useradd -s /sbin/nologin --system -g minio minio
Give the minio user ownership of the /data directory (the mount point of the disk prepared above):
sudo chown -R minio:minio /data/
Create a systemd service unit file for MinIO:
vim /etc/systemd/system/minio.service
[Unit]
Description=Minio
Documentation=https://docs.minio.io
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/local/bin/minio
[Service]
WorkingDirectory=/data
User=minio
Group=minio
EnvironmentFile=-/etc/default/minio
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES
# Let systemd restart this service always
Restart=always
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no
[Install]
WantedBy=multi-user.target
Create the MinIO environment file /etc/default/minio:
# Volume to be used for Minio server.
MINIO_VOLUMES="/data"
# Use if you want to run Minio on a custom port.
MINIO_OPTS="--address :9000"
# Access Key of the server.
MINIO_ACCESS_KEY=minio
# Secret key of the server.
MINIO_SECRET_KEY=minio123
MINIO_ACCESS_KEY: an access key of at least 3 characters;
MINIO_SECRET_KEY: a secret key of at least 8 characters.
Reload systemd and start the minio service:
sudo systemctl daemon-reload
sudo systemctl start minio
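Optionally (standard systemd usage, not specific to MinIO), enable the service at boot and verify that it started:
sudo systemctl enable minio
systemctl status minio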
2.2.3 Install the Velero server
(1) When MinIO is installed inside the Kubernetes cluster, install the Velero server as follows:
velero install \
--image velero/velero:v1.3.0 \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--provider aws \
--bucket velero \
--namespace velero \
--secret-file ./credentials-velero \
--velero-pod-cpu-request 200m \
--velero-pod-mem-request 200Mi \
--velero-pod-cpu-limit 1000m \
--velero-pod-mem-limit 1000Mi \
--use-volume-snapshots=false \
--use-restic \
--restic-pod-cpu-request 200m \
--restic-pod-mem-request 200Mi \
--restic-pod-cpu-limit 1000m \
--restic-pod-mem-limit 1000Mi \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
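The install command above reads the object storage credentials from ./credentials-velero. A minimal sketch of that file, assuming the access key and secret key from the MinIO manifest above (minio / minio123):
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123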
(2) When MinIO is installed outside the Kubernetes cluster, install the Velero server as follows:
Because MinIO runs outside the cluster, create a selector-less Service together with a manually defined Endpoints object, so that pods can reach the external MinIO through an in-cluster service name. The manifest is as follows:
---
apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: velero
spec:
  ports:
  - port: 9000
---
kind: Endpoints
apiVersion: v1
metadata:
  name: minio
  namespace: velero
subsets:
- addresses:
  - ip: 192.168.10.149
  ports:
  - port: 9000
Save this as minio-service.yaml and create the service with:
kubectl apply -f minio-service.yaml
When MinIO is installed outside the cluster, you also need to create a bucket named velero manually: open the MinIO web console at minioIp:port (192.168.10.149:9000) and create the velero bucket there.
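Alternatively (a sketch using the MinIO client mc, the same tool used by the setup Job in the in-cluster manifest; the alias name external is arbitrary):
mc config host add external http://192.168.10.149:9000 minio minio123
mc mb -p external/velero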
Then install the Velero server the same way as in the in-cluster MinIO case, and wait for the pods to run successfully.
2.2.4 Uninstalling Velero
If you want to completely uninstall Velero from your cluster, the following commands remove all resources created by velero install:
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
3. Using Velero
3.1 Disaster recovery
Cluster data can be backed up periodically using schedules and read-only backup storage locations, so that it can be restored promptly when the cluster fails or an upgrade goes wrong.
If you need to roll back to an earlier state after something unexpected happens (for example a service outage), use Velero as follows:
(1) After the Velero server is running on your cluster for the first time, set up a daily backup (replacing <SCHEDULE NAME> in the command as appropriate):
velero schedule create <SCHEDULE NAME> --schedule "0 7 * * *"
This creates a Backup object named <SCHEDULE NAME>-<TIMESTAMP>. The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can change the expiration time with the --ttl flag.
(2) When a failure occurs or an upgrade fails, you need to recreate the resources from the backup data.
(3) Update the backup storage location to read-only mode (this prevents backup objects from being created or deleted in the backup storage location during the restore process):
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadOnly"}}'
(4) Create a restore from your most recent Velero backup:
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
(5) When the restore task finishes, change the backup storage location back to read-write mode:
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadWrite"}}'
3.2 Cluster migration
Velero can help you migrate resources from one cluster to another, as long as you point each Velero instance at the same object storage location. This scenario assumes that your clusters are hosted by the same cloud provider. Note that Velero does not natively support migrating persistent volume snapshots across cloud providers; if you want to migrate volume data between cloud platforms, enable restic, which backs up volume contents at the file system level.
(1) (Cluster 1) If you have not yet backed up the cluster, first back up the entire cluster (replacing <BACKUP-NAME> as appropriate):
velero backup create <BACKUP-NAME>
The default backup retention period, expressed as TTL (time to live), is 30 days (720 hours); you can change the expiration time with the --ttl flag.
(2) (Cluster 2) Configure BackupStorageLocations and VolumeSnapshotLocations pointing to the MinIO location used by Cluster 1, and set them to read-only mode; this can be done with the --access-mode=ReadOnly flag when creating the storage location.
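For example (a sketch reusing the in-cluster MinIO service URL and the velero bucket from section 2.2.3; the location name cluster1-minio is a placeholder, and the s3Url must be adjusted for your environment):
velero backup-location create cluster1-minio \
--provider aws \
--bucket velero \
--access-mode=ReadOnly \
--config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000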
(3) (Cluster 2) Make sure the Velero Backup object created on Cluster 1 has been synchronized, i.e. the Velero resources match the backup files in cloud storage:
velero backup describe <BACKUP-NAME>
**Note:** the default sync interval is 1 minute; you can adjust the Velero server's sync interval with the --backup-sync-period flag.
(4) (Cluster 2) Once you have confirmed that the <BACKUP-NAME> backup exists and has completed, you can restore everything with:
velero restore create --from-backup <BACKUP-NAME>
(5) Verify the migration
Check that the restore task on Cluster 2 has completed:
velero restore get
velero restore describe <RESTORE-NAME-FROM-GET-COMMAND>
If you encounter issues, make sure that Velero is running in the same namespace in both clusters.
3.3 Filtering backup objects
When backing up resources, velero supports filtering the objects to be backed up in the following two ways:
- Included backup objects:
--include-namespaces: back up all resources in the given namespace(s), excluding cluster-scoped resources
--include-resources: the resource types to back up
--include-cluster-resources: whether to back up cluster-scoped resources
This option can have three possible values:
true: include all cluster-scoped resources;
false: do not include cluster-scoped resources;
nil ("auto", or not supplied):
when backing up or restoring all namespaces, cluster-scoped resources are included (default: true);
when namespace filtering is used, cluster-scoped resources are not included (default: false);
some namespace-scoped resources still pull in related cluster-scoped resources (for example, backing up a PVC still triggers backing up its PV), unless --include-cluster-resources=false is used to exclude cluster resources explicitly
--selector: back up resources matching the given label selector
When only specific resources are included, all other resources are excluded; if both a wildcard and specific resources are included, the wildcard takes precedence.
(1) Back up a namespace and its objects:
velero backup create <backup-name> --include-namespaces <namespace>
(2) Restore two namespaces and their objects:
velero restore create <backup-name> --include-namespaces <namespace1>,<namespace2>
(3) Back up all deployments in the cluster:
velero backup create <backup-name> --include-resources deployments
(4) Restore all deployments and configmaps in the cluster:
velero restore create <backup-name> --include-resources deployments,configmaps
(5) Back up the deployments in a specific namespace:
velero backup create <backup-name> --include-resources deployments --include-namespaces <namespace>
(6) Back up the entire cluster, including cluster-scoped resources:
velero backup create <backup-name>
(7) Restore only namespace-scoped resources in the cluster:
velero restore create <backup-name> --include-cluster-resources=false
(8) Back up a namespace and include cluster-scoped resources:
velero backup create <backup-name> --include-namespaces <namespace> --include-cluster-resources=true
(9) Back up resources matching a label selector:
velero backup create <backup-name> --selector <key>=<value>
- Excluded backup objects:
--exclude-namespaces: resources in these namespaces are not backed up
--exclude-resources: resources of these types are not backed up
velero.io/exclude-from-backup=true: a resource carrying this label is not backed up, even if it matches the label selector
Excluding specific resources removes them from the backup; wildcard excludes are ignored.
(10) Exclude the kube-system namespace from a cluster backup:
velero backup create <backup-name> --exclude-namespaces kube-system
(11) Exclude two namespaces during a restore:
velero restore create <backup-name> --exclude-namespaces <namespace1>,<namespace2>
(12) Exclude secrets from a backup:
velero backup create <backup-name> --exclude-resources secrets
(13) Exclude secrets and rolebindings from a backup:
velero backup create <backup-name> --exclude-resources secrets,rolebindings
(14) Exclude specific items from a backup
Individual items can be excluded from a backup even if they match the resource/namespace/label selectors defined in the backup spec. They can be marked with the following command:
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
(15) Back up specific kinds of resources in a specific order
Use the --ordered-resources parameter to back up resources of specific kinds in a specific order. You need to specify the resource kind and a list of object names for that kind; object names are separated by commas and take the form "namespace/resourcename", while cluster-scoped resources use the resource name only. Key-value pairs in the map are separated by semicolons, and the resource kind is given in plural form.
velero backup create backupName --include-cluster-resources=true --ordered-resources 'pods=ns1/pod1,ns1/pod2;persistentvolumes=pv4,pv8' --include-namespaces=ns1
velero backup create backupName --ordered-resources 'statefulsets=ns1/sts1,ns1/sts0' --include-namespaces=ns1
3.4 Backup hooks
Velero supports executing preconfigured commands in containers before and after the backup task runs.
When performing a backup, you can specify one or more commands to execute in a container of a pod that is being backed up. The commands can be configured to run before any custom action processing ("pre" hooks), or after all custom actions have completed and any additional items specified by the custom actions have been backed up ("post" hooks).
Note that hooks are not executed within a shell in the container.
There are two ways to specify hooks: as annotations on the pod itself, or in the Spec when defining the Backup task.
3.4.1 Specifying hooks as pod annotations
You can use the following annotations on a pod to make Velero execute a hook when backing up the pod:
- Pre hooks
pre.hook.backup.velero.io/container: the container where the command should be executed; defaults to the first container in the pod. Optional.
pre.hook.backup.velero.io/command: the command to execute; if you need multiple arguments, specify the command as a JSON array, for example: ["/usr/bin/uname", "-a"]
pre.hook.backup.velero.io/on-error: what to do if the command returns a non-zero exit code; defaults to "Fail", valid values are "Fail" and "Continue". Optional.
pre.hook.backup.velero.io/timeout: how long to wait for the command to execute; the hook is considered failed if the command exceeds the timeout; defaults to 30 seconds. Optional.
- Post hooks
post.hook.backup.velero.io/container: the container where the command should be executed; defaults to the first container in the pod. Optional.
post.hook.backup.velero.io/command: the command to execute; if you need multiple arguments, specify the command as a JSON array, for example: ["/usr/bin/uname", "-a"]
post.hook.backup.velero.io/on-error: what to do if the command returns a non-zero exit code; defaults to "Fail", valid values are "Fail" and "Continue". Optional.
post.hook.backup.velero.io/timeout: how long to wait for the command to execute; the hook is considered failed if the command exceeds the timeout; defaults to 30 seconds. Optional.
The following example walks through using both pre and post hooks to freeze a file system. Freezing the file system is useful to ensure that all pending disk I/O operations have completed prior to taking a snapshot.
(1) Create the objects to be backed up:
---
apiVersion: v1
kind: Namespace
metadata:
name: nginx-example
labels:
app: nginx
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nginx-logs
namespace: nginx-example
labels:
app: nginx
spec:
# Optional:
# storageClassName: <YOUR_STORAGE_CLASS_NAME>
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: nginx-example
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
volumes:
- name: nginx-logs
persistentVolumeClaim:
claimName: nginx-logs
containers:
- image: nginx:1.17.6
name: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: "/var/log/nginx"
name: nginx-logs
readOnly: false
- image: ubuntu:bionic
name: fsfreeze
securityContext:
privileged: true
volumeMounts:
- mountPath: "/var/log/nginx"
name: nginx-logs
readOnly: false
command:
- "/bin/bash"
- "-c"
- "sleep infinity"
---
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx
name: my-nginx
namespace: nginx-example
spec:
ports:
- port: 80
targetPort: 80
selector:
app: nginx
type: LoadBalancer
(2) Annotate the pods to be backed up with the hook annotations:
kubectl annotate pod -n nginx-example -l app=nginx \
pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
pre.hook.backup.velero.io/container=fsfreeze \
post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
post.hook.backup.velero.io/container=fsfreeze
(3) Create a backup to test the pre and post hooks. You can check the Velero logs to verify that the hook commands ran and exited without errors.
velero backup create nginx-hook-test
velero backup get nginx-hook-test
velero backup logs nginx-hook-test | grep hookCommand
When you need to run multiple commands in a hook, add the annotation in the following form: separate the arguments of a single command with commas, and separate multiple commands with &&.
pre.hook.backup.velero.io/command='["/bin/bash", "-c", "echo hello > hello.txt && echo goodbye > goodbye.txt"]'
3.4.2 Specifying hooks in the backup spec
See the Backup definition example in the CRD reference section below.
3.5 Restoring backup data
3.5.1 Restoring backup data into a different namespace
Velero can restore resources into a different namespace from the one they were backed up from. Use the --namespace-mappings flag to specify the mapping:
velero restore create RESTORE_NAME \
--from-backup BACKUP_NAME \
--namespace-mappings old-ns-1:new-ns-1,old-ns-2:new-ns-2
3.5.2 What happens when a restore object is deleted
A restore object represents a restore operation. There are two ways to delete one:
velero restore delete: this command deletes the custom resource representing the restore, along with its corresponding log and result files; it does not delete any objects that the restore created in the cluster.
kubectl -n velero delete restore: this command deletes the custom resource representing the restore, but does not delete the log/result files created during the restore from object storage, nor any objects the restore created in the cluster.
3.5.3 Restore command-line options
To see all commands related to restores, run velero restore --help; to see all options associated with a specific command, provide the --help flag to that command. For example, velero restore create --help shows all options associated with the create command.
3.5.4 How existing NodePort services are handled during a restore
By default, auto-assigned node ports are deleted, and the service gets new auto-assigned node ports after the restore.
Explicitly specified node ports are auto-detected using the last-applied-config annotation and are preserved after the restore. NodePorts can be explicitly specified in the service definition as .spec.ports[*].nodePort.
Because of operational complexity, it is not always possible to set nodePorts explicitly on some large clusters; the official Kubernetes documentation states that when nodePorts are explicitly specified, users need to take care to avoid port conflicts.
Clusters that did not explicitly specify nodePorts may still need to restore the original NodePorts after a disaster: the originally auto-assigned node ports are most likely already defined on the load balancer sitting in front of the cluster, and if the nodePorts change, updating all of them on the load balancer after the disaster is yet another complex operation.
Velero has a flag that lets users decide to keep the original nodePorts. The velero restore create sub-command has a --preserve-nodeports flag to protect service nodePorts. This flag preserves the original nodePorts from the backup and can be used as --preserve-nodeports or --preserve-nodeports=true. If this flag is given, Velero does not remove the nodePorts when restoring a Service, but instead tries to use the nodePorts that were written at backup time.
Trying to preserve nodePorts during a restore may cause port conflicts in the following cases:
- If a nodePort from the backup is already allocated on the target cluster, Velero prints an error log like the following and continues the restore operation.
time="2020-11-23T12:58:31+03:00" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:1002" restore=velero/test-with-3-svc-20201123125825 time="2020-11-23T12:58:31+03:00" level=info msg="Restoring Services with original NodePort(s)" cmd=_output/bin/linux/amd64/velero logSource="pkg/restore/service_action.go:61" pluginName=velero restore=velero/test-with-3-svc-20201123125825 time="2020-11-23T12:58:31+03:00" level=info msg="Attempting to restore Service: hello-service" logSource="pkg/restore/restore.go:1107" restore=velero/test-with-3-svc-20201123125825 time="2020-11-23T12:58:31+03:00" level=error msg="error restoring hello-service: Service \"hello-service\" is invalid: spec.ports[0].nodePort: Invalid value: 31536: provided port is already allocated" logSource="pkg/restore/restore.go:1170" restore=velero/test-with-3-svc-20201123125825
- If a nodePort from the backup is not within the nodePort range of the target cluster, Velero prints an error log like the following and continues the restore operation. The default nodePort range in Kubernetes is 30000-32767, but on the example cluster the nodePort range is 20000-22767 and the restore attempts to use nodePort 31536.
time="2020-11-23T13:09:17+03:00" level=info msg="Executing item action for services" logSource="pkg/restore/restore.go:1002" restore=velero/test-with-3-svc-20201123130915 time="2020-11-23T13:09:17+03:00" level=info msg="Restoring Services with original NodePort(s)" cmd=_output/bin/linux/amd64/velero logSource="pkg/restore/service_action.go:61" pluginName=velero restore=velero/test-with-3-svc-20201123130915 time="2020-11-23T13:09:17+03:00" level=info msg="Attempting to restore Service: hello-service" logSource="pkg/restore/restore.go:1107" restore=velero/test-with-3-svc-20201123130915 time="2020-11-23T13:09:17+03:00" level=error msg="error restoring hello-service: Service \"hello-service\" is invalid: spec.ports[0].nodePort: Invalid value: 31536: provided port is not in the valid range. The range of valid ports is 20000-22767" logSource="pkg/restore/restore.go:1170" restore=velero/test-with-3-svc-20201123130915
3.5.5 Changing the StorageClass of PVs/PVCs
Velero can change the storage class of persistent volumes and persistent volume claims during a restore. To define the storage class mapping ahead of time, create a ConfigMap in the Velero namespace like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-storage-class-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restore item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/change-storage-class: RestoreItemAction
data:
  # add 1+ key-value pairs here, where the key is the old
  # storage class name and the value is the new storage
  # class name.
  <old-storage-class>: <new-storage-class>
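A sketch of creating the same mapping from the command line (the storage class names gp2 and standard are hypothetical; substitute your own old and new class names):
kubectl -n velero create configmap change-storage-class-config --from-literal=gp2=standard
kubectl -n velero label configmap change-storage-class-config velero.io/plugin-config="" velero.io/change-storage-class=RestoreItemAction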
Velero can also update the selected-node annotation of a persistent volume claim during a restore; if the selected node does not exist in the cluster, the annotation is removed from the PersistentVolumeClaim. Define the node mapping ahead of time by creating a ConfigMap in the Velero namespace as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-pvc-node-selector-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restore item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/change-pvc-node-selector: RestoreItemAction
data:
  # add 1+ key-value pairs here, where the key is the old
  # node name and the value is the new node name.
  <old-node-name>: <new-node-name>
3.7 Restore hooks
Velero supports restore hooks: custom actions that can be executed before the restored workloads start or after the restore process. They come in two forms:
- InitContainer Restore Hooks: these add init containers to restored pods to perform any necessary setup before the application containers of the restored pod start.
- Exec Restore Hooks: these can be used to execute custom commands or scripts in the containers of a restored Kubernetes pod.
3.7.1 InitContainer Restore Hooks
These hooks add init containers to a pod before it is restored. You can use the init containers to run any setup needed for the pod to resume running from its backed-up state. The init container added by a restore hook is the first init container in the podSpec of the restored pod. If the pod had volumes backed up with restic, an init container named restic-wait, which restores the backed-up volumes, is added after this init container.
Note: this order can be altered by any custom webhooks installed in the cluster.
There are two ways to specify InitContainer Restore Hooks:
(1) As pod annotations
The following annotations can be added to a pod to specify InitContainer Restore Hooks:
init.hook.restore.velero.io/container-image: the container image of the init container to be added
init.hook.restore.velero.io/container-name: the name of the init container to be added
init.hook.restore.velero.io/command: the task or command to be executed in the init container
Example:
Before taking a backup, add the annotations to the pod with the following command:
kubectl annotate pod -n <POD_NAMESPACE> <POD_NAME> \
init.hook.restore.velero.io/container-name=restore-hook \
init.hook.restore.velero.io/container-image=alpine:latest \
init.hook.restore.velero.io/command='["/bin/ash", "-c", "date"]'
With the annotations above, Velero adds the following init container to the pod after the restore:
{
"command": [
"/bin/ash",
"-c",
"date"
],
"image": "alpine:latest",
"imagePullPolicy": "Always",
"name": "restore-hook"
...
}
(2) In the Restore spec
InitContainer Restore Hooks can also be specified in the RestoreSpec. See the documentation on the Restore API type (the CRD reference section below) for how to specify hooks in the restore spec.
Example:
The following is an example of specifying InitContainer Restore Hooks in a RestoreSpec:
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: r2
  namespace: velero
spec:
  backupName: b2
  excludedResources:
  ...
  includedNamespaces:
  - '*'
  hooks:
    resources:
    - name: restore-hook-1
      includedNamespaces:
      - app
      postHooks:
      - init:
          initContainers:
          - name: restore-hook-init1
            image: alpine:latest
            volumeMounts:
            - mountPath: /restores/pvc1-vm
              name: pvc1-vm
            command:
            - /bin/ash
            - -c
            - echo -n "FOOBARBAZ" >> /restores/pvc1-vm/foobarbaz
          - name: restore-hook-init2
            image: alpine:latest
            volumeMounts:
            - mountPath: /restores/pvc2-vm
              name: pvc2-vm
            command:
            - /bin/ash
            - -c
            - echo -n "DEADFEED" >> /restores/pvc2-vm/deadfeed
After the Restore above is created, the following two init containers are added to every pod in the app namespace:
{
"command": [
"/bin/ash",
"-c",
"echo -n \"FOOBARBAZ\" >> /restores/pvc1-vm/foobarbaz"
],
"image": "alpine:latest",
"imagePullPolicy": "Always",
"name": "restore-hook-init1",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/restores/pvc1-vm",
"name": "pvc1-vm"
}
]
...
}
{
"command": [
"/bin/ash",
"-c",
"echo -n \"DEADFEED\" >> /restores/pvc2-vm/deadfeed"
],
"image": "alpine:latest",
"imagePullPolicy": "Always",
"name": "restore-hook-init2",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/restores/pvc2-vm",
"name": "pvc2-vm"
}
]
...
}
3.7.2 Exec Restore Hooks
Once the restore task has started, an Exec Restore Hook executes a command in a container of an already-restored pod. If a pod has the annotation post.hook.restore.velero.io/command, that is the only hook executed in that pod; none of the hooks from the RestoreSpec are executed in that pod.
There are two ways to specify Exec Restore Hooks:
(1) As pod annotations
The following annotations can be added to a pod to specify Exec Restore Hooks:
post.hook.restore.velero.io/container: the name of the container where the hook is executed; defaults to the first container. Optional.
post.hook.restore.velero.io/command: the command to be executed in the container. Required.
post.hook.restore.velero.io/on-error: how to handle execution failures; valid values are Fail and Continue, defaulting to Continue. With Continue, execution failures are logged only; with Fail, no further hooks are executed and the status of the restore will be PartiallyFailed. Optional.
post.hook.restore.velero.io/exec-timeout: how long to wait once execution begins; defaults to 30 seconds. Optional.
post.hook.restore.velero.io/wait-timeout: how long to wait for the container to become ready; this should be long enough for the container to start and for any preceding hooks in the same container to complete. The wait timeout begins when the container is restored and may require time for the image to be pulled and volumes to be mounted; if not set, the restore waits indefinitely. Optional.
Example:
Before taking a backup, add the annotations to the pod with the following command:
kubectl annotate pod -n <POD_NAMESPACE> <POD_NAME> \
post.hook.restore.velero.io/container=postgres \
post.hook.restore.velero.io/command='["/bin/bash", "-c", "psql < /backup/backup.sql"]' \
post.hook.restore.velero.io/wait-timeout=5m \
post.hook.restore.velero.io/exec-timeout=45s \
post.hook.restore.velero.io/on-error=Continue
(2) In the RestoreSpec
Exec Restore Hooks can also be specified in the RestoreSpec; see the documentation on the Restore API type (the CRD reference section below) for how to specify hooks in the RestoreSpec.
The following is an example of specifying multiple Exec Restore Hooks in a RestoreSpec. As this example shows, when using a RestoreSpec you can specify multiple hooks for a single pod.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: r2
  namespace: velero
spec:
  backupName: b2
  excludedResources:
  ...
  includedNamespaces:
  - '*'
  hooks:
    resources:
    - name: restore-hook-1
      includedNamespaces:
      - app
      postHooks:
      - exec:
          execTimeout: 1m
          waitTimeout: 5m
          onError: Fail
          container: postgres
          command:
          - /bin/bash
          - '-c'
          - 'while ! pg_isready; do sleep 1; done'
      - exec:
          container: postgres
          waitTimeout: 6m
          execTimeout: 1m
          command:
          - /bin/bash
          - '-c'
          - 'psql < /backup/backup.sql'
      - exec:
          container: sidecar
          command:
          - /bin/bash
          - '-c'
          - 'date > /start'
After the restore task starts, all hooks are executed sequentially in the containers they match. The order of the hooks executed within a single container follows the order in the RestoreSpec; in this example, pg_isready runs before psql because both apply to the same container and pg_isready is defined first.
If a pod contains multiple Exec Restore Hooks, the containers of all pending hooks will already be running before the hooks execute, but because the hooks run in order, the last one (date) may have to wait several minutes before it executes.
Velero guarantees that no two hook tasks in a single pod execute in parallel, but hooks executed in different pods may run in parallel.
3.8 Restoring a backup into a different namespace
See section 3.5.1 for restoring backup data into a different namespace with the --namespace-mappings flag.
3.9 Container Storage Interface snapshot support in Velero
This feature is under development; the documentation may not be up to date and some functionality may not work as expected.
Integrating Container Storage Interface (CSI) snapshot support into Velero enables Velero to back up and restore CSI-backed volumes using the Kubernetes CSI Snapshot Beta APIs.
By supporting the CSI snapshot APIs, Velero can support any volume provider that has a CSI driver, without requiring a Velero-specific plugin.
The prerequisites for taking CSI snapshots with Velero are:
- The cluster is running Kubernetes 1.17 or later;
- The cluster is running a CSI driver capable of supporting volume snapshots;
- When restoring CSI volume snapshots across clusters, the name of the CSI driver in the destination cluster must be the same as the CSI driver name on the source cluster, to ensure cross-cluster portability of CSI volume snapshots.
Make sure the EnableCSI feature flag is enabled on the Velero server. In addition, the Velero CSI plugin (on Docker Hub) is required for integration with the CSI volume snapshot APIs.
Both can be added with the velero install command:
velero install \
--features=EnableCSI \
--plugins=<object storage plugin>,velero/velero-plugin-for-csi:v0.1.0 \
...
To include the status of the CSI objects associated with a Velero backup in velero backup describe output, run velero client config set features=EnableCSI
The following describes the retention policy that the Velero CSI support applies to volumesnapshot and volumesnapshotcontent objects, and how to modify it:
- The DeletionPolicy on the volumesnapshotclass used by the Velero CSI plugin is set to Retain, so the snapshots taken during a backup are not deleted when the backup is deleted. To remove them, first change the volumesnapshotclass policy to Delete; deleting the volumesnapshot object will then cascade to delete the volumesnapshotcontent object and the snapshot in the storage provider.
- Volumesnapshotcontent objects created during a Velero backup that are not bound to a volumesnapshot object are also discovered via labels and deleted when the backup is deleted.
- To back up CSI-backed PVCs, the Velero CSI plugin chooses the VolumeSnapshotClass in the cluster that has the same driver name and that has the velero.io/csi-volumesnapshot-class label set on it, for example:
velero.io/csi-volumesnapshot-class: "true"
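A minimal sketch of such a labeled VolumeSnapshotClass (the class name and the csi.example.com driver are hypothetical; deletionPolicy is shown as Retain, matching the default behavior described above):
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: velero-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: csi.example.com
deletionPolicy: Retain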
How it works
Velero's CSI support does not rely on the Velero VolumeSnapshotter plugin interface.
Instead, Velero uses a collection of BackupItemAction plugins that act first against PersistentVolumeClaims.
When this BackupItemAction sees a PersistentVolumeClaim pointing to a PersistentVolume backed by a CSI driver, it chooses the VolumeSnapshotClass with the same driver name that carries the velero.io/csi-volumesnapshot-class label, and creates a CSI VolumeSnapshot object with the PersistentVolumeClaim as its source. This VolumeSnapshot object lives in the same namespace as the PersistentVolumeClaim used as its source.
The CSI external snapshot controller then sees the VolumeSnapshot and creates a VolumeSnapshotContent object, a cluster-scoped resource that points to the actual disk-based snapshot in the storage system. The external-snapshotter plugin calls the CSI driver's snapshot method, and the driver calls the storage system's API to generate the snapshot. Once an ID is generated and the storage system marks the snapshot as usable for restore, the VolumeSnapshotContent object's status.snapshotHandle is updated with that ID string and status.readyToUse is set to true.
Velero includes the generated VolumeSnapshot and VolumeSnapshotContent objects in the backup tarball, and also uploads all VolumeSnapshots and VolumeSnapshotContents as JSON files to the object storage system. When Velero synchronizes a backup into a new cluster, the VolumeSnapshotContent objects are synced into the cluster as well, so that Velero can manage backup expiration appropriately.
The DeletionPolicy of a VolumeSnapshotContent is the same as that of its VolumeSnapshotClass. Setting the DeletionPolicy to Retain on the VolumeSnapshotClass preserves the volume snapshot in the storage system for the lifetime of the Velero backup, and prevents the volume snapshot from being deleted from the storage system in the event of a disaster where the namespace containing the VolumeSnapshot object is deleted.
When the Velero backup expires, the VolumeSnapshot objects are deleted and the VolumeSnapshotContent objects are updated to have a DeletionPolicy of Delete, freeing the space on the storage system.
3.10 Changing RBAC permissions
By default, Velero runs with the cluster-admin ClusterRole RBAC policy; this ensures that Velero can back up or restore anything in the cluster. However, cluster-admin access is completely open and gives the Velero components access to everything in the cluster. Depending on your environment and security needs, you should consider whether to configure additional RBAC policies with more restrictive access.
Note: Roles and RoleBindings are namespace-scoped resources, while PersistentVolumes are cluster-scoped. This means that any backup or restore using a restrictive Role and RoleBinding pair can only manage resources that belong to a namespace; with a restrictive Role and RoleBinding, PersistentVolumes cannot be backed up. In that case you can use a separate RBAC policy just for backing up and restoring cluster-scoped resources.
The following is an example of setting up a Role and RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: YOUR_NAMESPACE_HERE
  name: ROLE_NAME_HERE
  labels:
    component: velero
rules:
- apiGroups:
  - velero.io
  verbs:
  - "*"
  resources:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ROLEBINDING_NAME_HERE
subjects:
- kind: ServiceAccount
  name: YOUR_SERVICEACCOUNT_HERE
roleRef:
  kind: Role
  name: ROLE_NAME_HERE
  apiGroup: rbac.authorization.k8s.io
4. Related CRD resources
The following is a list of the API types and their fields; you can modify and configure them as json/yaml through the velero command line.
4.1 Backup
The Backup API requests that the Velero server perform a backup; once a Backup object is created, the Velero server starts the backup process immediately. Backups belong to the API group and version velero.io/v1.
The following is an example definition of a Backup object, including every possible field:
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Backup
# Standard Kubernetes metadata. Required.
metadata:
# Backup name. May be any valid Kubernetes object name. Required.
name: a
# Backup namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the backup. Required.
spec:
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the backup. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the backup. For example, if a
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Individual objects must match this label selector to be included in the backup. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
# The list of locations in which to store volume snapshots created for this backup.
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before this backup is eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# Whether restic should be used to take a backup of all pod volumes by default.
defaultVolumesToRestic: true
# Actions to perform at different times during a backup. The only hook supported is
# executing a command in a container in a pod using the pod exec API. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
-
# Name of the hook. Will be displayed in backup log.
name: my-hook
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- '*'
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
pre:
-
# The type of hook. This must be "exec".
exec:
# The name of the container where the command will be executed. If unspecified, the
# first container in the pod will be used. Optional.
container: my-container
# The command to execute, specified as an array. Required.
command:
- /bin/uname
- -a
# How to handle an error executing the command. Valid values are Fail and Continue.
# Defaults to Fail. Optional.
onError: Fail
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
timeout: 10s
# An array of hooks to run after all custom actions and additional items have been
# processed. Only "exec" hooks are supported.
post:
# Same content as pre above.
# Status about the Backup. Users should not set any data here.
status:
# The version of this Backup. The only version supported is 1.
version: 1
# The date and time when the Backup is eligible for garbage collection.
expiration: null
# The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# An array of any validation errors encountered.
validationErrors: null
# Date/time when the backup started being processed.
startTimestamp: 2019-04-29T15:58:43Z
# Date/time when the backup finished being processed.
completionTimestamp: 2019-04-29T15:58:56Z
# Number of volume snapshots that Velero tried to create for this backup.
volumeSnapshotsAttempted: 2
# Number of volume snapshots that Velero successfully created for this backup.
volumeSnapshotsCompleted: 1
# Number of warnings that were logged by the backup.
warnings: 2
# Number of errors that were logged by the backup.
errors: 0
4.2 Restore
The Restore API creates a restore instance, used to restore the corresponding backed-up data from a backup file. Once it is created, the Velero server starts the restore process immediately. Restores belong to the API group and version velero.io/v1.
The following is an example definition of a Restore object, including every possible field:
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Restore
# Standard Kubernetes metadata. Required.
metadata:
# Restore name. May be any valid Kubernetes object name. Required.
name: a-very-special-backup-0000111122223333
# Restore namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the restore. Required.
spec:
# BackupName is the unique name of the Velero backup to restore from.
backupName: a-very-special-backup
# Array of namespaces to include in the restore. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the restore. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the restore. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the restore. For example, if a
# PersistentVolumeClaim is included in the restore, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Individual objects must match this label selector to be included in the restore. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# NamespaceMapping is a map of source namespace names to
# target namespace names to restore into. Any source namespaces not
# included in the map will be restored into namespaces of the same name.
namespaceMapping:
namespace-backup-from: namespace-to-restore-to
# RestorePVs specifies whether to restore all included PVs
# from snapshot (via the cloudprovider).
restorePVs: true
# ScheduleName is the unique name of the Velero schedule
# to restore from. If specified, and BackupName is empty, Velero will
# restore from the most recent successful backup created from this schedule.
scheduleName: my-scheduled-backup-name
# Actions to perform during or post restore. The only hooks currently supported are
# adding an init container to a pod before it can be restored and executing a command in a
# restored pod's container. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
# Name is the name of this hook.
- name: restore-hook-1
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- ns1
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- ns3
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run during or after restores. Currently only "init" and "exec" hooks
# are supported.
postHooks:
# The type of the hook. This must be "init" or "exec".
- init:
# An array of container specs to be added as init containers to pods to which this hook applies to.
initContainers:
- name: restore-hook-init1
image: alpine:latest
# Mounting volumes from the podSpec to which this hooks applies to.
volumeMounts:
- mountPath: /restores/pvc1-vm
# Volume name from the podSpec
name: pvc1-vm
command:
- /bin/ash
- -c
- echo -n "FOOBARBAZ" >> /restores/pvc1-vm/foobarbaz
- name: restore-hook-init2
image: alpine:latest
# Mounting volumes from the podSpec to which this hooks applies to.
volumeMounts:
- mountPath: /restores/pvc2-vm
# Volume name from the podSpec
name: pvc2-vm
command:
- /bin/ash
- -c
- echo -n "DEADFEED" >> /restores/pvc2-vm/deadfeed
- exec:
# The container name where the hook will be executed. Defaults to the first container.
# Optional.
container: foo
# The command that will be executed in the container. Required.
command:
- /bin/bash
- -c
- "psql < /backup/backup.sql"
# How long to wait for a container to become ready. This should be long enough for the
# container to start plus any preceding hooks in the same container to complete. The wait
# timeout begins when the container is restored and may require time for the image to pull
# and volumes to mount. If not set the restore will wait indefinitely. Optional.
waitTimeout: 5m
# How long to wait once execution begins. Defaults to 30 seconds. Optional.
execTimeout: 1m
# How to handle execution failures. Valid values are `Fail` and `Continue`. Defaults to
# `Continue`. With `Continue` mode, execution failures are logged only. With `Fail` mode,
# no more restore hooks will be executed in any container in any pod and the status of the
# Restore will be `PartiallyFailed`. Optional.
onError: Continue
# RestoreStatus captures the current status of a Velero restore. Users should not set any data here.
status:
# The current phase. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# An array of any validation errors encountered.
validationErrors: null
# Number of warnings that were logged by the restore.
warnings: 2
# Errors is a count of all error messages that were generated
# during execution of the restore. The actual errors are stored in object
# storage.
errors: 0
# FailureReason is an error that caused the entire restore
# to fail.
failureReason:
4.3 Schedule
The Schedule API creates a repeating backup task from a given cron expression. Once it is created, the Velero server starts the backup process, then waits for the next valid point of the given cron expression and runs the backup process repeatedly. Schedules belong to the API group and version velero.io/v1.
The following is an example definition of a Schedule object, including every possible field:
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Schedule
# Standard Kubernetes metadata. Required.
metadata:
# Schedule name. May be any valid Kubernetes object name. Required.
name: a
# Schedule namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the scheduled backup. Required.
spec:
# Schedule is a Cron expression defining when to run the Backup
schedule: 0 7 * * *
# Template is the spec that should be used for each backup triggered by this schedule.
template:
# Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the scheduled backup. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Individual objects must match this label selector to be included in the scheduled backup. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
# The list of locations in which to store volume snapshots created for backups under this schedule.
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before backups created on this schedule are eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# Actions to perform at different times during a backup. The only hook supported is
# executing a command in a container in a pod using the pod exec API. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
-
# Name of the hook. Will be displayed in backup log.
name: my-hook
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- '*'
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
pre:
-
# The type of hook. This must be "exec".
exec:
# The name of the container where the command will be executed. If unspecified, the
# first container in the pod will be used. Optional.
container: my-container
# The command to execute, specified as an array. Required.
command:
- /bin/uname
- -a
# How to handle an error executing the command. Valid values are Fail and Continue.
# Defaults to Fail. Optional.
onError: Fail
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
timeout: 10s
# An array of hooks to run after all custom actions and additional items have been
# processed. Only "exec" hooks are supported.
post:
# Same content as pre above.
status:
# The current phase of the latest scheduled backup. Valid values are New, FailedValidation, InProgress, Completed, PartiallyFailed, Failed.
phase: ""
# Date/time of the last backup for a given schedule
lastBackup:
# An array of any validation errors encountered.
validationErrors:
4.4 BackupStorageLocation
Velero can store backups in multiple locations, declared through a CRD resource in the cluster called BackupStorageLocation; Velero must have at least one BackupStorageLocation. By default, an instance named default is created in the velero namespace to declare the backup storage location; the server's default backup storage location can be changed with the --default-backup-storage-location flag.
The following is a simple example of creating a BackupStorageLocation:
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  backupSyncPeriod: 2m0s
  provider: aws
  objectStorage:
    bucket: myBucket
  config:
    region: us-west-2
    profile: "default"
The configurable parameters are listed below:
key | type | Default | Meaning |
---|---|---|---|
provider | String | Required | The name of the object storage provider plugin. See the plugin documentation for your object storage provider for the appropriate value. |
objectStorage | ObjectStorageLocation | Required | Specification of the object storage for the given provider. |
objectStorage/bucket | String | Required | The storage bucket where backup files are to be uploaded. |
objectStorage/prefix | String | Optional | The directory inside the storage bucket where backup files are to be uploaded. |
objectStorage/caCert | String | Optional | A base64 encoded CA certificate to be used when verifying TLS connections. |
config | map[string]string | None (Optional) | Provider-specific key/value configuration required by the object storage provider. For details, see the plugin documentation for your object storage provider. |
accessMode | String | ReadWrite | How Velero accesses the backup storage location. Valid values are ReadWrite and ReadOnly. |
backupSyncPeriod | metav1.Duration | Optional | How frequently Velero should synchronize backups from object storage. Defaults to the Velero server's backup sync period; set to 0s to disable sync. |
validationFrequency | metav1.Duration | Optional | How frequently Velero should validate the object storage. Defaults to the Velero server's validation frequency (1 minute); set to 0s to disable validation. |
4.5 VolumeSnapshotLocation
A VolumeSnapshotLocation defines where the volume snapshots created for a backup are stored. Velero can be configured to take snapshots of volumes from multiple providers, and allows you to configure a corresponding VolumeSnapshotLocation for each provider, but only one location per provider can be selected at backup time.
Each VolumeSnapshotLocation is described by a CRD resource in the cluster that specifies the provider and the location information; there must be at least one per cloud provider.
The following is an example of creating a VolumeSnapshotLocation:
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: aws-default
  namespace: velero
spec:
  provider: aws
  config:
    region: us-west-2
    profile: "default"
The configurable parameters are as follows:
Key | Type | Default | Meaning |
---|---|---|---|
provider | String | Required | The name of the storage provider that will be used to create the volume snapshots. See the plugin documentation for your volume snapshot provider for the appropriate value. |
config | map[string]string | None (Optional) | Provider-specific configuration passed to the volume provider to create snapshots. For details, see the plugin documentation for your volume snapshot provider. |
[Note]: This document is translated from the official documentation; if anything is unclear, you can add a comment or check the original on the official website.