Migrate Cluster from another platform to VKS
To migrate a Cluster from another cloud provider or an on-premises system to VKS, follow the steps in this document.
Prerequisites
Download the helper bash script (velero_helper.sh) and grant it execute permission.
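For example, assuming velero_helper.sh has already been downloaded to your working directory:

```bash
chmod +x ./velero_helper.sh
```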
(Optional) Deploy some services so you can verify the correctness of the migration later. Suppose that, on the source Cluster, I have deployed an nginx service as follows:
Deployment files:
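The original files are not reproduced here, so the following is a minimal sketch of what such a deployment might look like. The names (hello-nginx, hello-config) and the NodePort 30080 are illustrative assumptions:

```yaml
# Minimal illustrative manifests: an nginx Deployment serving a static page,
# plus a NodePort Service exposing it on every node.
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-config
data:
  index.html: "Hello, MyVNGCloud"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-nginx
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          volumeMounts:
            - name: web
              mountPath: /usr/share/nginx/html
      volumes:
        - name: web
          configMap:
            name: hello-config
---
apiVersion: v1
kind: Service
metadata:
  name: hello-nginx
spec:
  type: NodePort
  selector:
    app: hello-nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```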
Now, when you access a node's public IP on the service's NodePort, you will see "Hello, MyVNGCloud".
Prepare the target cluster
On the VKS system, initialize a Cluster according to the instructions here. Make sure the destination cluster's configuration matches the source cluster's configuration.
Attention:
For the migration to be successful, the target Cluster must meet the following requirements:
Sufficient resources, such as the number of nodes and node instance configuration.
Node labels and node taints identical to the old cluster's.
A corresponding or alternative Storage Class.
[Optional] Migrate private resources outside the cluster
Migrating private resources outside the cluster is the process of moving private resources that the source Cluster depends on to a place the destination Cluster can use. For example, you may have private resources such as images, databases, and so on. Before starting the cluster migration, you need to move these resources yourself:
Migrate Container Images: you can migrate images to VNG Cloud Container Registry following the instructions here (see the sketch after this list).
Migrate Databases: you can use the Relational Database Service (RDS) and Object Storage Service (OBS), depending on your needs. After the migration is complete, remember to reconfigure the database connection for your applications on the VKS Cluster.
Migrate Storage: you can use vServer's NFS Server.
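As an illustration of the image migration, the sketch below re-tags an image and pushes it to the new registry. The registry endpoint vcr.vngcloud.vn and the project name my-project are assumptions; substitute the values from your own Container Registry:

```bash
# Pull the image from the old registry, re-tag it for the new one, and push.
docker pull old-registry.example.com/my-app:1.0.0
docker tag old-registry.example.com/my-app:1.0.0 vcr.vngcloud.vn/my-project/my-app:1.0.0
docker login vcr.vngcloud.vn   # authenticate with your registry credentials
docker push vcr.vngcloud.vn/my-project/my-app:1.0.0
```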
Attention:
After you migrate resources outside the Cluster, you need to ensure the target Cluster can connect to these migrated resources.
Install Velero on both source and destination clusters
After you have migrated private resources outside the cluster, you can use the migration tool to back up applications on the source cluster and restore them on the target cluster.
Create a vStorage Project and Container to receive the cluster's backup data, following the instructions here.
Create an S3 key for this vStorage Project, following the instructions here.
For example, I have initialized a vStorage Project and Container with the following information: Region: HCM03, Container: mycontainer, Endpoint: https://hcm03.vstorage.vngcloud.vn.
On both Clusters (source and target)
Create a file named credentials-velero with the following content:
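Velero reads AWS-style credentials; fill in the S3 key created in the previous step (the placeholder values below are to be replaced with your own):

```ini
[default]
aws_access_key_id=<your-s3-access-key-id>
aws_secret_access_key=<your-s3-secret-access-key>
```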
Install Velero CLI:
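For example, on a Linux amd64 machine (v1.13.2 is used here only as an example version; any recent release follows the same pattern):

```bash
# Download the Velero CLI from the official releases and put it on the PATH.
wget https://github.com/vmware-tanzu/velero/releases/download/v1.13.2/velero-v1.13.2-linux-amd64.tar.gz
tar -xzvf velero-v1.13.2-linux-amd64.tar.gz
sudo mv velero-v1.13.2-linux-amd64/velero /usr/local/bin/velero
velero version --client-only
```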
Install Velero on both clusters with the following command:
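A sketch using the example vStorage values above (mycontainer, HCM03, and the HCM03 endpoint) together with the credentials-velero file; the plugin version is an example:

```bash
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket mycontainer \
  --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --use-node-agent \
  --backup-location-config region=HCM03,s3ForcePathStyle="true",s3Url=https://hcm03.vstorage.vngcloud.vn
```

--use-node-agent enables Velero's file-system backup, which the volume annotations in the next steps rely on; --use-volume-snapshots=false disables provider snapshots, since backup data is written to vStorage over S3.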
For Clusters on Amazon Elastic Kubernetes Service (EKS)
At the source Cluster
Annotate the Persistent Volumes that need to be backed up. By default, Velero does not back up volumes; you can run a command like the one below to annotate all volumes for backup.
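A minimal sketch that annotates every pod with the names of its volumes so that Velero's file-system backup picks them up (the downloaded velero_helper.sh may wrap an equivalent step):

```bash
# For every pod in every namespace, list its volume names and annotate the
# pod so Velero's file-system backup includes those volumes.
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  for pod in $(kubectl get pods -n "$ns" -o jsonpath='{.items[*].metadata.name}'); do
    volumes=$(kubectl get pod "$pod" -n "$ns" -o jsonpath='{.spec.volumes[*].name}' | tr ' ' ',')
    if [ -n "$volumes" ]; then
      kubectl annotate pod "$pod" -n "$ns" \
        backup.velero.io/backup-volumes="$volumes" --overwrite
    fi
  done
done
```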
Additionally, you can mark system resources so that they are not backed up, with a command like the following:
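Velero skips any resource labeled velero.io/exclude-from-backup=true. For example, to exclude workloads in kube-system (the resource types listed are illustrative):

```bash
kubectl label pods,deployments,daemonsets,services --all -n kube-system \
  velero.io/exclude-from-backup=true
```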
Perform the backup according to the following syntax:
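A sketch with hypothetical backup names (mybackup-cluster, mybackup-ns) and the default namespace standing in for your application namespaces:

```bash
# Backup 1: cluster-scoped resources (StorageClasses, CRDs, ClusterRoles, ...)
velero backup create mybackup-cluster --include-cluster-resources=true

# Backup 2: namespace-scoped application resources
velero backup create mybackup-ns --include-namespaces default

# Verify that both backups completed
velero backup get
```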
Attention:
You must create two backups: one for cluster-scoped resources and one for namespace-scoped resources.
At the destination Cluster
Create a Storage Class mapping file between the source and destination Clusters:
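Velero applies the mapping through a ConfigMap in the velero namespace labeled for its change-storage-class restore item action. The class names are assumptions: gp2 is a common EKS default, and my-vks-storage-class stands in for a class that exists on your VKS cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: change-storage-class-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    velero.io/change-storage-class: RestoreItemAction
data:
  # <source storage class>: <destination storage class>
  gp2: my-vks-storage-class
```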
Perform the restore according to the following command:
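Restore the two backups created on the source cluster (the hypothetical names from the backup step), cluster-scoped resources first:

```bash
velero restore create --from-backup mybackup-cluster
velero restore create --from-backup mybackup-ns

# Check restore progress and status
velero restore get
```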
For Clusters on Google Kubernetes Engine (GKE)
At the source Cluster
Annotate the Persistent Volumes that need to be backed up and, additionally, label the system resources that should be excluded from the backup. The annotate and label commands are the same as in the EKS section above.
Perform the backup using the same two velero backup create commands as in the EKS section.
Attention:
You must create two backups: one for cluster-scoped resources and one for namespace-scoped resources.
At the destination Cluster
Create a Storage Class mapping file between the source and destination Clusters:
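The same ConfigMap mechanism as in the EKS section applies; only the source class name changes. standard-rwo is a common GKE default, and my-vks-storage-class remains an assumption:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: change-storage-class-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    velero.io/change-storage-class: RestoreItemAction
data:
  standard-rwo: my-vks-storage-class
```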
Perform the restore using the same velero restore create commands as in the EKS section.
Attention:
Google Kubernetes Engine (GKE) does not allow deploying a daemonset on every node. However, Velero only needs the daemonset on the nodes where PVs are mounted. The workaround is to adjust the daemonset's taints and tolerations so that it is deployed only on the nodes with PV mounts.
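A minimal sketch of one way to do this, assuming the nodes that mount PVs carry a hypothetical label pv-mount=true; it pins the DaemonSet with a node selector (tolerations can be added the same way). The DaemonSet is named node-agent in recent Velero versions (restic in older ones):

```bash
# Pin the Velero node-agent DaemonSet to the labeled nodes only
# (a strategic merge patch, so other fields of the pod spec are preserved).
kubectl -n velero patch daemonset node-agent -p '
spec:
  template:
    spec:
      nodeSelector:
        pv-mount: "true"
'
```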
You can change the default resource requests (cpu: 500m and mem: 512M) in the installation step or adjust them when deploying the YAML.
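For example, recent Velero versions accept install-time flags such as --node-agent-pod-cpu-request and --node-agent-pod-mem-request (verify the exact names with velero install --help); alternatively, patch the running DaemonSet. The request values below are illustrative:

```bash
# Lower the node-agent's resource requests on a cluster where Velero is
# already installed (strategic merge patch keeps the rest of the container spec).
kubectl -n velero patch daemonset node-agent -p '
spec:
  template:
    spec:
      containers:
        - name: node-agent
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
'
```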