I. Scaling down a shard node

Removing a shard from a MongoDB sharded cluster is relatively straightforward: MongoDB's own balancer migrates all of the data held by the shard being decommissioned onto the remaining shards, and once that shard no longer holds any data it can be taken offline. Every step of the removal is performed through mongos.

1. Check whether the balancer is enabled on the sharded cluster

mongos> sh.getBalancerState()
true
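If sh.getBalancerState() returns false, enable the balancer before starting the drain, otherwise the chunks on the shard will never be moved off. A minimal sketch, run on the same mongos:

-- enable the balancer (sh.startBalancer() has the same effect)
mongos> sh.setBalancerState(true)
mongos> sh.getBalancerState()
true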

2. Issue the removeShard command; the balancer starts migrating data off the shard automatically

-- check the current cluster state
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("6069b489d65227986c7fbe7a")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.16.104.12:27001,172.16.104.13:27001,172.16.104.14:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.16.104.12:27002,172.16.104.13:27002,172.16.104.14:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/172.16.104.12:27003,172.16.104.13:27003,172.16.104.14:27003",  "state" : 1 }             //预下线shard节点
  active mongoses:
        "4.0.22" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                682 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	342
                                shard2	341
                                shard3	341
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "db1",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("bd30b8c8-a9f5-4bdd-917e-2734e4a64ae1"),  "lastMod" : 1 } }
                db1.test1
                        shard key: { "_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	2
                                shard2	2
                                shard3	2
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(1, 0)
                        { "_id" : NumberLong("-6148914691236517204") } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(1, 1)
                        { "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong(0) } on : shard2 Timestamp(1, 2)
                        { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(1, 3)
                        { "_id" : NumberLong("3074457345618258602") } -->> { "_id" : NumberLong("6148914691236517204") } on : shard3 Timestamp(1, 4)
                        { "_id" : NumberLong("6148914691236517204") } -->> { "_id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 5)




-- start the shard removal; the balancer begins migrating its data automatically
mongos> use admin                                           # must be run against the admin database
switched to db admin
mongos> db.runCommand({removeShard:"shard3"})               # issue the removeShard (drain) command
{
	"msg" : "draining started successfully",
	"state" : "started",
	"shard" : "shard3",
	"note" : "you need to drop or movePrimary these databases",
	"dbsToMove" : [ ],
	"ok" : 1,
	"operationTime" : Timestamp(1617550950, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1617550950, 2),
		"signature" : {
			"hash" : BinData(0,"Kb0MDb4APzAlY6/leqWrfUw6KsA="),
			"keyId" : NumberLong("6947282400699219974")
		}
	}
}
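While the shard is draining, running the same removeShard command again reports progress instead of starting a new drain, which is a convenient way to see how many chunks are still left. A sketch of such a follow-up call; the output is abbreviated and the counts are illustrative only:

mongos> db.runCommand({removeShard:"shard3"})               # repeat the command to check progress
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(120),
		"dbs" : NumberLong(0)
	},
	...
}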

3. Wait for the balancer to finish migrating all data off the shard being removed

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("6069b489d65227986c7fbe7a")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.16.104.12:27001,172.16.104.13:27001,172.16.104.14:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.16.104.12:27002,172.16.104.13:27002,172.16.104.14:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/172.16.104.12:27003,172.16.104.13:27003,172.16.104.14:27003",  "state" : 1,  "draining" : true }     //预下线shard节点数据迁移完毕,处于一个可直接下线状态
  active mongoses:
        "4.0.22" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                1025 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	512
                                shard2	512
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "db1",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("bd30b8c8-a9f5-4bdd-917e-2734e4a64ae1"),  "lastMod" : 1 } }
                db1.test1
                        shard key: { "_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	3
                                shard2	3
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(1, 0)
                        { "_id" : NumberLong("-6148914691236517204") } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(1, 1)
                        { "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong(0) } on : shard2 Timestamp(1, 2)
                        { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(1, 3)
                        { "_id" : NumberLong("3074457345618258602") } -->> { "_id" : NumberLong("6148914691236517204") } on : shard2 Timestamp(2, 0)
                        { "_id" : NumberLong("6148914691236517204") } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(3, 0)

mongos>
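The same check can be done directly against the config database: once no chunks reference the draining shard, the migration is complete. A minimal sketch using the shard name from above:

mongos> use config
switched to db config
mongos> db.chunks.count({ "shard" : "shard3" })             # should reach 0 when draining is finished
0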

4. Remove the shard replica set from the cluster for good

mongos> use admin
switched to db admin
mongos> db.runCommand({removeShard:"shard3"})                                       # second call completes the removal and takes the shard offline
{
	"msg" : "removeshard completed successfully",
	"state" : "completed",
	"shard" : "shard3",
	"ok" : 1,
	"operationTime" : Timestamp(1617551668, 3),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1617551668, 3),
		"signature" : {
			"hash" : BinData(0,"zVvEDlDNJajf386J/wat3ePCwYs="),
			"keyId" : NumberLong("6947282400699219974")
		}
	}
}
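In this example dbsToMove was empty, so the second removeShard call could complete immediately. If the drained shard had been the primary shard of one or more databases, those databases would first have to be dropped or moved to another shard; a hedged sketch, using a hypothetical database name mydb:

mongos> use admin
switched to db admin
mongos> db.runCommand({ movePrimary : "mydb", to : "shard1" })   # only needed if dbsToMove is not empty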

5. Check the cluster state after the scale-down

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("6069b489d65227986c7fbe7a")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.16.104.12:27001,172.16.104.13:27001,172.16.104.14:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.16.104.12:27002,172.16.104.13:27002,172.16.104.14:27002",  "state" : 1 }
  active mongoses:
        "4.0.22" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                1025 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	512
                                shard2	512
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "db1",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("bd30b8c8-a9f5-4bdd-917e-2734e4a64ae1"),  "lastMod" : 1 } }
                db1.test1
                        shard key: { "_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	3
                                shard2	3
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(1, 0)
                        { "_id" : NumberLong("-6148914691236517204") } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(1, 1)
                        { "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong(0) } on : shard2 Timestamp(1, 2)
                        { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(1, 3)
                        { "_id" : NumberLong("3074457345618258602") } -->> { "_id" : NumberLong("6148914691236517204") } on : shard2 Timestamp(2, 0)
                        { "_id" : NumberLong("6148914691236517204") } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(3, 0)

mongos>
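As an extra confirmation that shard3 is gone, the shard list can also be fetched with the listShards command; after the removal only shard1 and shard2 should appear in its "shards" array:

mongos> db.adminCommand({ listShards : 1 })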

II. Scaling up by adding a shard node

Adding a shard to a MongoDB sharded cluster is also fairly simple. First deploy the replica set for the new shard, then add it to the cluster through mongos; the balancer automatically migrates data onto the new shard until the cluster is balanced again.

1. Deploy the replica set for the new shard

1) Create the directory layout

# mkdir -p /data/mongodb40/shard3/{data,logs,conf}

2) Copy the cluster keyfile

# cp -r /data/mongodb40/shard2/conf/KeyFile.file  /data/mongodb40/shard3/conf
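The keyfile must be byte-identical on every member of the cluster and must not be group- or world-readable, otherwise mongod refuses to start. A sketch of distributing it to the other two hosts, assuming the same directory layout already exists there:

# chmod 600 /data/mongodb40/shard3/conf/KeyFile.file
# scp /data/mongodb40/shard3/conf/KeyFile.file 172.16.104.13:/data/mongodb40/shard3/conf/
# scp /data/mongodb40/shard3/conf/KeyFile.file 172.16.104.14:/data/mongodb40/shard3/conf/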

3) Prepare the configuration file

-- configuration file

[root@sdw3 ~]# cat /data/mongodb40/shard3/conf/config.conf
systemLog:
   verbosity: 0
   quiet: false
   traceAllExceptions: false
   path: "/data/mongodb40/shard3/logs/config.log"
   logAppend: true
   logRotate: reopen
   destination: file
processManagement:
   fork: true
   pidFilePath: "/data/mongodb40/shard3/conf/config.pid"
net:
   port: 27003                                                          # port
   bindIp: 0.0.0.0
   maxIncomingConnections: 2000
   wireObjectCheck: true
security:
   keyFile: "/data/mongodb40/shard3/conf/KeyFile.file"
   clusterAuthMode: keyFile
   authorization: enabled
   #authorization: disabled
storage:
   dbPath: /data/mongodb40/shard3/data
   #indexBuildRetry: true
   journal:
      enabled: true
      commitIntervalMs: 100
   directoryPerDB: true
   syncPeriodSecs: 60
   engine: wiredTiger
   wiredTiger:
      engineConfig:
         cacheSizeGB: 256
         #journalCompressor: snappy
         directoryForIndexes: false
      collectionConfig:
         blockCompressor: snappy
      indexConfig:
         prefixCompression: false

operationProfiling:
   slowOpThresholdMs: 100
   mode: slowOp
replication:
   #oplogSizeMB: 1024
   replSetName: shard3
   secondaryIndexPrefetch: all
   enableMajorityReadConcern: false
sharding:
   clusterRole: shardsvr                                                # run this replica set as a shard

4) Start every node of the shard3 replica set and initialize the replica set

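A sketch of bringing the three mongod processes up with the configuration file above and opening a shell for the initialization that follows; run the mongod command on each of the three hosts, then connect locally to any one of them:

# mongod -f /data/mongodb40/shard3/conf/config.conf          # run on 172.16.104.12/13/14
# mongo --port 27003                                         # connect to one node and run the commands below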
> cnf = {"_id":"shard3","members":[{"_id":1,"host":"172.16.104.12:27003","priority":1},{"_id":2, "host":"172.16.104.13:27003","priority":1},{"_id":3, "host":"172.16.104.14:27003","priority":10}]}
> rs.initiate(cnf)

5) Create the superuser

shard3:PRIMARY> use admin
shard3:PRIMARY> db.createUser({"user":"root","pwd":"123","roles":[{role:"root",db:"admin"}]})
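Before adding the new replica set to the cluster it is worth confirming that authentication works and that all three members are healthy; a minimal check using the account just created:

shard3:PRIMARY> db.auth("root","123")
1
shard3:PRIMARY> rs.status().members.map(function(m){ return m.name + "  " + m.stateStr; })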

2. Check the current cluster state

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("6069b489d65227986c7fbe7a")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.16.104.12:27001,172.16.104.13:27001,172.16.104.14:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.16.104.12:27002,172.16.104.13:27002,172.16.104.14:27002",  "state" : 1 }
  active mongoses:
        "4.0.22" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                1025 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	512
                                shard2	512
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "db1",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("bd30b8c8-a9f5-4bdd-917e-2734e4a64ae1"),  "lastMod" : 1 } }
                db1.test1
                        shard key: { "_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	3
                                shard2	3
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(1, 0)
                        { "_id" : NumberLong("-6148914691236517204") } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(1, 1)
                        { "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong(0) } on : shard2 Timestamp(1, 2)
                        { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(1, 3)
                        { "_id" : NumberLong("3074457345618258602") } -->> { "_id" : NumberLong("6148914691236517204") } on : shard2 Timestamp(2, 0)
                        { "_id" : NumberLong("6148914691236517204") } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(3, 0)

mongos>

3. Add the new shard replica set through mongos

mongos> use admin
switched to db admin
mongos> sh.addShard("shard3/172.16.104.12:27003,172.16.104.13:27003,172.16.104.14:27003")
{
	"shardAdded" : "shard3",
	"ok" : 1,
	"operationTime" : Timestamp(1617552097, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1617552097, 2),
		"signature" : {
			"hash" : BinData(0,"wVW+P1UwUk4ZW3Cti2bcIGO3djc="),
			"keyId" : NumberLong("6947282400699219974")
		}
	}
}
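The new shard is also recorded in the config database, which gives a quick way to double-check the replica set name and host list that were registered:

mongos> use config
switched to db config
mongos> db.shards.find({}, { _id : 1, host : 1 })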

4. The balancer now starts migrating data from the existing shards onto the new shard; after a while, run sh.status() again to confirm that the migration has finished.

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("6069b489d65227986c7fbe7a")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.16.104.12:27001,172.16.104.13:27001,172.16.104.14:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.16.104.12:27002,172.16.104.13:27002,172.16.104.14:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/172.16.104.12:27003,172.16.104.13:27003,172.16.104.14:27003",  "state" : 1 }
  active mongoses:
        "4.0.22" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                1367 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	342
                                shard2	342
                                shard3	340
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "db1",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("bd30b8c8-a9f5-4bdd-917e-2734e4a64ae1"),  "lastMod" : 1 } }
                db1.test1
                        shard key: { "_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	2
                                shard2	2
                                shard3	2
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6148914691236517204") } on : shard3 Timestamp(4, 0)
                        { "_id" : NumberLong("-6148914691236517204") } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(4, 1)
                        { "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong(0) } on : shard3 Timestamp(5, 0)
                        { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(5, 1)
                        { "_id" : NumberLong("3074457345618258602") } -->> { "_id" : NumberLong("6148914691236517204") } on : shard2 Timestamp(2, 0)
                        { "_id" : NumberLong("6148914691236517204") } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(3, 0)

mongos>
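Instead of reading through the full sh.status() output, the per-shard chunk distribution of a single collection can also be computed directly from the config database (in 4.0 the config.chunks documents are keyed by the ns field); a minimal sketch for the db1.test1 collection used above:

mongos> use config
switched to db config
mongos> db.chunks.aggregate([ { $match : { ns : "db1.test1" } }, { $group : { _id : "$shard", chunks : { $sum : 1 } } } ])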