MongoDB shard cluster deploy

Intro

MongoDB sharding

Sharding is the central concept of a MongoDB sharded cluster: a large dataset spread over multiple MongoDB instances is split into smaller datasets, each stored on a different shard. Sometimes the data in MongoDB grows so large that queries over the full dataset consume a great deal of CPU on a single server. To handle this, MongoDB introduced sharding, which essentially partitions the dataset across multiple MongoDB instances. Physically, a collection is divided into multiple chunks stored on different shards; logically, all shards still act as inseparable parts of a single collection in the database, serving both writes and queries.

A MongoDB sharded cluster has three roles:

  1. shardServer (shard server): a MongoDB instance that stores a subset of the data. In production every shard must be deployed as a replica set, i.e. each shard keeps one or more replicas so its data stays highly available.
  2. configServer (config server): a MongoDB instance that stores the cluster's metadata, essentially the information about which mongod instances hold which shard data.
  3. mongos (router): a MongoDB instance that routes commands sent by clients to the correct shardServer.

Basic steps to configure a MongoDB sharded cluster

DOCS: https://docs.mongodb.com/manual/tutorial/deploy-shard-cluster/

  1. Start each configServer mongod instance and initiate the config server replica set (rs.initiate())
  2. Start each shardServer mongod instance and initiate the shard server replica set (rs.initiate())
  3. Add the shard replica set through mongos (mongos> sh.addShard()); a condensed sketch of all three steps follows below
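As an overview, here is a condensed sketch of the mongo shell commands behind these three steps; the replica set names, hosts, and ports match the deployment described in the rest of this page:

// step 1: on configServer-node1 (mongo --port 27000)
rs.initiate({_id: "configReplSet", configsvr: true,
             members: [{_id: 0, host: "configServer-node1.local:27000"},
                       {_id: 1, host: "configServer-node2.local:27000"}]})
// step 2: on shard-node1 (mongo --port 27001)
rs.initiate({_id: "shardReplSet",
             members: [{_id: 0, host: "shard-node1.local:27001"},
                       {_id: 1, host: "shard-node2.local:27001"}]})
// step 3: on mongos (mongo --port 28000)
sh.addShard("shardReplSet/shard-node1.local:27001,shard-node2.local:27001")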

ENV

/etc/hosts:

192.168.88.10   mongos.local
192.168.88.11   shard-node1.local
192.168.88.12   shard-node2.local
192.168.88.13   configServer-node1.local
192.168.88.14   configServer-node2.local

Five hosts in total: two form the shardServer replica set, two form the configServer replica set, and one runs mongos. In production each replica set needs at least three members.

init system:

~# systemctl disable firewalld
~# systemctl stop firewalld
~# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
~# setenforce 0
~# vim /etc/security/limits.conf
...
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
~# echo "" >> /etc/rc.local
~# echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
~# echo "echo never > /sys/kernel/mm/transparent_hugepage/defrag" >> /etc/rc.local
~# chmod +x /etc/rc.d/rc.local
~# vim /etc/hosts
...
192.168.88.10   mongos.local
192.168.88.11   shard-node1.local
192.168.88.12   shard-node2.local
192.168.88.13   configServer-node1.local
192.168.88.14   configServer-node2.local
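The limits take effect on the next login and the THP settings on the next boot (via rc.local); afterwards they can be verified like this:

~# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
~# ulimit -n
65535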

On 192.168.88.10 (after the install steps in the next section have been completed on this host), distribute the files to the other nodes:

~# for ip in 11 12 13 14; do scp /etc/hosts root@192.168.88.$ip:/etc/hosts ; done
~# for ip in 11 12 13 14; do scp -r /etc/profile.d/mongo.sh root@192.168.88.$ip:/etc/profile.d/mongo.sh ; done
~# for ip in 11 12 13 14; do scp -r /usr/local/mongodb root@192.168.88.$ip:/usr/local/ ; done

(/etc/profile.d/mongo.sh is sourced automatically on each node's next login, so no explicit sourcing step is needed there.)
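A quick sanity check that the binaries landed on every node; the full path is used because a non-interactive ssh shell does not source /etc/profile.d:

~# for ip in 11 12 13 14; do ssh root@192.168.88.$ip "/usr/local/mongodb/bin/mongod --version | head -1" ; done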

install

~# mkdir -p /data/mongodb/{data,log,conf}

~# wget -O /usr/local/src/mongo-3.4.24.tgz https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.4.24.tgz
~# tar -xzvf /usr/local/src/mongo-3.4.24.tgz

~# mv mongodb-linux-x86_64-3.4.24 /usr/local/mongodb

~# echo 'export PATH=$PATH:/usr/local/mongodb/bin' > /etc/profile.d/mongo.sh
~# source /etc/profile.d/mongo.sh

configServer replSet conf(13,14):

[root@configserver-node1 ~]# cat /data/mongodb/conf/mongodb.conf 
systemLog:
  verbosity: 0
  destination: file
  path: /data/mongodb/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/mongodb/data
  directoryPerDB: true
  engine: wiredTiger
net:
  port: 27000
  maxIncomingConnections: 5000
processManagement:
  fork: true
sharding:
  clusterRole: configsvr
replication:
  replSetName: configReplSet

shardServer replSet conf(11,12):

[root@shard-node1 ~]# cat /data/mongodb/conf/mongodb.conf 
systemLog:
  verbosity: 0
  destination: file
  path: /data/mongodb/log/mongodb.log
storage:
  journal:
    enabled: true
  dbPath: /data/mongodb/data
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
      cacheSizeGB: 1
    collectionConfig:
      blockCompressor: snappy
    indexConfig:
      prefixCompression: true
net:
  port: 27001
  maxIncomingConnections: 5000
processManagement:
  fork: true
sharding:
  clusterRole: shardsvr
replication:
  replSetName: shardReplSet

mongos conf(10):

[root@mongos ~]# cat /data/mongodb/conf/mongodb.conf 
systemLog:
  verbosity: 1
  destination: file
  path: /data/mongodb/log/mongodb.log
net:
  port: 28000
processManagement:
  fork: true
sharding:
  configDB: "configReplSet/configServer-node1.local:27000,configServer-node2.local:27000"

init configServer replSet cluster

[root@configserver-node1 ~]# mongod -f /data/mongodb/conf/mongodb.conf
[root@configserver-node2 ~]# mongod -f /data/mongodb/conf/mongodb.conf

[root@configserver-node1 ~]# mongo --port 27000
> rs.initiate({_id: "configReplSet", configsvr: true, members: [{_id: 0, host: "configServer-node1.local:27000"}, {_id: 1, host: "configServer-node2.local:27000"}]});
{ "ok" : 1 }
configReplSet:PRIMARY> rs.status()
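rs.status() should list both members, one PRIMARY and one SECONDARY. A shorter check for whether the current node is primary:

configReplSet:PRIMARY> db.isMaster().ismaster
true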

init shardServer replSet cluster

[root@shard-node1 ~]# mongod -f /data/mongodb/conf/mongodb.conf
[root@shard-node2 ~]# mongod -f /data/mongodb/conf/mongodb.conf

[root@shard-node1 ~]# mongo --port 27001
> rs.initiate({_id: "shardReplSet", members: [{_id: 0, host: "shard-node1.local:27001"}, {_id: 1, host: "shard-node2.local:27001"}]});
{ "ok" : 1 }
shardReplSet:SECONDARY> rs.status()
......
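Immediately after rs.initiate() the prompt may still read SECONDARY, as above; once the election finishes a few seconds later it switches to PRIMARY. The same check as on the config servers applies:

shardReplSet:PRIMARY> db.isMaster().ismaster
true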

start mongos and add shard

[root@mongos ~]# mongos -f /data/mongodb/conf/mongodb.conf 
[root@mongos ~]# mongo --port 28000
mongos> sh.addShard("shardReplSet/shard-node1.local:27001,shard-node2.local:27001");
{ "shardAdded" : "shardReplSet", "ok" : 1 }

Recovering the cluster after a shardServer hostname change

If a shard member's hostname changes, recovery has two parts: force-reconfigure the shard replica set with the new host names, then update the shard's host string stored in the config database through mongos.

[root@shard-node2 ~]# mongo --port 27001
shardReplSet:PRIMARY> config={_id: "shardReplSet", members: [{_id: 0, host: "shard-node1.local:27001"}, {_id: 1, host: "shard-node2.local:27001"}]}
shardReplSet:PRIMARY> rs.reconfig(config, {force: true})
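A quick check that the forced reconfig took effect:

shardReplSet:PRIMARY> rs.conf().members.map(function(m) { return m.host })
[ "shard-node1.local:27001", "shard-node2.local:27001" ]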


[root@mongos ~]# mongo --port 28000
mongos> use config
...
mongos> db
config
mongos> db.shards.find()
{ "_id" : "shardReplSet", "host" : "shardReplSet/shard-node1.local:27001,shard-node2.local:27001", "state" : 1 }

mongos> db.shards.update({_id:"shardReplSet"}, {$set:{host:"shardReplSet/HOSTNAME1:27001,HOSTNAME2:27001"}})
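After editing config.shards by hand, the routing table cached by mongos may be stale; flushing it (or simply restarting mongos) is a sensible follow-up:

mongos> use admin
mongos> db.adminCommand({flushRouterConfig: 1})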