Welcome 微信登录

首页 / 数据库 / MySQL / MongoDB2.6部署副本集+分区

部署规划操作系统:RedHat6.4 64位
 ConfigRoute分片1分片2分片3
使用端口2800027017270182701927020
IP地址     
192.168.1.30/etc/config.conf/etc/route.conf/etc/sd1.conf(主)/etc/sd2.conf(仲裁)/etc/sd3.conf(备)
192.168.1.52/etc/config.conf/etc/route.conf/etc/sd1.conf(备)/etc/sd2.conf(主)/etc/sd3.conf(仲裁)
192.168.1.108/etc/config.conf/etc/route.conf/etc/sd1.conf(仲裁)/etc/sd2.conf(备)/etc/sd3.conf(主)
一、在三个节点上创建如下目录,做测试的话建议确保在/目录有15G左右的剩余空间[root@orcl ~]# mkdir -p /var/config[root@orcl ~]# mkdir -p /var/sd1[root@orcl ~]# mkdir -p /var/sd2[root@orcl ~]# mkdir -p /var/sd3二、查看配置文件[root@orcl ~]# cat /etc/config.conf port=28000dbpath=/var/configlogpath=/var/config/config.loglogappend=truefork=trueconfigsvr=true[root@orcl ~]# cat /etc/route.conf port=27017configdb=192.168.1.30:28000,192.168.1.52:28000,192.168.1.108:28000logpath=/var/log/mongos.loglogappend=truefork=true[root@orcl ~]# cat /etc/sd1.conf port = 27018dbpath=/var/sd1logpath =/var/sd1/shard1.loglogappend =trueshardsvr =truereplSet =set1fork =true[root@orcl ~]# cat /etc/sd2.conf  port = 27019dbpath=/var/sd2logpath =/var/sd2/shard2.loglogappend =trueshardsvr =truereplSet =set2fork =true[root@orcl ~]# cat /etc/sd3.conf   port = 27020dbpath=/var/sd3logpath =/var/sd3/shard1.loglogappend =trueshardsvr =truereplSet =set3fork =true三、在三个节点上同步时间四、在三个节点上启动用config服务器节点1[root@orcl ~]# mongod -f /etc/config.conf about to fork child process, waiting until server is ready for connections.forked process: 3472child process started successfully, parent exiting[root@orcl ~]# ps -ef |grep mongoroot      3472     1  1 19:15 ?        00:00:01 mongod -f /etc/config.confroot      3499  2858  0 19:17 pts/0    00:00:00 grep mongo[root@orcl ~]# netstat -anltp|grep 28000tcp        0      0 0.0.0.0:28000               0.0.0.0:*                   LISTEN      3472/mongod 节点2[root@localhost ~]# mongod -f /etc/config.confabout to fork child process, waiting until server is ready for connections.forked process: 2998child process started successfully, parent exiting[root@localhost ~]# ps -ef |grep mongoroot      2998     1  8 19:15 ?        00:00:08 mongod -f /etc/config.confroot      3014  2546  0 19:17 pts/0    00:00:00 grep mongo[root@localhost ~]# netstat -anltp|grep 28000tcp        0      0 0.0.0.0:28000               0.0.0.0:*                   LISTEN      2998/mongod  节点3[root@db10g ~]# mongod -f /etc/config.confabout to fork child process, waiting until server is ready for connections.forked process: 4086child process started successfully, parent exiting[root@db10g ~]# ps -ef |grep mongoroot      4086     1  2 19:25 ?        00:00:00 mongod -f /etc/config.confroot      4100  3786  0 19:25 pts/0    00:00:00 grep mongo[root@db10g ~]# netstat -anltp|grep 28000tcp        0      0 0.0.0.0:28000               0.0.0.0:*                   LISTEN      4086/mongod  五、在三个节点上启动用路由服务器节点1[root@orcl ~]#  mongos -f /etc/route.confabout to fork child process, waiting until server is ready for connections.forked process: 3575child process started successfully, parent exiting[root@orcl ~]# netstat -anltp|grep 2701tcp        0      0 0.0.0.0:27017               0.0.0.0:*                   LISTEN      3575/mongos   节点2[root@localhost ~]#  mongos -f /etc/route.confabout to fork child process, waiting until server is ready for connections.forked process: 3057child process started successfully, parent exiting[root@localhost ~]# netstat -anltp|grep 2701tcp        0      0 0.0.0.0:27017   节点3[root@db10g ~]# mongos -f /etc/route.confabout to fork child process, waiting until server is ready for connections.forked process: 4108child process started successfully, parent exiting[root@db10g ~]# netstat -anltp|grep 27017tcp        0      0 0.0.0.0:27017               0.0.0.0:*                   LISTEN      4108/mongos 六、在三个节点启用shardmongod -f /etc/sd1.confmongod -f /etc/sd2.confmongod -f /etc/sd3.conf节点1[root@orcl ~]# ps -ef |grep mongoroot      3472     1  2 19:15 ?        00:02:18 mongod -f /etc/config.confroot      3575     1  0 19:28 ?        00:00:48 mongos -f /etc/route.confroot      4135     1  0 20:52 ?        00:00:07 mongod -f /etc/sd1.confroot      4205     1  0 20:55 ?        00:00:05 mongod -f /etc/sd2.confroot      4265     1  0 20:58 ?        00:00:04 mongod -f /etc/sd3.conf节点2[root@localhost ~]# ps -ef |grep mongoroot      2998     1  1 19:15 ?        00:02:02 mongod -f /etc/config.confroot      3057     1  1 19:28 ?        00:01:02 mongos -f /etc/route.confroot      3277     1  1 20:52 ?        00:00:20 mongod -f /etc/sd1.confroot      3334     1  6 20:56 ?        00:00:52 mongod -f /etc/sd2.confroot      3470     1  1 21:01 ?        00:00:07 mongod -f /etc/sd3.conf节点3[root@db10g data]# ps -ef |grep mongoroot      4086     1  1 19:25 ?        00:01:58 mongod -f /etc/config.confroot      4108     1  0 19:27 ?        00:00:55 mongos -f /etc/route.confroot      4592     1  0 20:54 ?        00:00:07 mongod -f /etc/sd1.confroot      4646     1  3 20:56 ?        00:00:30 mongod -f /etc/sd2.confroot      4763     1  4 21:04 ?        00:00:12 mongod -f /etc/sd3.conf七、配置副本集192.168.1.30[root@orcl ~]# mongo --port 27018MongoDB shell version: 2.6.4connecting to: 127.0.0.1:27018/test> use adminswitched to db admin> rs1={_id:"set1",members:[{_id:0,host:"192.168.1.30:27018",priority:2},{_id:1,host:"192.168.1.52:27018"},{_id:2,host:"192.168.1.108:27018",arbiterOnly:true}]}{        "_id" : "set1",        "members" : [                {                        "_id" : 0,                        "host" : "192.168.1.30:27018",                        "priority" : 2                },                {                        "_id" : 1,                        "host" : "192.168.1.52:27018"                },                {                        "_id" : 2,                        "host" : "192.168.1.108:27018",                        "arbiterOnly" : true                }        ]}> rs.initiate(rs1){        "info" : "Config now saved locally.  Should come online in about a minute.",        "ok" : 1}192.168.1.52[root@orcl ~]# mongo --port 27019MongoDB shell version: 2.6.4connecting to: 127.0.0.1:27019/test> use adminswitched to db admin> rs2={_id:"set2",members:[{_id:0,host:"192.168.1.52:27019",priority:2},{_id:1,host:"192.168.1.108:27019"},{_id:2,host:"192.168.1.30:27019",arbiterOnly:true}]}{        "_id" : "set2",        "members" : [                {                        "_id" : 0,                        "host" : "192.168.1.52:27019",                        "priority" : 2                },                {                        "_id" : 1,                        "host" : "192.168.1.108:27019"                },                {                        "_id" : 2,                        "host" : "192.168.1.30:27019",                        "arbiterOnly" : true                }        ]}> rs.initiate(rs2);{        "info" : "Config now saved locally.  Should come online in about a minute.",        "ok" : 1}192.168.1.108[root@localhost sd3]# mongo --port 27020MongoDB shell version: 2.6.4connecting to: 127.0.0.1:27020/test> use adminswitched to db admin> rs3={_id:"set3",members:[{_id:0,host:"192.168.1.108:27020",priority:2},{_id:1,host:"192.168.1.30:27020"},{_id:2,host:"192.168.1.52:27020",arbiterOnly:true}]}{        "_id" : "set3",        "members" : [                {                        "_id" : 0,                        "host" : "192.168.1.108:27020",                        "priority" : 2                },                {                        "_id" : 1,                        "host" : "192.168.1.30:27020"                },                {                        "_id" : 2,                        "host" : "192.168.1.52:27020",                        "arbiterOnly" : true                }        ]}> rs.initiate(rs3);{        "info" : "Config now saved locally.  Should come online in about a minute.",        "ok" : 1}八、添加分片在三个节点上任一个节点都可以操作192.168.1.30[root@orcl sd3]# mongo --port 27017MongoDB shell version: 2.6.4connecting to: 127.0.0.1:27017/testmongos> use adminswitched to db adminmongos> db.runCommand({addshard:"set1/192.168.1.30:27018,192.168.1.52:27018,192.168.1.108:27018"}){ "shardAdded" : "set1", "ok" : 1 }mongos> db.runCommand({addshard:"set2/192.168.1.30:27019,192.168.1.52:27019,192.168.1.108:27019"}){ "shardAdded" : "set2", "ok" : 1 }mongos> db.runCommand({addshard:"set3/192.168.1.30:27020,192.168.1.52:27020,192.168.1.108:27020"}){ "shardAdded" : "set3", "ok" : 1 }九、查看分片信息mongos> db.runCommand({listshards : 1}){        "shards" : [                {                        "_id" : "set1",                        "host" : "set1/192.168.1.30:27018,192.168.1.52:27018"                },                {                        "_id" : "set2",                        "host" : "set2/192.168.1.108:27019,192.168.1.52:27019"                },                {                        "_id" : "set3",                        "host" : "set3/192.168.1.108:27020,192.168.1.30:27020"                }        ],        "ok" : 1}十、删除分片mongos> db.runCommand({removeshard:"set3"}){        "msg" : "draining started successfully",        "state" : "started",        "shard" : "set3",        "ok" : 1}十一、管理分片mongos> use configswitched to db configmongos> db.shards.find();{ "_id" : "set1", "host" : "set1/192.168.1.30:27018,192.168.1.52:27018" }{ "_id" : "set2", "host" : "set2/192.168.1.108:27019,192.168.1.52:27019" }{ "_id" : "set3", "host" : "set3/192.168.1.108:27020,192.168.1.30:27020" }十二、对要分片的库和表声明切换到admin库mongos> use admin      声明test库允许分片mongos> db.runCommand({enablesharding:"test"}){ "ok" : 1 } 声明users表要分片   mongos> db.runCommand({shardcollection:"test.lineqi",key:{id:"hashed"}}){ "collectionsharded" : "test.lineqi", "ok" : 1 }十三、测试脚本切换到testmongos>use testmongos> for (var i = 1; i <= 100000; i++) db.lineqi.save({id:i,name:"12345678",sex:"male",age:27,value:"test"}); WriteResult({ "nInserted" : 1 })十四、测试结果查看分片信息mongos> use configswitched to db configmongos> db.chunks.find();{ "_id" : "test.users-id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("55ddb3a70f613da70e8ce303"), "ns" : "test.users", "min" : { "id" : { "$minKey" : 1 } }, "max" : { "id" : 1 }, "shard" : "set1" }{ "_id" : "test.users-id_1.0", "lastmod" : Timestamp(3, 1), "lastmodEpoch" : ObjectId("55ddb3a70f613da70e8ce303"), "ns" : "test.users", "min" : { "id" : 1 }, "max" : { "id" : 4752 }, "shard" : "set2" }{ "_id" : "test.users-id_4752.0", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("55ddb3a70f613da70e8ce303"), "ns" : "test.users", "min" : { "id" : 4752 }, "max" : { "id" : { "$maxKey" : 1 } }, "shard" : "set3" }{ "_id" : "test.lineqi-id_MinKey", "lastmod" : Timestamp(3, 2), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : { "$minKey" : 1 } }, "max" : { "id" : NumberLong("-6148914691236517204") }, "shard" : "set2" }{ "_id" : "test.lineqi-id_-3074457345618258602", "lastmod" : Timestamp(3, 4), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("-3074457345618258602") }, "max" : { "id" : NumberLong(0) }, "shard" : "set3" }{ "_id" : "test.lineqi-id_3074457345618258602", "lastmod" : Timestamp(3, 6), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("3074457345618258602") }, "max" : { "id" : NumberLong("6148914691236517204") }, "shard" : "set1" }{ "_id" : "test.lineqi-id_-6148914691236517204", "lastmod" : Timestamp(3, 3), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("-6148914691236517204") }, "max" : { "id" : NumberLong("-3074457345618258602") }, "shard" : "set2" }{ "_id" : "test.lineqi-id_0", "lastmod" : Timestamp(3, 5), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong(0) }, "max" : { "id" : NumberLong("3074457345618258602") }, "shard" : "set3" }{ "_id" : "test.lineqi-id_6148914691236517204", "lastmod" : Timestamp(3, 7), "lastmodEpoch" : ObjectId("55ddb7460f613da70e8ce380"), "ns" : "test.lineqi", "min" : { "id" : NumberLong("6148914691236517204") }, "max" : { "id" : { "$maxKey" : 1 } }, "shard" : "set1" }查看users表的存储信息mongos>use testmongos> db.lineqi.stats();{        "sharded" : true,        "systemFlags" : 1,        "userFlags" : 1,        "ns" : "test.lineqi",        "count" : 100000,        "numExtents" : 18,        "size" : 11200000,        "storageSize" : 33546240,        "totalIndexSize" : 8086064,        "indexSizes" : {                "_id_" : 3262224,                "id_hashed" : 4823840        },        "avgObjSize" : 112,        "nindexes" : 2,        "nchunks" : 6,        "shards" : {                "set1" : {                        "ns" : "test.lineqi",                        "count" : 33102,                        "size" : 3707424,                        "avgObjSize" : 112,                        "storageSize" : 11182080,                        "numExtents" : 6,                        "nindexes" : 2,                        "lastExtentSize" : 8388608,                        "paddingFactor" : 1,                        "systemFlags" : 1,                        "userFlags" : 1,                        "totalIndexSize" : 2649024,                        "indexSizes" : {                                "_id_" : 1079232,                                "id_hashed" : 1569792                        },                        "ok" : 1                },                "set2" : {                        "ns" : "test.lineqi",                        "count" : 33755,                        "size" : 3780560,                        "avgObjSize" : 112,                        "storageSize" : 11182080,                        "numExtents" : 6,                        "nindexes" : 2,                        "lastExtentSize" : 8388608,                        "paddingFactor" : 1,                        "systemFlags" : 1,                        "userFlags" : 1,                        "totalIndexSize" : 2755312,                        "indexSizes" : {                                "_id_" : 1103760,                                "id_hashed" : 1651552                        },                        "ok" : 1                },                "set3" : {                        "ns" : "test.lineqi",                        "count" : 33143,                        "size" : 3712016,                        "avgObjSize" : 112,                        "storageSize" : 11182080,                        "numExtents" : 6,                        "nindexes" : 2,                        "lastExtentSize" : 8388608,                        "paddingFactor" : 1,                        "systemFlags" : 1,                        "userFlags" : 1,                        "totalIndexSize" : 2681728,                        "indexSizes" : {                                "_id_" : 1079232,                                "id_hashed" : 1602496                        },                        "ok" : 1                }        },        "ok" : 1}更多MongoDB相关内容可以看看以下的有用链接: MongoDB 3.0 正式版发布下载  http://www.linuxidc.com/Linux/2015-03/114414.htmCentOS编译安装MongoDB http://www.linuxidc.com/Linux/2012-02/53834.htmCentOS 编译安装 MongoDB与mongoDB的php扩展 http://www.linuxidc.com/Linux/2012-02/53833.htmCentOS 6 使用 yum 安装MongoDB及服务器端配置 http://www.linuxidc.com/Linux/2012-08/68196.htmUbuntu 13.04下安装MongoDB2.4.3 http://www.linuxidc.com/Linux/2013-05/84227.htmMongoDB入门必读(概念与实战并重) http://www.linuxidc.com/Linux/2013-07/87105.htmUbunu 14.04下MongoDB的安装指南 http://www.linuxidc.com/Linux/2014-08/105364.htm《MongoDB 权威指南》(MongoDB: The Definitive Guide)英文文字版[PDF] http://www.linuxidc.com/Linux/2012-07/66735.htmNagios监控MongoDB分片集群服务实战 http://www.linuxidc.com/Linux/2014-10/107826.htm基于CentOS 6.5操作系统搭建MongoDB服务 http://www.linuxidc.com/Linux/2014-11/108900.htmMongoDB 的详细介绍:请点这里
MongoDB 的下载地址:请点这里本文永久更新链接地址