Thanks @avanti, @MarkusWMalhberg - pondering how to respond to these comments pushed me in the right direction. This took some time to pull together, so I will explain the configuration in a bit of detail.
Overview
With an eye on user experience, we want to create a Mongo database configuration that allows reads and writes to happen closest to the user.
Assumptions
- Users almost always read and write documents in their own region, and do not mind if infrequent reads of another region's data are slower.
- Each document contains a key that indicates its region (for simplicity/clarity); a sample document is sketched below.
Much of the sharding documentation focuses on HA/DR. With user experience and regional compliance in mind, the focus here is on locality rather than load distribution.
This example ignores HA/DR, read preferences, and write concerns entirely, though they would need to be addressed if the POC matures. The example ignores them so the goal stays clear: local reads and writes.
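As a concrete illustration of the second assumption, the test documents used in the Testing section carry the region key directly, with region 1 standing for US-East and region 2 for US-West:

{ region: 1, name: "us east user" }  // routed to shard-US-East
{ region: 2, name: "us west user" }  // routed to shard-US-West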
References
- Operational segregation: http://docs.mongodb.org/master/core/operational-segregation/
- Administer shard tags: http://docs.mongodb.org/master/tutorial/administer-shard-tags/
- Replica set configuration: http://docs.mongodb.org/master/reference/command/replSetGetConfig/#replsetgetconfig-output
- Different mongos config database errors: http://docs.mongodb.org/master/tutorial/troubleshoot-sharded-clusters/
Tricks
We know
- We need one application database so that all data is available
- We want users to read/write locally, so we need a database near each user group; we need a replica set
- Writes can only be made to the primary replica set member, so to get a primary next to each user group, we need multiple replica sets; a sharded cluster
Within standard ReplicaSet and Sharding knowledge, there are 2 keys to this configuration:
- Assign a priority to the region-local ReplicaSet member to ensure it becomes primary.
- Use location-aware shard key tagging to ensure data is written to the local shard.
The shard key can be anything: we only care that users can read/write locally, not about effective load sharing.
Every collection must be sharded, or writes will go to shard zero.
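In outline, those two keys look like this; a minimal sketch, with the full runnable script below:

// 1. Replica set configuration: the region-local member gets priority 2 so it wins elections
config = { _id: "shard-US-East", members: [
    { _id: 0, host: "localhost:37017", priority: 2 },
    { _id: 1, host: "localhost:37018" }]};
rs.initiate(config)
// 2. Tag the shard with its region and bind a shard key range to that tag
sh.addShardTag("shard-US-East", "US-East")
sh.addTagRange("sales.users", { region: 1 }, { region: 2 }, "US-East")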
The Desired Configuration
The Config
#!/usr/bin/env bash
echo ">>> Clean up processes and files from previous runs"
echo ">>> killAll mongod mongos"
killall mongod mongos
echo ">>> Remove db files and logs"
rm -rf data
rm -rf log
# Create the common log directory
mkdir log
echo ">>> Start replica set for shard US-East"
mkdir -p data/shard-US-East/rsMemberEast data/shard-US-East/rsMemberWest
mongod --replSet shard-US-East --logpath "log/shard-US-East-rsMemberEast.log" --dbpath data/shard-US-East/rsMemberEast --port 37017 --fork --shardsvr --smallfiles
mongod --replSet shard-US-East --logpath "log/shard-US-East-rsMemberWest.log" --dbpath data/shard-US-East/rsMemberWest --port 37018 --fork --shardsvr --smallfiles
echo ">>> Sleep 15s to allow US-East replica set to start"
sleep 15
# The US-East replica set member is assigned priority 2 so that it becomes primary
echo ">>> Configure replica set for shard US-East"
mongo --port 37017 << 'EOF'
config = { _id: "shard-US-East", members:[
{ _id : 0, host : "localhost:37017", priority: 2 },
{ _id : 1, host : "localhost:37018" }]};
rs.initiate(config)
EOF
echo ">>> Start replica set for shard-US-West"
mkdir -p data/shard-US-West/rsMemberEast data/shard-US-West/rsMemberWest
mongod --replSet shard-US-West --logpath "log/shard-US-West-rsMemberEast.log" --dbpath data/shard-US-West/rsMemberEast --port 47017 --fork --shardsvr --smallfiles
mongod --replSet shard-US-West --logpath "log/shard-US-West-rsMemberWest.log" --dbpath data/shard-US-West/rsMemberWest --port 47018 --fork --shardsvr --smallfiles
echo ">>> Sleep 15s to allow US-West replica set to start"
sleep 15
# The US-West replica set member is assigned priority 2 so that it becomes primary
echo ">>> Configure replica set for shard-US-West"
mongo --port 47017 << 'EOF'
config = { _id: "shard-US-West", members:[
{ _id : 0, host : "localhost:47017" },
{ _id : 1, host : "localhost:47018", priority: 2 }]};
rs.initiate(config)
EOF
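# Optional sanity check, not part of the original flow: confirm that each priority-2
# member has won its election before continuing (elections can take a few seconds)
mongo --port 37017 --eval "printjson(rs.isMaster().primary)"
mongo --port 47017 --eval "printjson(rs.isMaster().primary)"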
# Shard config servers: should be 3 and all must be up to deploy a shard cluster
# These are the mongos backing store for routing information
echo ">>> Start config servers"
mkdir -p data/config/config-us-east data/config/config-us-west data/config/config-redundant
mongod --logpath "log/cfg-us-east.log" --dbpath data/config/config-us-east --port 57040 --fork --configsvr --smallfiles
mongod --logpath "log/cfg-us-west.log" --dbpath data/config/config-us-west --port 57041 --fork --configsvr --smallfiles
mongod --logpath "log/cfg-redundant.log" --dbpath data/config/config-redundant --port 57042 --fork --configsvr --smallfiles
echo ">>> Sleep 5 to allow config servers to start and stabilize"
sleep 5
# All mongos processes must point at the same config servers; a coordinator dispatches writes to each
echo ">>> Start mongos"
mongos --logpath "log/mongos-us-east.log" --configdb localhost:57040,localhost:57041,localhost:57042 --port 27017 --fork
mongos --logpath "log/mongos-us-west.log" --configdb localhost:57040,localhost:57041,localhost:57042 --port 27018 --fork
echo ">>> Wait 60 seconds for the replica sets to stabilize"
sleep 60
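# Optional check, not part of the original flow: confirm both mongos routers answer
mongo --port 27017 --eval "printjson(db.adminCommand({ ping: 1 }))"
mongo --port 27018 --eval "printjson(db.adminCommand({ ping: 1 }))"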
# Enable sharding on the 'sales' database and 'sales.users' collection
# Every collection in 'sales' must be sharded or the writes will go to shard 0
# Add a shard tag so we can associate shard keys with the tag (region)
# Shard tag range min and max cannot be the same so we use a region id for US-East = 1
# and US-West = 2. sh.addTagRange() is inclusive of minKey and exclusive of maxKey.
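# For example, { region: 1 } falls in the US-East range, while { region: 2 } routes to
# US-West because the US-East maxKey of 2 is exclusive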
# We only need to configure one mongos - config will be propagated to all mongos through
# the config server
echo ">>> Add shards to mongos"
mongo --port 27017 <<'EOF'
db.adminCommand( { addshard : "shard-US-East/localhost:37017" } );
db.adminCommand( { addshard : "shard-US-West/localhost:47017" } );
db.adminCommand({enableSharding: "sales"})
db.adminCommand({shardCollection: "sales.users", key: {region:1}});
sh.addShardTag("shard-US-East", "US-East")
sh.addShardTag("shard-US-West", "US-West")
sh.addTagRange("sales.users", { region: 1 }, { region: 2 }, "US-East")
sh.addTagRange("sales.users", { region: 2 }, { region: 3 }, "US-West")
EOF
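Beyond sh.status(), the routing metadata the script just created can be inspected directly in the config database through a mongos; a quick sketch:

# List shard/tag assignments and the tag ranges registered for sales.users
mongo --port 27017 config --eval "printjson(db.shards.find().toArray()); printjson(db.tags.find().toArray())"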
Testing
Verify that our configuration is correct with sh.status(). Note that the shards are assigned correctly, and the tags and the regional shard keys are assigned correctly as well.
[starver@rakshasa RegionalSharding 14:38:50]$ mongo --port 27017 sales
...
rakshasa(mongos-3.0.5)[mongos] sales> sh.status()
sharding version: {
    "_id": 1,
    "minCompatibleVersion": 5,
    "currentVersion": 6,
    "clusterId": ObjectId("55fdddc5746e30dc3651cda4")
}
shards:
    { "_id": "shard-US-East", "host": "shard-US-East/localhost:37017,localhost:37018", "tags": [ "US-East" ] }
    { "_id": "shard-US-West", "host": "shard-US-West/localhost:47017,localhost:47018", "tags": [ "US-West" ] }
balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
        1 : Success
databases:
    { "_id": "admin", "partitioned": false, "primary": "config" }
    { "_id": "test", "partitioned": false, "primary": "shard-US-East" }
    { "_id": "sales", "partitioned": true, "primary": "shard-US-East" }
        sales.users
            shard key: { "region": 1 }
            chunks:
                shard-US-East: 2
                shard-US-West: 1
            { "region": { "$minKey": 1 } } -> { "region": 1 } on: shard-US-East Timestamp(2, 1)
            { "region": 1 } -> { "region": 2 } on: shard-US-East Timestamp(1, 3)
            { "region": 2 } -> { "region": { "$maxKey": 1 } } on: shard-US-West Timestamp(2, 0)
            tag: US-East { "region": 1 } -> { "region": 2 }
            tag: US-West { "region": 2 } -> { "region": 3 }
Verify that writes are made to the correct shard and primary.
Create a record in each region (connecting through a mongos, e.g. port 27017):
db.users.insert({region:1, name:"us east user"})
db.users.insert({region:2, name:"us west user"})
You can log into each member of each replica set and will see the east user only on the shard-US-East members and the west user only on the shard-US-West members.
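A minimal sketch of that check, connecting straight to the shard members instead of a mongos (rs.slaveOk() permits reads from the secondaries):

# Expect only the east user on shard-US-East members
mongo --port 37017 sales --eval "rs.slaveOk(); printjson(db.users.find().toArray())"
mongo --port 37018 sales --eval "rs.slaveOk(); printjson(db.users.find().toArray())"
# Expect only the west user on shard-US-West members
mongo --port 47017 sales --eval "rs.slaveOk(); printjson(db.users.find().toArray())"
mongo --port 47018 sales --eval "rs.slaveOk(); printjson(db.users.find().toArray())"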