Overview
CRUD has gotten pretty boring by now, but I hear MongoDB replication is easy and fun. So let's give it a try.
First, there are two replication architectures:
Master/Slave, and
Replica Sets (plus the older Replica Pairs).
If you can run a reasonably new version of MongoDB, go with Replica Sets.
Features
・Data Redundancy
・Automated Failover
・High Availability
・Distributing read load
・Simplify maintenance (compared to "normal" master-slave)
・Disaster recovery
This differs from the Japanese translation (><)
Create data directories for the replicas
D:\xampp\mongodb\data
┗rep1
┗rep2
┗rep3
Start the replica set members
mongod --replSet rep_test --port 27017 --dbpath ../data/rep1
mongod --replSet rep_test --port 27018 --dbpath ../data/rep2
mongod --replSet rep_test --port 27019 --dbpath ../data/rep3
rep_test is the replica set's name.
They start up, but since the set hasn't been initialized yet, each one logs a warning:
Wed Mar 28 07:48:44 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Initialize the replica set
D:\xampp\mongodb\bin>mongo
MongoDB shell version: 2.0.3
connecting to: test
> config = {_id: 'rep_test', members: [
{_id: 0, host: '127.0.0.1:27017'},
{_id: 1, host: '127.0.0.1:27018'},
{_id: 2, host: '127.0.0.1:27019'}]
};
> rs.initiate(config);
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
>
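The config document passed to rs.initiate() is just an `_id` (the set name) plus a `members` array. As a side note, you can build the same structure programmatically; here is a minimal Python sketch (the helper name `make_rs_config` is my own, the set name and ports mirror the example above):

```python
# Sketch: build an rs.initiate()-style config document from a host list.
# make_rs_config is a hypothetical helper, not part of any MongoDB driver.
def make_rs_config(set_name, hosts):
    """Return a replica set config dict with sequential member _ids."""
    return {
        "_id": set_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

config = make_rs_config(
    "rep_test",
    ["127.0.0.1:27017", "127.0.0.1:27018", "127.0.0.1:27019"],
)
print(config["members"][2])  # → {'_id': 2, 'host': '127.0.0.1:27019'}
```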
Calling initiate() kicks off initialization, and a Primary/Secondary election takes place:
[rsStart] replSet STARTUP2
[rsSync] ******
[rsSync] creating replication oplog of size: 47MB...
[rsSync] ******
[rsSync] replSet initial sync pending
[rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
[rsHealthPoll] replSet member 127.0.0.1:27017 is up
[rsHealthPoll] replSet member 127.0.0.1:27017 is now in state SECONDARY
[rsHealthPoll] replSet member 127.0.0.1:27018 is up
[rsHealthPoll] replSet member 127.0.0.1:27018 is now in state SECONDARY
[rsSync] replSet initial sync finishing up
For some reason, 27019 became the primary.
Check the status
SECONDARY> rs.status()
{
"set" : "rep_test",
"date" : ISODate("2012-03-27T23:03:25Z"),
"myState" : 1,
"syncingTo" : "127.0.0.1:27019",
"members" : [
{
"_id" : 0,
"name" : "127.0.0.1:27017",
"health" : 1,
"state" : 1,
"stateStr" : "SECONDARY",
"optime" : {
"t" : 1332888980000,
"i" : 1
},
"optimeDate" : ISODate("2012-03-27T22:56:20Z"),
"self" : true
},
{
"_id" : 1,
"name" : "127.0.0.1:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 422,
"optime" : {
"t" : 1332888980000,
"i" : 1
},
"optimeDate" : ISODate("2012-03-27T22:56:20Z"),
"lastHeartbeat" : ISODate("2012-03-27T23:03:25Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "127.0.0.1:27019",
"health" : 1,
"state" : 2,
"stateStr" : "PRIMARY",
"uptime" : 159,
"optime" : {
"t" : 1332888980000,
"i" : 1
},
"optimeDate" : ISODate("2012-03-27T22:56:20Z"),
"lastHeartbeat" : ISODate("2012-03-27T23:03:24Z"),
"pingMs" : 0
}
],
"ok" : 1
}
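Pulling the current primary out of this document is a matter of scanning `members` for the one whose `stateStr` is PRIMARY. A minimal Python sketch, assuming an rs.status()-style dict shaped like the output above (`find_primary` is my own helper name):

```python
# Sketch: locate the PRIMARY in an rs.status()-style document.
def find_primary(status):
    """Return the host of the healthy PRIMARY member, or None if absent."""
    for m in status.get("members", []):
        if m.get("stateStr") == "PRIMARY" and m.get("health") == 1:
            return m["name"]
    return None

# Trimmed-down version of the status document shown above.
status = {"set": "rep_test", "members": [
    {"name": "127.0.0.1:27017", "health": 1, "stateStr": "SECONDARY"},
    {"name": "127.0.0.1:27018", "health": 1, "stateStr": "SECONDARY"},
    {"name": "127.0.0.1:27019", "health": 1, "stateStr": "PRIMARY"},
]}
print(find_primary(status))  # → 127.0.0.1:27019
```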
Checking automatic failover
Kill the PRIMARY and see what happens
[rsHealthPoll] replSet member 127.0.0.1:27019 is now in state DOWN
Check the status
PRIMARY> rs.status()
{
(snip)
"members" : [
{
"_id" : 0,
"name" : "127.0.0.1:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
(snip)
},
{
"_id" : 1,
"name" : "127.0.0.1:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
(snip)
},
{
"_id" : 2,
"name" : "127.0.0.1:27019",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
(snip)
}
],
"ok" : 1
}
27017 has become the new PRIMARY.
Bring 27019 back
mongod --replSet rep_test --port 27019 --dbpath ../data/rep3
Check the status
PRIMARY> rs.status()
{
(snip)
"members" : [
{
"_id" : 0,
"name" : "127.0.0.1:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
(snip)
},
{
"_id" : 1,
"name" : "127.0.0.1:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
(snip)
},
{
"_id" : 2,
"name" : "127.0.0.1:27019",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
(snip)
}
],
"ok" : 1
}
27019 came back as a SECONDARY.
Which node becomes the primary is decided by a majority vote among the member servers.
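That majority rule is why a 3-node set survives one failure (as above) but not two. A simplified Python sketch of the condition, under the assumption that an election needs a strict majority of configured members reachable (`can_elect_primary` is my own name, and real elections involve more than this):

```python
# Sketch: simplified majority condition for electing a primary.
def can_elect_primary(total_members, reachable_members):
    """An election can succeed only while a strict majority of the
    configured members is reachable (simplified view of the rule)."""
    return reachable_members > total_members // 2

# 3-node set: losing one node still leaves a majority of 2...
print(can_elect_primary(3, 2))  # True
# ...but losing two leaves only 1 of 3, so no primary can be elected.
print(can_elect_primary(3, 1))  # False
```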
So, just as rumored, it was easy and fun.
かねこ(ゝω・)