gluster geo-replication rsync error 3

chtsalid
Posts: 7
Joined: 2017/02/20 08:43:54

gluster geo-replication rsync error 3

Post by chtsalid » 2018/10/05 13:58:59

Hi all,

I am testing a gluster geo-replication setup with GlusterFS 3.12.14 on CentOS Linux release 7.5.1804, and the session goes faulty because rsync returns error code 3.
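
For reference, rsync's man page lists exit code 3 as "Errors selecting input/output files, dirs". A quick way to confirm the rsync build on a node (same check on the master and slave servers) is:

rsync --version | head -1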


[root@servera ~]# gluster volume info mastervol

Volume Name: mastervol
Type: Replicate
Volume ID: b7ec0647-b101-4240-9abf-32f24f2decec
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: servera:/bricks/brick-a1/brick
Brick2: serverb:/bricks/brick-b1/brick
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable



[root@servere ~]# gluster volume info slavevol

Volume Name: slavevol
Type: Replicate
Volume ID: 8b431b4e-5dc4-4db6-9608-3b82cce5024c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: servere:/bricks/brick-e1/brick
Brick2: servere:/bricks/brick-e2/brick
Options Reconfigured:
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
features.read-only: off



[root@servera ~]# gluster volume geo-replication mastervol geoaccount@servere::slavevol status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------------------
servera mastervol /bricks/brick-a1/brick geoaccount geoaccount@servere::slavevol N/A Faulty N/A N/A
serverb mastervol /bricks/brick-b1/brick geoaccount geoaccount@servere::slavevol servere Active History Crawl 2018-10-05 15:24:03
[root@servera ~]# gluster volume geo-replication mastervol geoaccount@servere::slavevol status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------------
servera mastervol /bricks/brick-a1/brick geoaccount geoaccount@servere::slavevol N/A Faulty N/A N/A
serverb mastervol /bricks/brick-b1/brick geoaccount geoaccount@servere::slavevol N/A Faulty N/A N/A
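
The session configuration, which should also show the rsync settings gsyncd uses, can be dumped with:

gluster volume geo-replication mastervol geoaccount@servere::slavevol config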




cat /var/log/glusterfs/geo-replication/mastervol/ssh%3A%2F%2Fgeoaccount%4010.0.2.13%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol.log

[2018-10-05 13:55:34.742177] I [master(/bricks/brick-a1/brick):1432:crawl] _GMaster: starting history crawl turns=1 stime=(1538745843, 0) entry_stime=None etime=1538747734
[2018-10-05 13:55:35.744625] I [master(/bricks/brick-a1/brick):1461:crawl] _GMaster: slave's time stime=(1538745843, 0)
[2018-10-05 13:55:36.255413] I [master(/bricks/brick-a1/brick):1863:syncjob] Syncer: Sync Time Taken duration=0.0837 num_files=1 job=3 return_code=3
[2018-10-05 13:55:36.255831] E [resource(/bricks/brick-a1/brick):210:errlog] Popen: command returned error cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-uVOLSe/05b8d7b5dab75575689c0e1a2ec33b3f.sock --compress geoaccount@servere:/proc/27025/cwd error=3
[2018-10-05 13:55:36.302834] I [syncdutils(/bricks/brick-a1/brick):271:finalize] <top>: exiting.
[2018-10-05 13:55:36.313239] I [repce(/bricks/brick-a1/brick):92:service_loop] RepceServer: terminating on reaching EOF.
[2018-10-05 13:55:36.313637] I [syncdutils(/bricks/brick-a1/brick):271:finalize] <top>: exiting.
[2018-10-05 13:55:36.664165] I [monitor(monitor):363:monitor] Monitor: worker died in startup phase brick=/bricks/brick-a1/brick
[2018-10-05 13:55:36.669894] I [gsyncdstatus(monitor):243:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-10-05 13:55:46.883409] I [monitor(monitor):280:monitor] Monitor: starting gsyncd worker brick=/bricks/brick-a1/brick slave_node=ssh://geoaccount@servere:gluster://localhost:slavevol
[2018-10-05 13:55:47.187449] I [changelogagent(/bricks/brick-a1/brick):73:__init__] ChangelogAgent: Agent listining...
[2018-10-05 13:55:47.188601] I [resource(/bricks/brick-a1/brick):1780:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-10-05 13:55:49.329668] I [resource(/bricks/brick-a1/brick):1787:connect_remote] SSH: SSH connection between master and slave established. duration=2.1408
[2018-10-05 13:55:49.330225] I [resource(/bricks/brick-a1/brick):1502:connect] GLUSTER: Mounting gluster volume locally...
[2018-10-05 13:55:50.487957] I [resource(/bricks/brick-a1/brick):1515:connect] GLUSTER: Mounted gluster volume duration=1.1575
[2018-10-05 13:55:50.488302] I [gsyncd(/bricks/brick-a1/brick):799:main_i] <top>: Closing feedback fd, waking up the monitor
[2018-10-05 13:55:52.617573] I [master(/bricks/brick-a1/brick):1518:register] _GMaster: Working dir path=/var/lib/misc/glusterfsd/mastervol/ssh%3A%2F%2Fgeoaccount%4010.0.2.13%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol/9517ac67e25c7491f03ba5e2506505bd
[2018-10-05 13:55:52.617953] I [resource(/bricks/brick-a1/brick):1662:service_loop] GLUSTER: Register time time=1538747752
[2018-10-05 13:55:52.660286] I [master(/bricks/brick-a1/brick):490:mgmt_lock] _GMaster: Got lock Becoming ACTIVE brick=/bricks/brick-a1/brick
[2018-10-05 13:55:52.665358] I [gsyncdstatus(/bricks/brick-a1/brick):276:set_active] GeorepStatus: Worker Status Change status=Active
[2018-10-05 13:55:52.667315] I [gsyncdstatus(/bricks/brick-a1/brick):248:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-10-05 13:55:52.667908] I [master(/bricks/brick-a1/brick):1432:crawl] _GMaster: starting history crawl turns=1 stime=(1538745843, 0) entry_stime=None etime=1538747752
[2018-10-05 13:55:53.670293] I [master(/bricks/brick-a1/brick):1461:crawl] _GMaster: slave's time stime=(1538745843, 0)
[2018-10-05 13:55:54.119349] I [master(/bricks/brick-a1/brick):1863:syncjob] Syncer: Sync Time Taken duration=0.0830 num_files=1 job=1 return_code=3
[2018-10-05 13:55:54.119719] E [resource(/bricks/brick-a1/brick):210:errlog] Popen: command returned error cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-plMO22/05b8d7b5dab75575689c0e1a2ec33b3f.sock --compress geoaccount@servere:/proc/27178/cwd error=3
[2018-10-05 13:55:54.175019] I [syncdutils(/bricks/brick-a1/brick):271:finalize] <top>: exiting.
[2018-10-05 13:55:54.188937] I [repce(/bricks/brick-a1/brick):92:service_loop] RepceServer: terminating on reaching EOF.
[2018-10-05 13:55:54.189389] I [syncdutils(/bricks/brick-a1/brick):271:finalize] <top>: exiting.
[2018-10-05 13:55:54.499047] I [monitor(monitor):363:monitor] Monitor: worker died in startup phase brick=/bricks/brick-a1/brick
[2018-10-05 13:55:54.502306] I [gsyncdstatus(monitor):243:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-10-05 13:56:04.703936] I [monitor(monitor):280:monitor] Monitor: starting gsyncd worker brick=/bricks/brick-a1/brick slave_node=ssh://geoaccount@servere:gluster://localhost:slavevol
[2018-10-05 13:56:04.984357] I [resource(/bricks/brick-a1/brick):1780:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-10-05 13:56:05.12824] I [changelogagent(/bricks/brick-a1/brick):73:__init__] ChangelogAgent: Agent listining...
[2018-10-05 13:56:07.147911] I [resource(/bricks/brick-a1/brick):1787:connect_remote] SSH: SSH connection between master and slave established. duration=2.1632
[2018-10-05 13:56:07.148320] I [resource(/bricks/brick-a1/brick):1502:connect] GLUSTER: Mounting gluster volume locally...
[2018-10-05 13:56:08.271363] I [resource(/bricks/brick-a1/brick):1515:connect] GLUSTER: Mounted gluster volume duration=1.1229
[2018-10-05 13:56:08.271608] I [gsyncd(/bricks/brick-a1/brick):799:main_i] <top>: Closing feedback fd, waking up the monitor
[2018-10-05 13:56:10.294498] I [master(/bricks/brick-a1/brick):1518:register] _GMaster: Working dir path=/var/lib/misc/glusterfsd/mastervol/ssh%3A%2F%2Fgeoaccount%4010.0.2.13%3Agluster%3A%2F%2F127.0.0.1%3Aslavevol/9517ac67e25c7491f03ba5e2506505bd
[2018-10-05 13:56:10.294866] I [resource(/bricks/brick-a1/brick):1662:service_loop] GLUSTER: Register time time=1538747770
[2018-10-05 13:56:10.313384] I [master(/bricks/brick-a1/brick):490:mgmt_lock] _GMaster: Got lock Becoming ACTIVE brick=/bricks/brick-a1/brick
[2018-10-05 13:56:10.317266] I [gsyncdstatus(/bricks/brick-a1/brick):276:set_active] GeorepStatus: Worker Status Change status=Active
[2018-10-05 13:56:10.319070] I [gsyncdstatus(/bricks/brick-a1/brick):248:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-10-05 13:56:10.319447] I [master(/bricks/brick-a1/brick):1432:crawl] _GMaster: starting history crawl turns=1 stime=(1538745843, 0) entry_stime=None etime=1538747770
[2018-10-05 13:56:11.321458] I [master(/bricks/brick-a1/brick):1461:crawl] _GMaster: slave's time stime=(1538745843, 0)
[2018-10-05 13:56:11.830856] I [master(/bricks/brick-a1/brick):1863:syncjob] Syncer: Sync Time Taken duration=0.0485 num_files=1 job=1 return_code=3
[2018-10-05 13:56:11.831165] E [resource(/bricks/brick-a1/brick):210:errlog] Popen: command returned error cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-e4Mr9S/05b8d7b5dab75575689c0e1a2ec33b3f.sock --compress geoaccount@servere:/proc/27333/cwd error=3
[2018-10-05 13:56:11.844975] I [syncdutils(/bricks/brick-a1/brick):271:finalize] <top>: exiting.
[2018-10-05 13:56:11.848850] I [repce(/bricks/brick-a1/brick):92:service_loop] RepceServer: terminating on reaching EOF.
[2018-10-05 13:56:11.849129] I [syncdutils(/bricks/brick-a1/brick):271:finalize] <top>: exiting.
[2018-10-05 13:56:12.278266] I [monitor(monitor):363:monitor] Monitor: worker died in startup phase brick=/bricks/brick-a1/brick
[2018-10-05 13:56:12.282547] I [gsyncdstatus(monitor):243:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-10-05 13:56:22.472763] I [monitor(monitor):280:monitor] Monitor: starting gsyncd worker brick=/bricks/brick-a1/brick slave_node=ssh://geoaccount@servere:gluster://localhost:slavevol
[2018-10-05 13:56:22.741744] I [resource(/bricks/brick-a1/brick):1780:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-10-05 13:56:22.742867] I [changelogagent(/bricks/brick-a1/brick):73:__init__] ChangelogAgent: Agent listining...
[2018-10-05 13:56:24.840967] I [resource(/bricks/brick-a1/brick):1787:connect_remote] SSH: SSH connection between master and slave established. duration=2.0990
[2018-10-05 13:56:24.841207] I [resource(/bricks/brick-a1/brick):1502:connect] GLUSTER: Mounting gluster volume locally...
[2018-10-05 13:56:25.953788] I [resource(/bricks/brick-a1/brick):1515:connect] GLUSTER: Mounted gluster volume duration=1.1124
[2018-10-05 13:56:25.954041] I [gsyncd(/bricks/brick-a1/brick):799:main_i] <top>: Closing feedback fd, waking up the monitor
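
The same Popen error repeats on every retry: the rsync that gsyncd spawns towards geoaccount@servere exits with code 3 and the worker dies in the startup phase. A quick sanity check, assuming plain ssh as the geo-rep user works outside of gsyncd, would be something like:

ssh geoaccount@servere 'command -v rsync && rsync --version | head -1'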


Any idea how I can solve this problem?

Many thanks!

hunter86_bg
Posts: 2019
Joined: 2015/02/17 15:14:33
Location: Bulgaria

Re: gluster geo-replication rsync error 3

Post by hunter86_bg » 2018/10/08 03:37:23

Have you checked that all nodes have the 'rsync' binary? Maybe it's missing and you just need to install it.
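
For example, something like this (hostnames taken from your output above, adjust as needed) would confirm it on each node:

for h in servera serverb servere; do
    ssh "$h" 'rpm -q rsync || echo "rsync missing on $(hostname)"'
done

Geo-replication shells out to rsync over ssh, so the binary has to be present on both the master and the slave nodes.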
