I recently had an issue setting up a new Celerra file system replication job. As soon as I selected the destination system, I received the three errors below.
1) Query VDMs All. Cannot access any Data Mover on the remote system, hostname
Severity: Error
Brief Description: Cannot access any Data Mover on the remote system, hostname
Full Description: No Data Movers are available on the specified remote system to perform this operation
Recommended Action:
1) Check if the Data Movers on the specified remote system are accessible.
2) Ensure that the difference in system times between the local and remote Celerra systems or VNX systems does not exceed 10 minutes. Use NTP on the Control Stations to synchronize the system clocks.
3) Ensure that the passphrase on the local Control Station matches the passphrase on the remote Control Station.
4) Ensure that the same local users that manage VNX for file systems exist on both the source and the destination Control Station.
5) Ensure that the global account is mapped to the same local account on both the local and remote VNX Control Stations.
Primus emc263860 provides more details.
Message ID: 13690601568
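The 10-minute clock-skew limit in recommended action #2 above is easy to verify by hand. Below is a minimal sketch of that check; in practice the remote reading would come from the remote Control Station (for example via `ssh nasadmin@remote_cs date +%s`, where remote_cs is a placeholder hostname), but here a local stand-in value is used so the logic can be seen end to end:

```shell
#!/bin/bash
# Epoch seconds on the local Control Station.
local_epoch=$(date +%s)

# Stand-in for the remote Control Station's clock; replace with
# e.g. $(ssh nasadmin@remote_cs date +%s) on a real system.
remote_epoch=$local_epoch

# Absolute skew in seconds; replication setup requires <= 600 (10 min).
skew=$(( local_epoch - remote_epoch ))
abs_skew=${skew#-}

if [ "$abs_skew" -le 600 ]; then
  echo "clock skew OK (${skew}s)"
else
  echo "clock skew exceeds 10 minutes (${skew}s) - sync NTP on the Control Stations"
fi
```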
2) Query storage pools All. Execution failed: Segmentation fault: Operating system signal. [STRING.make_from_string]
Severity: Error
Brief Description: Execution failed: Segmentation fault: Operating system signal. [STRING.make_from_string]
Full Description: Operation failed for the reason described in the accompanying message.
Recommended Action: Correct the cause of the problem and repeat the operation.
Message ID: 13421840573
3) There are no destination pools available.
Severity: Info
Brief Description: There are no destination pools available.
Full Description: Destination side storage pools are not available.
Recommended Action: Check whether the storage pools have enough space.
Message ID: 26845970450
I was unable to determine the cause of the problem, so I opened an SR with EMC.
It turned out there was a user discrepancy between the /etc/passwd file and the /nas/site/user_db file, which was causing the following error when checking the interconnect:
[nasadmin@celerra02 log]$ nas_cel -interconnect -l
Error 2237: Execution failed: Segmentation fault: Operating system signal. [STRING.make_from_string]
The output should look something like this:
[nasadmin@celerra02 log]$ nas_cel -interconnect -l
id      name           source_server   destination_system   destination_server
20001   loopback       server_2        DRSITE1              server_2
20003   SITE1VNX5500   server_2        DRSITE1              server_2
20004   SITE2NS960     server_2        DRSITE2              server_2
20007   SITE11NS960    server_2        DRSITE1              server_2
20005   SITE3VNX5500   server_2        DRSITE1              server_2
20006   SITE4VNX5500   server_2        DRSITE1              server_2
20008   SITE5NS120     server_2        DRSITE1              server_2
40001   loopback       server_4        DRSITE1              server_4
40003   SITE2NS40      server_4        DRSITE2              server_2
The problem was resolved by removing the entries from /nas/site/user_db that were not present in /etc/passwd. The discrepancy had been caused by a sysadmin manually modifying the passwd file: some old entries were removed, but the matching changes were never made in the user_db file.
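Finding the stale entries is straightforward to script. The sketch below assumes /nas/site/user_db is colon-delimited with the username in the first field, like /etc/passwd; for a safe demonstration it runs against stand-in copies in /tmp rather than the live files:

```shell
#!/bin/bash
# Stand-in files; on a real Control Station, point these at
# /etc/passwd and /nas/site/user_db instead (and back up user_db
# before touching it).
passwd_file=/tmp/demo_passwd
user_db_file=/tmp/demo_user_db

# Demo data: "olduser" exists only in user_db.
printf 'nasadmin:x:201:201::/home/nasadmin:/bin/bash\nroot:x:0:0:root:/root:/bin/bash\n' > "$passwd_file"
printf 'nasadmin:...:...\nroot:...:...\nolduser:...:...\n' > "$user_db_file"

# Usernames present in user_db but no longer in passwd -- these are
# the stale entries that triggered the segmentation fault.
stale=$(comm -23 \
  <(cut -d: -f1 "$user_db_file" | sort -u) \
  <(cut -d: -f1 "$passwd_file" | sort -u))
echo "stale user_db entries: $stale"
```

Note that process substitution (`<(...)`) requires bash rather than plain sh. Once the stale usernames are known, removing those lines from user_db and re-running nas_cel -interconnect -l confirms the fix.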