Channel: THE SAN GUY

Celerra Disk Provisioning Wizard incorrectly believes there are not enough drives available for provisioning


I recently had an issue attempting to extend our production NAS file pool on our NS-960.  We had just added six new 2TB SATA disks to the array, and when I launched the Disk Provisioning Wizard it gave me this error:

“The number of drives available for provisioning additional storage are insufficient”.

That of course wasn’t true: a 4+2 RAID6 config is indeed supported on this platform and I had just added six drives. I did come up with a workaround to do it manually, thanks to some helpful advice from our local EMC technical rep.  I manually created a RAID6 RAID group in a 4+2 config, then created a single LUN using all of the available space in the RAID group (about 7337GB).   Once the LUN is created, you can add it to the Celerra storage group; in my case it was named “Celerra_hostname”.

When adding the LUN to the storage group, there is a critical step that you must not skip: the HLU number must be modified.  After you click on a LUN and click Add, look for it in the list and notice that the far right column shows the HLU (Host LUN ID).  The LUN you just added will have a blank entry.  It doesn’t look like an editable field, but it is – simply click on the blank area where the number should be and you’ll get a drop-down box.  The number you choose must be greater than 15.  Once you’ve modified the HLU for the new LUN, click OK to complete the process.

Next, you’ll want to switch back over to the Celerra Management interface, click on the ‘Storage’ tab, then click on the ‘Rescan Storage Systems’ link.  You will get a warning message that states:

“Rescan detects newly available storage and storage systems. Do not rescan unless all primary Data Movers are operating normally. The operation might take a few minutes to complete.”  

Heed the warning and make sure your data movers are up and functional.  You can monitor the progress in the background tasks area.   On my first attempt the Rescan failed.  I got this error message:

“Storage API code=3593: SYMAPI_C_CLARIION_LOAD_ERROR.  An error occurred while data was being loaded from a Clariion.” | “No additional information is available” | “No recommended action is available”.

By the time I got that error it was the end of my work day, so I decided to get back to it the next morning; I had planned on opening an SR.  When I re-ran the same scan the next day it worked fine and my production pool auto-extended.  Problem solved.



Using the server_stats command on Celerra / VNX File


server_stats is a CLI-based real-time performance monitoring tool from EMC for the Celerra and VNX File.  This post is meant to give a quick overview of the server_stats command with some samples on using it in a scheduled cron job. If you’re looking to dive into the server_stats feature, I’d suggest using the online manual pages (man server_stats) to get a good idea of all the features, and reviewing the “Managing Statistics for VNX” guide from EMC here:  http://corpusweb130.emc.com/upd_prod_VNX/UPDFinalPDF/en/Statistics.pdf.

There is also an open-source tool called vnx2graphite that you can use to push server_stats data to Graphite; I don’t personally use it, so I can’t explain how to set it up. You can get it here: http://www.findbestopensource.com/product/fatz-vnx2graphite.   You can download Graphite here: http://graphite.wikidot.com/.

Here is the command line syntax:

server_stats <movername>
     -list
   | -info [-all|<statpath_name>[,...]]
   | -service { -start [-port <port_number>]
              | -stop
              | -delete
              | -status }
   | -monitor -action {status|enable|disable}
   | [{ -monitor {statpath_name|statgroup_name}[,...]
      | -monitor {statpath_name|statgroup_name}
          [-sort <field_name>]
          [-order {asc|desc}]
          [-lines <lines_of_output>]
      }...]
     [-count <count>]
     [-interval <seconds>]
     [-terminationsummary {no|yes|only}]
     [-format {text [-titles {never|once|<repeat_frequency>}]|csv}]
     [-type {rate|diff|accu}]
     [-file <output_filepath> [-overwrite]]
     [-noresolve]

Here’s an explanation of a few of the useful table options and what to look for:

Syntax:  server_stats server_2 -i <interval in sec> -c <# of counts> -table <stat>

table cifs 

-Look at uSec/call. The output is in microseconds; divide by 1,000 to convert to milliseconds. This tells you how long it takes the Celerra to perform specific CIFS operations.

table dvol 

-This is for disk stats.  It shows the write distribution across all volumes.  Look for IO balance across resources.

table fsvol 

-Use this to check filesystem IO.  You’ll be able to monitor which file systems are getting all of the IO with this table.

Start with an interval of 1 to look for spikes or bursts, then increase it incrementally (10 seconds, 30 seconds, 1 minute, 5 minutes, etc.). You can also use Celerra Monitor to get Clariion stats; look at queueing, cache flushes, etc.  Writes are cached on the Clariion, so unless your write cache is filling up they should be faster than reads.

Here are some sample commands and what they do:

server_stats server_2 -table fsvol -interval 1 -count 10

-This correlates the filesystem to the meta-volumes and shows the % contribution of write requests for each meta-volume (FS Write Reqs %).

server_stats server_2 -table net -interval 1 -count 10

-This shows Network In (KiB/s) divided by Network In (Pkts/s) to figure out the average packet size.  Do this for both in and out to verify the standard MTU size.
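As a sketch of that arithmetic (the counter values below are made up for illustration, not output from a real array):

```shell
# Average bytes per packet = (KiB/s * 1024) / (Pkts/s).
# Sample values are hypothetical.
KIB_PER_SEC=1450
PKTS_PER_SEC=1000
BYTES_PER_PKT=$(( KIB_PER_SEC * 1024 / PKTS_PER_SEC ))
echo "$BYTES_PER_PKT bytes/packet"   # close to a standard 1500-byte MTU
```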

server_stats server_2 -summary nfs,cifs -interval 1 -count 10

-This will give a summary of performance stats for nfs and cifs.

Here are some additional sample commands, along with how you can add them to your crontab to automatically collect performance data:

Collect CIFS and NFS data every 5 minutes:

*/5 * * * * /nas/bin/server_stats server_2 -monitor cifs.smb1,cifs.smb2,nfs.v2,nfs.v3,nfs.v4,cifs.global,nfs.basic -format csv -terminationsummary no -i 5 -c 60 -type accu -file "/nas/quota/slot_2/perfstats/data/server_2/server_2_`date '+\%FT\%T'|sed s/://g`" > /dev/null

In the command above, the -type accu option tells the command to accumulate statistics with each capture rather than resetting to a baseline of zero. You can also use ‘diff’ to capture the difference from interval to interval.

Collect diskVol performance stats every 5 minutes:

*/5 * * * * /nas/bin/server_stats server_2 -monitor diskVolumes-std -i 5 -c 60 -file "/nas/quota/slot_2/perfstats/data/server_2/server_2_`date '+\%FT\%T'|sed s/://g`" > /dev/null

Collect top_talkers data every 5 minutes:

*/5 * * * * /nas/bin/server_stats server_2 -monitor nfs.client -i 5 -c 60 -file "/nas/quota/slot_2/perfstats/data/server_2/server_2_`date '+\%FT\%T'|sed s/://g`" > /dev/null
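The backtick expression at the end of each crontab entry builds a sortable, colon-free timestamp for the output filename. The backslash escapes on the % signs are only needed inside the crontab itself (cron treats a bare % specially); run interactively, the same expression looks like this:

```shell
# ISO date/time with the colons stripped, e.g. 2015-06-01T143000
STAMP=$(date '+%FT%T' | sed s/://g)
echo "$STAMP"
```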

Below are some useful nfs and cifs stats that you can monitor (pulled from DART 8.1.2-51).  For a full list, run the command server_stats server_2 -i.

cifs.global.basic.totalCalls,cifs.global.basic.reads,cifs.global.basic.readBytes,cifs.global.basic.readAvgSize,cifs.global.basic.writes,cifs.global.basic.writeBytes,
cifs.global.basic.writeAvgSize,cifs.global.usage.currentConnections,cifs.global.usage.currentOpenFiles

nfs.basic,nfs.client,nfs.currentThreads,nfs.export,nfs.filesystem,nfs.group,nfs.maxUsedThreads,nfs.totalCalls,nfs.user,nfs.v2,nfs.v3,nfs.v4,nfs.vdm

 


Dynamic allocation pool limit has been reached


We were having issues with our backup jobs failing on CIFS share backups using Symantec NetBackup.  The jobs died with a “status 24”, which means the backup was losing communication with the source.  Our backup administrator provided me with the exact times and dates of the failures, and I noticed that immediately preceding his failures this error appeared in the server log on the control station:

2012-08-05 07:09:37: KERNEL: 4: 10: Dynamic allocation pool limit has been reached. Limit=0x30000 Current=0x50920 Max=0x0
 

A quick Google search came up with this description of the error:  “The maximum amount of memory (number of 8K pages) allowed for dynamic memory allocation has almost been reached. This indicates that a possible memory leak is in progress and the Data Mover may soon panic. If Max=0(zero) then the system forced panic option is disabled. If Max is not zero then the system will force a panic if dynamic memory allocation reaches this level.”

Since the error showed up right before each backup failure, the correlation was clear.  To fix it, you’ll need to modify the heap limit from the default of 0x00030000 to a larger size.  Here is the command to do that:

.server_config server_2 -v "param kernel mallocHeapLimit=0x40000" (changes the value)
.server_config server_2 -v "param kernel" (lists the kernel parameters)
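Per the quoted description, the limit is a count of 8K pages, so the default and increased values translate to the following amounts of memory:

```shell
# mallocHeapLimit is a count of 8 KiB pages (per the error description above).
printf 'default 0x30000 pages = %d MiB\n' $(( 0x30000 * 8 / 1024 ))   # 1536 MiB
printf 'new     0x40000 pages = %d MiB\n' $(( 0x40000 * 8 / 1024 ))   # 2048 MiB
```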
 

Below is a list of all the kernel parameters:

Name                                                 Location        Current       Default
----                                                 ----------      ----------    ----------
kernel.AutoconfigDriverFirst                         0x0003b52d30    0x00000000    0x00000000
kernel.BufferCacheHitRatio                           0x0002093108    0x00000050    0x00000050
kernel.MSIXdebug                                     0x0002094714    0x00000001    0x00000001
kernel.MSIXenable                                    0x000209471c    0x00000001    0x00000001
kernel.MSI_NoStop                                    0x0002094710    0x00000001    0x00000001
kernel.MSIenable                                     0x0002094718    0x00000001    0x00000001
kernel.MsiRouting                                    0x0002094724    0x00000001    0x00000001
kernel.WatchDog                                      0x0003aeb4e0    0x00000001    0x00000001
kernel.autoreboot                                    0x0003a0aefc    0x00000258    0x00000258
kernel.bcmTimeoutFix                                 0x0002179920    0x00000002    0x00000002
kernel.buffersWatermarkPercentage                    0x0003ae964c    0x00000021    0x00000021
kernel.bufreclaim                                    0x0003ae9640    0x00000001    0x00000001
kernel.canRunRT                                      0x000208f7a0    0xffffffff    0xffffffff
kernel.dumpcompress                                  0x000208f794    0x00000001    0x00000001
kernel.enableFCFastInit                              0x00022c29d4    0x00000001    0x00000001
kernel.enableWarmReboot                              0x000217ee68    0x00000001    0x00000001
kernel.forceWholeTLBflush                            0x00039d0900    0x00000000    0x00000000
kernel.heapHighWater                                 0x00020930c8    0x00004000    0x00004000
kernel.heapLowWater                                  0x00020930c4    0x00000080    0x00000080
kernel.heapReserve                                   0x00020930c0    0x00022e98    0x00022e98
kernel.highwatermakpercentdirty                      0x00020930e0    0x00000064    0x00000064
kernel.lockstats                                     0x0002093128    0x00000001    0x00000001
kernel.longLivedChunkSize                            0x0003a23ed0    0x00002710    0x00002710
kernel.lowwatermakpercentdirty                       0x0003ae9654    0x00000000    0x00000000
kernel.mallocHeapLimit                               0x0003b5558c    0x00040000    0x00030000  (This is the parameter I changed)
kernel.mallocHeapMaxSize                             0x0003b55588    0x00000000    0x00000000
kernel.maskFcProc                                    0x0002094728    0x00000004    0x00000004
kernel.maxSizeToTryEMM                               0x0003a23f50    0x00000008    0x00000008
kernel.maxStrToBeProc                                0x0003b00f14    0x00000080    0x00000080
kernel.memSearchUsecs                                0x000208fa28    0x000186a0    0x000186a0
kernel.memThrottleMonitor                            0x0002091340    0x00000001    0x00000001
kernel.outerLoop                                     0x0003a0b508    0x00000001    0x00000001
kernel.panicOnClockStall                             0x0003a0cf30    0x00000000    0x00000000
kernel.pciePollingDefault                            0x00020948a0    0x00000001    0x00000001
kernel.percentOfFreeBufsToFreePerIter                0x00020930cc    0x0000000a    0x0000000a
kernel.periodicSyncInterval                          0x00020930e4    0x00000005    0x00000005
kernel.phTimeQuantum                                 0x0003b86e18    0x000003e8    0x000003e8
kernel.priBufCache.ReclaimPolicy                     0x00020930f4    0x00000001    0x00000001
kernel.priBufCache.UsageThreshold                    0x00020930f0    0x00000032    0x00000032
kernel.protect_zero                                  0x0003aeb4e8    0x00000001    0x00000001
kernel.remapChunkSize                                0x0003a23fd0    0x00000080    0x00000080
kernel.remapConfig                                   0x000208fe40    0x00000002    0x00000002
kernel.retryTLBflushIPI                              0x00020885b0    0x00000001    0x00000001
kernel.roundRobbin                                   0x0003a0b504    0x00000001    0x00000001
kernel.setMSRs                                       0x0002088610    0x00000001    0x00000001
kernel.shutdownWdInterval                            0x0002093238    0x0000000f    0x0000000f
kernel.startAP                                       0x0003aeb4e4    0x00000001    0x00000001
kernel.startIdleTime                                 0x0003aeb570    0x00000001    0x00000001
kernel.stream.assert                                 0x0003b00060    0x00000000    0x00000000
kernel.switchStackOnPanic                            0x000208f8e0    0x00000001    0x00000001
kernel.threads.alertOptions                          0x0003a22bf4    0x00000000    0x00000000
kernel.threads.maxBlockedTime                        0x000208f948    0x00000168    0x00000168
kernel.threads.minimumAlertBlockedTime               0x000208f94c    0x000000b4    0x000000b4
kernel.threads.panicIfHung                           0x0003a22bf0    0x00000000    0x00000000
kernel.timerCallbackHistory                          0x000208f780    0x00000001    0x00000001
kernel.timerCallbackTimeLimitMSec                    0x000208f784    0x00000003    0x00000003
kernel.trackIntrStats                                0x000209021c    0x00000001    0x00000001
kernel.usePhyDevName                                 0x0002094720    0x00000001    0x00000001

The steps for NFS exporting a file system on a VDM


I made a blog post back in January 2014 about creating an NFS export on a virtual Data Mover, but I didn’t give much detail on the commands you need to actually do it. As I pointed out back then, you can’t NFS export a VDM file system from within Unisphere; however, when a file system is mounted on a VDM, its path from the root of the physical Data Mover can be exported from the CLI.

The first thing that needs to be done is to determine the physical Data Mover where the VDM resides.

Below is the command you’d use to make that determination:

[nasadmin@Celerra_hostname]$ nas_server -i -v name_of_your_vdm | grep server
server = server_4

That will show you just the physical Data Mover that it’s mounted on. Without the grep, you’d get the output below; if you have hundreds of file systems, the info you’re looking for will scroll off the top of the screen, so using grep is more efficient.

[nasadmin@Celerra_hostname]$ nas_server -i -v name_of_your_vdm
id = 1
name = name_of_your_vdm
acl = 0
type = vdm
server = server_4
rootfs = root_fs_vdm_name_of_your_vdm
I18N mode = UNICODE
mountedfs = fs1,fs2,fs3,fs4,fs5,fs6,fs7,fs8,…
member_of =
status :
defined = enabled
actual = loaded, active
Interfaces to services mapping:
interface=10-3-20-167 :cifs
interface=10-3-20-130 :cifs
interface=10-3-20-131 :cifs

Next you need to determine the file system path from the root of the Data Mover. This can be done with the server_mount command. As in the prior step, it’s more efficient to grep for the name of the file system; running it without grep could generate multiple screens of output depending on the number of file systems you have.

[nasadmin@stlpemccs04a /]$ server_mount server_4 | grep Filesystem_03
Filesystem_03 on /root_vdm_3/Filesystem_03 uxfs,perm,rw
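If you want to script this step, the export path is just the third whitespace-delimited field of that server_mount output line. A small sketch using the sample line above:

```shell
# Pull the mount path (field 3) out of a server_mount output line.
MOUNT_PATH=$(echo "Filesystem_03 on /root_vdm_3/Filesystem_03 uxfs,perm,rw" | awk '{print $3}')
echo "$MOUNT_PATH"
```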

The final step is to export the file system using the path from the prior step.  The file system must be exported from the root of the Data Mover rather than the VDM.  Note that once you have exported the VDM file system from the CLI, you can then manage it from within Unisphere if you’d like to set server permissions.  The “-option anon=0,access=server_name,root=server_name” portion of the CLI command below can be left off if you’d prefer to use the GUI for that.

[nasadmin@Celerra_hostname]$ server_export server_4 -Protocol nfs -option anon=0,access=server_name,root=server_name /root_vdm_3/Filesystem_03
server_4 : done

At this point the client can mount the path with NFS.


Matching LUNs and UIDs when presenting VPLEX LUNs to Unix hosts


Our naming convention for LUNs includes the pool ID, LUN number, server name, filesystem/drive letter, last four digits of the array’s serial number, and size (in GB). Having all of this information in the LUN name makes for very easy reporting and identification of LUNs on a server.  This is what our LUN names look like: P1_LUN100_SPA_0000_servername_filesystem_150G

Typically, when presenting a new LUN to our AIX administration team for a new server build, they would assign the LUNs to specific volume groups based on the LUN names. The command ‘powermt display dev=hdiskpower#’ always includes the name and intended volume group for the LUN, making it easy for our admins to identify a LUN’s purpose.  Now that we are presenting LUNs through our VPLEX, when they run a powermt display on the server, the UID for the LUN is shown rather than the name.  Below is a sample of what is displayed.

root@VIOserver1:/ # powermt display dev=all
Pseudo name=hdiskpower0
VPLEX ID=FNM00141800023
Logical device ID=6000144000000010704759ADDF2487A6 (this would usually be displayed as a LUN name)
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --   -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
1 fscsi1 hdisk8 CL1-0B active alive 0 0
1 fscsi1 hdisk6 CL1-0F active alive 0 0
0 fscsi0 hdisk4 CL1-0D active alive 0 0
0 fscsi0 hdisk2 CL1-07 active alive 0 0

Pseudo name=hdiskpower1
VPLEX ID=FNM00141800023
Logical device ID=6000144000000010704759ADDF2487A1 (this would usually be displayed as a LUN name)
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --   -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
1 fscsi1 hdisk9 CL1-0B active alive 0 0
1 fscsi1 hdisk7 CL1-0F active alive 0 0
0 fscsi0 hdisk5 CL1-0D active alive 0 0
0 fscsi0 hdisk3 CL1-07 active alive 0 0

In order to easily match up the UIDs with the LUN names on the server, an extra step needs to be taken in the VPLEX CLI. Log in to the VPLEX using a terminal emulator, and once you’re logged in, run the ‘vplexcli’ command. That will take you to a shell that allows additional commands to be entered.

login as: admin
Using keyboard-interactive authentication.
Password:
Last login: Fri Sep 19 13:35:28 2014 from 10.16.4.128
admin@service:~> vplexcli
Trying ::1…
Connected to localhost.
Escape character is ‘^]’.

Enter User Name: admin

Password:

VPlexcli:/>

Once you’re in, run the ls -t command with the additional options listed below. You will need to substitute the STORAGE_VIEW_NAME with the actual name of the storage view that you want a list of LUNs from.

VPlexcli:/> ls -t /clusters/cluster-1/exports/storage-views/STORAGE_VIEW_NAME::virtual-volumes

The output looks like this:

/clusters/cluster-1/exports/storage-views/st1pvio12a-b:
Name Value
————— ————————————————————————————————–
virtual-volumes [(0,P1_LUN411_7872_SPB_VIOServer1_VIO_10G,VPD83T3:6000144000000010704759addf2487a6,10G),
(1,P0_LUN111_7872_SPA_VIOServer1_VIO_10G,VPD83T3:6000144000000010704759addf2487a1,10G)]

Now you can easily see which disk UID is tied to which LUN name.
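Rather than matching them up by eye, the listing can be reshaped into “uid  name” pairs that you can grep with the Logical device ID from powermt. This is just a sketch: uid_to_lun is a helper name of my own, not a VPLEX command, and the sample input is copied from the listing above.

```shell
uid_to_lun() {
  # stdin: a virtual-volumes listing; stdout: one "<uid> <lun name>" per volume
  grep -o '([^)]*)' | awk -F'[,:]' '{ print $4, $2 }'
}

uid_to_lun <<'EOF'
virtual-volumes [(0,P1_LUN411_7872_SPB_VIOServer1_VIO_10G,VPD83T3:6000144000000010704759addf2487a6,10G),
(1,P0_LUN111_7872_SPA_VIOServer1_VIO_10G,VPD83T3:6000144000000010704759addf2487a1,10G)]
EOF
```

Piping that output through `grep -i <logical_device_id>` then gives you the LUN name directly.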

If you would like to get a list of every storage view and every LUN:UID mapping, you can substitute the storage view name with an asterisk (*).

VPlexcli:/> ls -t /clusters/cluster-1/exports/storage-views/*::virtual-volumes

The resulting report will show a complete list of LUNs, grouped by storage view:

/clusters/cluster-1/exports/storage-views/VIOServer1:
Name Value
————— ————————————————————————————————–
virtual-volumes [(0,P1_LUN421_9322_SPB_… (output truncated)

/clusters/cluster-1/exports/storage-views/VIOServer2:
Name Value
————— ————————————————————————————————–
virtual-volumes [(0,P1_LUN421_9322_SPB_VIOServer2_root_75G,VPD83T3:6000144000000010704759addf248ad9,75G),
(1,R2_LUN125_9322_SPB_VIOServer2_redo2_12G,VPD83T3:6000144000000010704759addf248b09,12G),
(2,R2_LUN124_9322_SPA_VIOServer2_redo1_12G,VPD83T3:6000144000000010704759addf248b04,12G),
(3,P3_LUN906_9322_SPB_VIOServer2_oraarc_250G,VPD83T3:6000144000000010704759addf248aff,250G),
(4,P2_LUN706_9322_SPA_VIOServer2_oraarc_250G,VPD83T3:6000144000000010704759addf248afa,250G)]

/clusters/cluster-1/exports/storage-views/VIOServer2:
Name Value
————— ————————————————————————————————
virtual-volumes [(1,R2_LUN1025_9322_SPB_VIOServer2_redo2_12G,VPD83T3:6000144000000010704759addf248b09,12G),
(2,R2_LUN1024_9322_SPA_VIOServer2_redo1_12G,VPD83T3:6000144000000010704759addf248b04,12G),
(3,P3_LUN906_9322_SPB_VIOServer2_ora1_250G,VPD83T3:6000144000000010704759addf248aff,250G),
(4,P2_LUN706_9322_SPA_VIOServer2_ora2_250G,VPD83T3:6000144000000010704759addf248afa,250G)]

/clusters/cluster-1/exports/storage-views/VIOServer3:
Name Value
————— ————————————————————————————————
virtual-volumes [(0,P0_LUN101_3432_SPA_VIOServer3_root_75G,VPD83T3:6000144000000010704759addf248a0a,75G),
(1,P0_LUN130_3432_SPA_VIOServer3_redo1_25G,VPD83T3:6000144000000010704759addf248a0f,25G),

Our VPLEX has only been installed for a few months and our team is still learning.  There may be a better way to do this, but it’s all I’ve been able to figure out so far.


VPLEX Health Check


This is a brief post to share the CLI commands and sample output for a quick VPLEX health check.  Our VPLEX had a dial-home event, and below are the commands that EMC ran to verify that it was healthy.  Here is the dial-home event that was generated:

SymptomCode: 0x8a266032
SymptomCode: 0x8a34601a
Category: Status
Severity: Error
Status: Failed
Component: CLUSTER
ComponentID: director-1-1-A
SubComponent: stdf
CallHome: Yes
FirstTime: 2014-11-14T11:20:11.008Z
LastTime: 2014-11-14T11:20:11.008Z
CDATA: Compare and Write cache transaction submit failed, status 1 [Versions:MS{D30.60.0.3.0, D30.0.0.112, D30.60.0.3}, Director{6.1.202.1.0}, ClusterWitnessServer{unknown}] RCA: The attempt to start a cache transaction for a Scsi Compare and Write command failed. Remedy: Contact EMC Customer Support.

Description: The processing of a Scsi Compare and Write command could not complete.
ClusterID: cluster-1

Based on that error the commands below were run to make sure the cluster was healthy.

This is the general health check command:

VPlexcli:/> health-check
 Product Version: 5.3.0.00.00.10
 Product Type: Local
 Hardware Type: VS2
 Cluster Size: 2 engines
 Cluster TLA:
 cluster-1: FNM00141800023
 
 Clusters:
 ---------
 Cluster Cluster Oper Health Connected Expelled Local-com
 Name ID State State
 --------- ------- ----- ------ --------- -------- ---------
 cluster-1 1 ok ok True False ok
 
 Meta Data:
 ----------
 Cluster Volume Volume Oper Health Active
 Name Name Type State State
 --------- ------------------------------- ----------- ----- ------ ------
 cluster-1 c1_meta_backup_2014Nov21_100107 meta-volume ok ok False
 cluster-1 c1_meta_backup_2014Nov20_100107 meta-volume ok ok False
 cluster-1 c1_meta meta-volume ok ok True
 
 Director Firmware Uptime:
 -------------------------
 Director Firmware Uptime
 -------------- ------------------------------------------
 director-1-1-A 147 days, 16 hours, 15 minutes, 29 seconds
 director-1-1-B 147 days, 15 hours, 58 minutes, 3 seconds
 director-1-2-A 147 days, 15 hours, 52 minutes, 15 seconds
 director-1-2-B 147 days, 15 hours, 53 minutes, 37 seconds
 
 Director OS Uptime:
 -------------------
 Director OS Uptime
 -------------- ---------------------------
 director-1-1-A 12:49pm up 147 days 16:09
 director-1-1-B 12:49pm up 147 days 16:09
 director-1-2-A 12:49pm up 147 days 16:09
 director-1-2-B 12:49pm up 147 days 16:09
 
 Inter-director Management Connectivity:
 ---------------------------------------
 Director Checking Connectivity
 Enabled
 -------------- -------- ------------
 director-1-1-A Yes Healthy
 director-1-1-B Yes Healthy
 director-1-2-A Yes Healthy
 director-1-2-B Yes Healthy
 
 Front End:
 ----------
 Cluster Total Unhealthy Total Total Total Total
 Name Storage Storage Registered Ports Exported ITLs
 Views Views Initiators Volumes
 --------- ------- --------- ---------- ----- -------- -----
 cluster-1 56 0 299 16 353 9802
 
 Storage:
 --------
 Cluster Total Unhealthy Total Unhealthy Total Unhealthy No Not visible With
 Name Storage Storage Virtual Virtual Dist Dist Dual from Unsupported
 Volumes Volumes Volumes Volumes Devs Devs Paths All Dirs # of Paths
 --------- ------- --------- ------- --------- ----- --------- ----- ----------- -----------
 cluster-1 203 0 199 0 0 0 0 0 0
 
 Consistency Groups:
 -------------------
 Cluster Total Unhealthy Total Unhealthy
 Name Synchronous Synchronous Asynchronous Asynchronous
 Groups Groups Groups Groups
 --------- ----------- ----------- ------------ ------------
 cluster-1 0 0 0 0
 
 Cluster Witness:
 ----------------
 Cluster Witness is not configured

This command checks the status of the cluster:

VPlexcli:/> cluster status
Cluster cluster-1
operational-status: ok
transitioning-indications:
transitioning-progress:
health-state: ok
health-indications:
local-com: ok

This command checks the state of the storage volumes:

VPlexcli:/> storage-volume summary
Storage-Volume Summary (no tier)
---------------------- --------------------

Health out-of-date 0
storage-volumes 203
unhealthy 0

Vendor DGC 203

Use meta-data 4
used 199

Capacity total 310T


VPLEX initiator paths dropped


We recently ran into an SP bug check on one of our VNX arrays, and after it came back up, several of the initiator paths to the VPLEX did not come back.  We were also seeing IO timeouts.  This is a known bug that occurs after an SP reboot and is fixed by Patch 1 for GeoSynchrony 5.3.  EMC has released a script that provides a workaround until the patch can be applied: https://download.emc.com/downloads/DL56253_VPLEX_VNX_SCRIPT.zip.zip

The following pre-conditions need to happen during a VNX NDU to see this issue on VPLEX:
1] During a VNX NDU, SPA goes down.
2] At this point, IO timeouts start happening on the IT nexuses pertaining to SPA.
3] The IO timeouts cause the VPLEX SCSI layer to send LU Reset TMFs, which get timed out as well.

You can review ETA 000193541 on EMC’s support site for more information.  It’s a critical bug and I’d suggest patching as soon as possible.

 


Rescan Storage System command on Celerra results in conflict:storageID-devID error


I was attempting to extend our main production NAS file pool on our NS-960 and ran into an issue.  I had recently freed up 8 SATA disks from a block pool and was attempting to re-use them to extend a Celerra file pool.  I created a new RAID group and a LUN that used the maximum capacity of the RAID group.  I then added the LUN to the Celerra storage group, making sure to set the HLU to a number greater than 15.  I then changed the setting on our main production file pool to auto-extend and clicked the “Rescan Storage Systems” option.  Unfortunately, rescanning produced an error every time it was run.  I have done this exact same procedure in the past and it’s worked fine.  Here is the error:

conflict:storageID-devID: disk=17 old:symm=APM00100600999,dev=001F new:symm=APM00100600999,dev=001F addr=c16t1l11

I checked the disks on the Celerra using the nas_disk -l command, and the new disk shows up as “in use” even though the rescan command didn’t complete properly.

[nasadmin@Celerra tools]$ nas_disk -l
id   inuse  sizeMB    storageID-devID      type   name  servers
17    y     7513381   APM00100600999-001F  CLATA  d17   <BLANK>

Once the dvol is presented to the Celerra (assuming the rescan goes fine), it should not be in use until it is assigned to a storage pool and a file system uses it.  In this case that didn’t happen.  If you run /nas/tools/whereisfs (depending on your DART version, it may be “.whereisfs” with the dot), it shows a listing of every file system and the disk and LUN each one resides on.  I verified with that command that the disk was not in use.

To be on the safe side, I opened an SR with EMC rather than simply deleting the disk.  They suggested that the NAS database is corrupted.  I’m going to have EMC’s Recovery Team check the usage of the diskvol, then delete it and re-add it.  In order to engage the Recovery Team you need to sign a “Data Deletion Form” absolving EMC of any liability for data loss, which is standard practice when they delete volumes on a customer array.  If there are any further caveats or important things to note after EMC has taken care of this, I’ll update this post.



EMC World 2015


I’m at EMC World in Las Vegas this week and it’s been fantastic so far.  I’m excited about the new 40TB XtremIO X-Bricks and how we might leverage them for our largest and most important 80TB Oracle database, about possible use cases for the Virtual VNX in our small branch locations, and about all the other exciting futures that I can’t publicly share because I’m under an NDA with EMC.  Truly exciting and innovative technology is coming from them.  VxBlock was also really impressive, although that’s not likely something my company will implement anytime soon.

I found out for the first time today that the excellent VNX Monitoring and Reporting application is now free for the VNX1 platform as well as VNX2.  If you would like to get a license for any of your VNX1 arrays, simply ask your local sales representative to submit a zero-dollar sales order for a license.  We’re currently evaluating ViPR SRM as a replacement for our soon-to-be end-of-life Control Center install, but until then VNX M&R is a fantastic tool that provides nearly the same performance data at no cost at all.  SRM adds much more functionality beyond VNX monitoring and reporting (e.g., monitoring SAN switches), and I’d highly recommend doing a demo if you’re also still using Control Center.

We also implemented a VPLEX last year; it’s truly been a lifesaver and an amazing platform.  We currently have a VPLEX Local implementation in our primary data center, and it’s allowed us to easily migrate workloads from one array to another seamlessly, with no disruption to applications.   I’m excited about the possibilities with RecoverPoint as well; I’m still learning about it.

If anyone else who’s at EMC World happens to read this, comment!  I’d love to hear your experiences and what you’re most excited about with EMC’s latest technology.


Scripting an alert for checking the availability of individual CIFS server shares


I was recently asked to come up with a method to alert on the availability of specific CIFS file shares in our environment.  This was due to a recent issue on our VNX where our data mover crashed and corrupted a single file system when it came back up.  That file system was unavailable on our CIFS server for several hours before we were aware of it.

This particular script requires maintenance whenever a new file system share is added to a CIFS server.  A unique line must be added for every file system share that you have configured.  If a file system is not mounted and the share is inaccessible, an email alert is sent.  If the share is accessible, the script does nothing when run from the scheduler.  If it’s run manually from the CLI, it echoes back to the screen that the path is active.

This is a bash shell script; I run it on a Windows server with Cygwin installed, using the ‘email’ package for SMTP.  It should also run fine from a Linux server, and you could substitute the ‘email’ syntax with sendmail or whatever other mail application you use.   I have it scheduled to check the availability of the CIFS shares every hour.

 

DIR1=file_system_1; SRV1=cifs_servername;  echo -ne $DIR1 && echo -ne ": " && [ -d //$SRV1/$DIR1 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR1 is offline" emailaddress@email.com

DIR2=file_system_2; SRV1=cifs_servername;  echo -ne $DIR2 && echo -ne ": " && [ -d //$SRV1/$DIR2 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR2 is offline" emailaddress@email.com

DIR3=file_system_3; SRV1=cifs_servername;  echo -ne $DIR3 && echo -ne ": " && [ -d //$SRV1/$DIR3 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR3 is offline" emailaddress@email.com

DIR4=file_system_4; SRV1=cifs_servername;  echo -ne $DIR4 && echo -ne ": " && [ -d //$SRV1/$DIR4 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR4 is offline" emailaddress@email.com

DIR5=file_system_5; SRV1=cifs_servername;  echo -ne $DIR5 && echo -ne ": " && [ -d //$SRV1/$DIR5 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR5 is offline" emailaddress@email.com
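Since a unique line has to be hand-maintained per share, the same checks could also be collapsed into a loop.  This is only a sketch under the same assumptions (Cygwin with the ‘email’ package; the server name, share list, and email address are placeholders to substitute):

```shell
#!/bin/bash
# Sketch: loop over a list of shares instead of one hand-edited line per share.
SRV=cifs_servername
SHARES="file_system_1 file_system_2 file_system_3 file_system_4 file_system_5"

check_share() {   # usage: check_share <server> <share>
  if [ -d "//$1/$2" ]; then
    echo "$2: Network Path is Active"
  else
    # In production, send the alert here instead of (or as well as) printing:
    # email -b -s "Network Path \\\\$1\\$2 is offline" emailaddress@email.com
    echo "$2: offline"
  fi
}

for DIR in $SHARES; do
  check_share "$SRV" "$DIR"
done
```

Adding a new share then only requires appending its name to the SHARES list.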


Default Passwords


Here is a collection of default passwords for EMC, HP, Cisco, VMware, TrendMicro and IBM hardware & software.

EMC Secure Remote Support (ESRS) Axeda Policy Manager Server:

  • Username: admin
  • Password: EMCPMAdm7n

EMC VNXe Unisphere (EMC VNXe Series Quick Start Guide, step 4):

  • Username: admin
  • Password: Password123#

EMC vVNX Unisphere:

  • Username: admin
  • Password: Password123#
    NB You must change the administrator password during this first login.

EMC CloudArray Appliance:

  • Username: admin
  • Password: password
    NB Upon first login you are prompted to change the password.

EMC CloudBoost Virtual Appliance:
https://<FQDN>:4444

  • Username: local\admin
  • Password: password
    NB You must immediately change the admin password.
    $ password <current_password> <new_password>

EMC Ionix Unified Infrastructure Manager/Provisioning (UIM/P):

  • Username: sysadmin
  • Password: sysadmin

EMC VNX Monitoring and Reporting:

  • Username: admin
  • Password: changeme

EMC RecoverPoint:

  • Username: admin
    Password: admin
  • Username: boxmgmt
    Password: boxmgmt
  • Username: security-admin
    Password: security-admin

EMC XtremIO:

XtremIO Management Server (XMS)

  • Username: xmsadmin
    password: 123456 (prior to v2.4)
    password: Xtrem10 (v2.4+)

XtremIO Management Secure Upload

  • Username: xmsupload
    Password: xmsupload

XtremIO Management Command Line Interface (XMCLI)

  • Username: tech
    password: 123456 (prior to v2.4)
    password: X10Tech! (v2.4+)

XtremIO Management Command Line Interface (XMCLI)

  • Username: admin
    password: 123456 (prior to v2.4)
    password: Xtrem10 (v2.4+)

XtremIO Graphical User Interface (XtremIO GUI)

  • Username: tech
    password: 123456 (prior to v2.4)
    password: X10Tech! (v2.4+)

XtremIO Graphical User Interface (XtremIO GUI)

  • Username: admin
    password: 123456 (prior to v2.4)
    password: Xtrem10 (v2.4+)

XtremIO Easy Installation Wizard (on storage controllers / nodes)

  • Username: xinstall
    Password: xiofast1

XtremIO Easy Installation Wizard (on XMS)

  • Username: xinstall
    Password: xiofast1

Basic Input/Output System (BIOS) for storage controllers / nodes

  • Password: emcbios

Basic Input/Output System (BIOS) for XMS

  • Password: emcbios

EMC ViPR Controller :
http://ViPR_virtual_ip (the ViPR public virtual IP address, also known as the network.vip)

  • Username: root
    Password: ChangeMe

EMC ViPR Controller Reporting vApp:
http://<hostname>:58080/APG/

  • Username: admin
    Password: changeme

EMC Solutions Integration Service:
https://<Solutions Integration Service IP Address>:5480

  • Username: root
    Password: emc

EMC VSI for VMware vSphere Web Client:
https://<Solutions Integration Service IP Address>:8443/vsi_usm/

  • Username: admin
  • Password: ChangeMe

Note:
After the Solutions Integration Service password is changed, it cannot be modified.
If the password is lost, you must redeploy the Solutions Integration Service and use the default login ID and password to log in.

Cisco Integrated Management Controller (IMC) / CIMC / BMC:

  • Username: admin
  • Password: password

Cisco UCS Director:

  • Username: admin
  • Password: admin
  • Username: shelladmin
  • Password: changeme

Hewlett Packard P2000 StorageWorks MSA Array Systems:

  • Username: admin
  • Password: !admin (exclamation mark ! before admin)
  • Username: manage
  • Password: !manage (exclamation mark ! before manage)
IBM Security Access Manager Virtual Appliance:
  • Username: admin
  • Password: admin

VCE Vision:

  • Username: admin
  • Password: 7j@m4Qd+1L
  • Username: root
  • Password: V1rtu@1c3!

VMware vSphere Management Assistant (vMA):

  • Username: vi-admin
  • Password: vmware

VMware Data Recovery (VDR):

  • Username: root
  • Password: vmw@re (make sure you enter @ as Shift-2 as in US keyboard layout)

VMware vCenter Hyperic Server:
https://Server_Name_or_IP:5480/

  • Username: root
  • Password: hqadmin

https://Server_Name_or_IP:7080/

  • Username: hqadmin
  • Password: hqadmin

VMware vCenter Chargeback:
https://Server_Name_or_IP:8080/cbmui

  • Username: root
  • Password: vmware

VMware vCenter Server Appliance (VCSA) 5.5:
https://Server_Name_or_IP:5480

  • Username: root
  • Password: vmware

VMware vCenter Operations Manager (vCOPS):

Console access:

  • Username: root
  • Password: vmware

Manager:
https://Server_Name_or_IP

  • Username: admin
  • Password: admin

Administrator Panel:
https://Server_Name_or_IP/admin

  • Username: admin
  • Password: admin

Custom UI User Interface:
https://Server_Name_or_IP/vcops-custom

  • Username: admin
  • Password: admin

VMware vCenter Support Assistant:
http://Server_Name_or_IP

  • Username: root
  • Password: vmware

VMware vCenter / vRealize Infrastructure Navigator:
https://Server_Name_or_IP:5480

  • Username: root
  • Password: specified during OVA deployment

VMware ThinApp Factory:

  • Username: admin
  • Password: blank (no password)

VMware vSphere vCloud Director Appliance:

  • Username: root
  • Password: vmware

VMware vCenter Orchestrator :
https://Server_Name_or_IP:8281/vco – VMware vCenter Orchestrator
https://Server_Name_or_IP:8283 – VMware vCenter Orchestrator Configuration

  • Username: vmware
  • Password: vmware

VMware vCloud Connector Server (VCC) / Node (VCN):
https://Server_Name_or_IP:5480

  • Username: admin
  • Password: vmware
  • Username: root
  • Password: vmware

VMware vSphere Data Protection Appliance:

  • Username: root
  • Password: changeme

VMware HealthAnalyzer:

  • Username: root
  • Password: vmware

VMware vShield Manager:
https://Server_Name_or_IP

  • Username: admin
  • Password: default
    type enable to enter Privileged Mode, password is 'default' as well

Teradici PCoIP Management Console:

  • The default password is blank

Trend Micro Deep Security Virtual Appliance (DS VA):

  • Login: dsva
  • password: dsva

Citrix Merchandising Server Administrator Console:

  • User name: root
  • password: C1trix321

VMTurbo Operations Manager:

  • User name: administrator
  • password: administrator
    If DHCP is not enabled, configure a static address by logging in with these credentials:
  • User name: ipsetup
  • password: ipsetup
    Console access:
  • User name: root
  • password: vmturbo

VPLEX Unisphere Login hung at “Retrieving Meta-Volume Information”


I recently had an issue where I was unable to log in to the Unisphere GUI on the VPLEX, it would hang with the message “Retrieving Meta-Volume Information” after progressing about 30% on the progress bar.

This was caused by a hung Java process.  In order to resolve it, you must restart the management server. This will not cause any disruption to hosts connected to the VPLEX.

To do this, run the following command:

ManagementServer:/> sudo /etc/init.d/VPlexManagementConsole restart

If this hangs or does not complete, you will need to run the top command to identify the PID for the java service:

admin@service:~>top
Mem:   3920396k total,  2168748k used,  1751648k free,    29412k buffers
Swap:  8388604k total,    54972k used,  8333632k free,   527732k cached

  PID USER      PR  NI  VIRT  RES  SHR S   %CPU %MEM    TIME+  COMMAND
26993 service   20   0 2824m 1.4g  23m S     14 36.3  18:58.31 java
 4948 rabbitmq  20   0  122m  42m 1460 S      1  1.1  13118:32 beam.smp
    1 root      20   0 10540   48   36 S      0  0.0  12:34.13 init

Once you’ve identified the PID for the java service, you can kill the process with the kill command, and then run the command to restart the management console again.

ManagementServer:/> sudo kill -9 26993
ManagementServer:/> sudo /etc/init.d/VPlexManagementConsole start
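Rather than reading the PID out of the top display by eye, it can also be extracted programmatically.  Below is a minimal sketch with a hypothetical helper function (the [j] bracket trick keeps the awk pattern from matching the awk process itself; the sample input mirrors the top output shown above):

```shell
#!/bin/bash
# Hypothetical helper: pull the first PID whose command contains "java"
# out of a process listing.
find_java_pid() {
  awk '/[j]ava/ {print $1; exit}'
}

# On the VPLEX management server you would feed it live data, e.g.:
#   ps -eo pid,user,comm | find_java_pid
# Demonstrated here with the sample top output from above:
printf '26993 service java\n4948 rabbitmq beam.smp\n1 root init\n' | find_java_pid
```

The resulting PID can then be passed to the sudo kill -9 command shown above.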

Once the management server restarts, you should be able to log in to the Unisphere for VPLEX GUI again.


Web interface disabled on brocade switch


I ran into an issue where one of our Brocade switches was inaccessible via a web browser. The error below was displayed when connecting to its IP:

Interface disabled
This Interface (10.2.2.23) has been blocked by the administrator.

In order to resolve this, you’ll need to allow port 80 traffic on the switch.  It was disabled on mine.

First, log in to the switch and review the existing IP filters (look for port 80 set to deny):

switcho1:admin> ipfilter --show

Name: default_ipv4, Type: ipv4, State: active
Rule Source IP Protocol Dest Port Action
1 any tcp 22 permit
2 any tcp 23 deny
3 any tcp 897 permit
4 any tcp 898 permit
5 any tcp 111 permit
6 any tcp 80 deny
7 any tcp 443 permit
8 any udp 161 permit
9 any udp 111 permit
10 any udp 123 permit
11 any tcp 600 – 1023 permit
12 any udp 600 – 1023 permit

Next, clone the default policy, as you cannot make changes to the default policy.  Note that you can name the policy anything you like, I chose to name it “Allow80”.

ipfilter --clone Allow80 -from default_ipv4

Delete the rule that denies port 80 (rule 6 in the above example):

ipfilter --delrule Allow80 -rule 6

Add a rule back in to permit it:

ipfilter --addrule Allow80 -rule 12 -sip any -dp 80 -proto tcp -act permit

Save it:

ipfilter --save Allow80

Activate it (this will change default policy to a “defined” state):

ipfilter --activate Allow80

 

That’s it… you should now be able to access your switch via the web browser.


NetApp FAS Zero disk procedure


We recently had a need to zero out and reinstall a NetApp FAS 8080 in order to move it from test to production.  Below are the steps to zero out the disks in the array.

Steps:

  1. SSH to each nodes service-processor
  2. Halt each node.
    1. system node halt -node Node1 -inhibit-takeover true
    2. system node halt -node Node2
  3. At the Loader prompt for each node boot ONTAP (You might want to do these one at a time so you don’t miss the CTRL-C for the Boot Menu)
    1. LOADER A> boot_ontap
    2. LOADER B> boot_ontap
  4. Press Ctrl + C when you see the message below to enter the Boot Menu and select option 4 to wipe the configuration and zero disks (Do this on each node)
*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************

(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node. Selection (1-8)? 4

5. Enter y to the questions that follow:

Zero disks, reset config and install a new file system?: y

This will erase all the data on the disks, are you sure?: y

The node will reboot and start initializing the disks.  Once the disks are zeroed the system should reboot to the cluster setup.


Pure Storage data reduction report


I had a request to publish a daily report that outlines our data reduction numbers for each LUN on our production Pure Storage arrays.  I wrote a script that logs in to the Pure CLI and issues the appropriate command, using ‘expect’ to do a screen grab and output the data to a csv file.  The csv file is then converted to an HTML table and published on our internal web site.

The ‘expect’ commands (saved as purevol.exp in the same directory as the bash script)

#!/usr/bin/expect -f
spawn ssh pureuser@10.10.10.10
expect "logon as: "
send "pureuser\r"
expect "pureuser@10.10.10.10's password: "
send "password\r"
expect "pureuser@pure01> "
send "purevol list --space\r"
expect "pureuser@pure01> "
send "exit\r"


The bash script (saved as purevol.sh):

#!/bin/bash

# Pure Data Reduction Report Script
# 11/28/16

#Define a timestamp function
#The output looks like this: 6-29-2016/8:45:12
timestamp() {
date +"%m-%d-%Y/%H:%M:%S"
}

#Remove existing output file
rm /home/data/pure/purevol_532D.txt

#Run the expect script to create the output file
/usr/bin/expect -f /home/data/pure/purevol.exp > /home/data/pure/purevol_532D.txt

#Remove the first 12 lines of the output file
#They contain login and command execution info not needed in the report
sed -i '1,12d' /home/data/pure/purevol_532D.txt

#Remove the last line of the output file
#This is because the expect script leaves a CLI prompt as the last line of the output
sed -i '$ d' /home/data/pure/purevol_532D.txt

#Add date to output file, remove previous temp files
rm /home/data/pure/purevol_532D-1.csv
rm /home/data/pure/purevol_532D-2.csv
echo -n "Run time: " > /home/data/pure/purevol_532D-1.csv
echo $(timestamp) >> /home/data/pure/purevol_532D-1.csv

#Add titles to new csv file
echo "Volume","Size","Thin Provisioning","Data Reduction"," "," ","Total Reduction"," "," ","Volume","Snapshots","Shared Space","System","Total" >> /home/data/pure/purevol_532D-1.csv

#Convert the space delimited file into a comma delimited file
sed -r 's/^\s+//;s/\s+/,/g' /home/data/pure/purevol_532D.txt > /home/data/pure/purevol_532D-2.csv

#Combine the csv files into one
cat /home/data/pure/purevol_532D-1.csv /home/data/pure/purevol_532D-2.csv > /home/data/pure/purevol_532D.csv

#Use the csv2htm perl script to convert the csv to an html table
#csv2html script available here:  http://web.simmons.edu/~boyd3/imap/csv2html/
./csv2htm.pl -e -T -i /home/data/pure/purevol_532D.csv -o /home/data/pure/purevol_532D.html

#Copy the html file to the www folder to publish it
cp /home/data/pure/purevol_532D.html /cygdrive/C/inetpub/wwwroot
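For the daily publication, the script can be driven from the scheduler; a hypothetical crontab entry for the reporting server is shown below (the script path matches the location used above, and the run time is arbitrary):

```
# m h dom mon dow  command
0 6 * * * /home/data/pure/purevol.sh
```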

Below is an example of what the output looks like after the script is run and the output is converted to an HTML table.  Note there are columns missing to the right in order to fit the formatting of this post.  Also included are the numbers for Total reduction and snapshots.

Name Size Thin Provisioning Data Reduction
LUN001_PURE_0025_ESX_5T 5T 78% 16.4 to 1
LUN002_PURE_0025_ESX_5T 5T 75% 7.8 to 1
LUN003_PURE_0025_ESX_5T 5T 71% 9.3 to 1
LUN004_PURE_0025_ESX_5T 5T 87% 10.5 to 1

 

 

 

 



Block and File VNX Password Change Procedure


Below is the procedure for changing the passwords on a Unified VNX on both the block and file sides.

Please note that changing the global VNX administrator password can cause communication failures between the Control Station and the array. The issue is documented in emc261195 and is the reason I’m adding this post, as I was researching how to avoid it. The article notes that in DART OS versions newer than v7.0.14 the synchronization is automated and the cached credentials are updated automatically; in DART OS v7.0.14 and prior you must update them manually on the active control station.

Whenever a change is made to the active Control Station, always verify that the standby control station’s configuration matches on takeover.  Takeover is initiated by the standby control station; failover is initiated by the active control station. For example, if the time zone is changed on the active control station, it is not part of the synchronization during the failover process. Time zone changes need to be configured separately on each one, and the setting requires a reboot to take effect. Unisphere will prompt you to do so as a reminder; however, on takeover/failover the newly promoted control station never reboots.

Block Side: Change the sysadmin global domain account

A) Updating global domain account password

1) Log into Unisphere with the sysadmin global account, using the control station IP

2) From the “All Systems” page select “Domains”

3) Select “Manage Global Users”

4) Highlight sysadmin and click on “Modify”

5) Change the password

B) Update Security on Control Station

1) Open a putty session to the primary control station and run the commands below. They should work as-is; a possible exception is that the first two might require you to add user ID/password info before they are accepted (add -user sysadmin -password "pswd" -scope 0 to the commands below).

/nas/sbin/naviseccli -h spa -AddUserSecurity -user sysadmin -scope 0

/nas/sbin/naviseccli -h spb -AddUserSecurity -user sysadmin -scope 0

nas_storage -modify id=1 -security

C) Verify the updated sysadmin password in the following locations:

1) Via Unisphere (Log in with the newly changed password)

2) Verify communication between the control station and storage processors on each array:

* Log in to the active control station via putty using the nasadmin local account

* Run the following commands:

/nas/sbin/navicli -h SPA getagent

nas_storage -check -all

The NAS storage check command should respond with “done”.

File Side: Change the nasadmin and root local accounts

A) Local accounts need to be modified on each array individually

1) Log into Unisphere with the sysadmin global account

2) Select the desired array

3) Click on “Settings” -> “Security” -> “Local Users for File”

4) Highlight nasadmin click on “Properties”

5) Change the password

6) Highlight root, click on Properties

7) Change the password

Note that the password for the local nasadmin and root accounts can also be changed from the CLI:

[root@fakevnxprompt ~]# passwd nasadmin
Changing password for user nasadmin.
New UNIX password: <enter a password>
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@fakevnxprompt ~]#

B) Verify both passwords

1) Log in to the active control station via putty as nasadmin, verify your newly changed password

2) run the su command and verify your newly changed root password

C) Propagate changes to the standby control station

At this point the standby control station local account passwords have not yet been updated. It’s now time to test control station failover.  You can review one of my related prior blog posts on control station failover here.

1) While logged in to the active control station with root privileges, run this command:

/nas/sbin/cs_standby -failover

This will synchronize the control stations, reboot the active control station, and then make the standby control station active.

Caveats: Please note the expected issues listed below while there is no online, active control station:

* In-band production data will not be disrupted

* Data mover failover cannot occur

* Auto-extension of filesystems will not occur

* Scheduled checkpoints will not occur

* Replication sessions may be disrupted

2) Log in to the active control station (the previous standby control station)

3) Verify the new nasadmin password

4) su and verify the root password

5) Fail back to the original primary control station:

/nas/sbin/cs_standby -failover


Isilon CLI Command Reference


Below is a command reference for almost all of the CLI commands available in the Isilon OneFS CLI.  The basic commands are outlined and in many cases multiple samples of using the commands are provided.  This information was gathered as an easier to use, more condensed reference than the 1000+ page long CLI Administration guide provided by EMC, however I recommend you refer to that guide for more specific information on the commands.  The information gathered here applies to OneFS 8.0.0.0.

Device Commands
isi devices drive firmware list Displays a list of firmware details for the data drives in a node.
isi devices drive firmware list --node-lnn all View the drive firmware status of all the nodes.
isi devices drive firmware list --node-lnn View the drive firmware status of drives on a specific node.
isi devices drive firmware update start all --node-lnn all Update the drive firmware for your entire cluster.
isi devices drive firmware update start all --node-lnn Update the drive firmware for a specific node only.
isi devices -d Confirm that a node has finished updating.
isi devices firmware list Displays a list of firmware details for the data drives in a node.
isi devices drive firmware update list Ensure that no drive firmware updates are currently in progress.
isi devices firmware update start Updates firmware on one or more drives in a node.
isi devices firmware update view Displays info about a drive firmware update for a node.
isi devices firmware view { | --lnum } Displays information about the firmware on a single drive
isi devices node list Identify the serial number of the node.
isi devices node add Join a node to the cluster.
isi devices node smartfail Smartfails a node and removes it from the cluster.
isi devices node stopfail Discontinues the smartfail process on a node.
isi devices --action smartfail --device 3 This command removes a node with a logical node number (LNN) of 3.
lnnset 22 83 switches the LNN of a node from 22 to 83
isi devices add Scans for available drives and adds the drives
isi devices drive add Scans for available drives and adds the drives to the node.
isi devices drive view Displays information about a single drive.
isi devices drive format Formats a drive so you can add it to a node.
isi devices drive list Displays a list of data drives in a node.
isi devices drive suspend Temporarily suspends all activities for a drive.
isi devices drive purposelist Displays a list of possible use cases for drives.
isi devices drive smartfail Smartfails a drive so you can remove it from a node.
isi devices drive stopfail Discontinues the smartfail process on a drive.
isi readonly list Displays a list of read-only status by node.
isi readonly modify Modifies a node’s read-only status.
isi readonly view Displays the read-only status of a node.
isi servicelight list Displays a list of service LEDs in the cluster by node and the status of each service LED.
isi servicelight modify Turns a node’s service LED on or off.
isi servicelight view Displays the status of a node’s service LED.
isi devices drive purpose Assigns a use case to a drive. For example, you can designate a drive for normal data
  storage operations, or you can designate the drive for L3 caching instead of storage.
System Commands
reboot Reboots the cluster.
reboot 8 Reboots a single node, in this case node 8.
shutdown all shuts down all nodes on the cluster.
isi email settings modify Modify email settings for the cluster.
isi email settings view View cluster email settings.
isi services [-l | -a]  [ [{enable | disable}]] Displays a list of available services.
isi set Works similar to chmod, providing a mechanism to adjust OneFS-specific file attributes
isi version [--format {list | json}]  --verbose Displays cluster version information.
isi_for_array Runs commands on multiple nodes in an array, either in parallel or in serial.
isi_gather_info Collects and uploads the most recent cluster log information to ESRS
isi_phone_home Modify the settings for the isi_phone_home feature.
isi license licenses view View information about the current status of any optional Isilon software modules.
isi status Cluster node and drive health, storage data sizes, ip addresses, throughput, critical events and job status
isi status -n 1 Displays status info about a specific logical node.
isi batterystatus list View the status of all NVRAM batteries and charging systems on the node.
isi snmp settings Modify SNMP settings for a cluster.
isi snmp settings view View snmp settings.
isi get Displays information about a set of files, including the requested protection, current actual protection, and whether write-coalescing is enabled.
isi batterystatus list Displays a list of batteries in the cluster by node & the status of each battery.
isi batterystatus view Displays the status of a node’s batteries.
Config Commands
isi config Opens a new prompt where node and cluster settings can be altered, only isi config commands are valid
changes Displays a list of changes to the configuration that have not been committed.
commit Commits configuration settings and then exits isi config.
date Displays or sets the current date and time on the cluster.
encoding [list] Sets the default encoding character set for the cluster.
exit Exits the isi config subsystem.
help Displays a list of all isi config commands.
interface {enable | disable} Displays the IP ranges, netmask, and MTU and enables or disables internal interfaces.
iprange Displays a list of internal IP addresses that can be assigned to nodes, or adds addresses to the list.
joinmode Displays the setting for how nodes are added to the current cluster.
migrate Displays a list of IP address ranges that can be assigned to nodes or both adds and removes IP ranges from that list.
mtu Displays the size of the maximum transmission unit (MTU) that the cluster uses
name Displays the names currently assigned to clusters when run with no arguments, or assigns new name.
netmask []] Displays or sets the subnetmask on the cluster.
quit Exits the isi config subsystem.
reboot [{ | all}] Reboots one or more nodes, specified by LNN.
shutdown [{ | all}] Shuts down one or more nodes, specified by LNN.
status [advanced] Displays current information on the status of the cluster.
timezone [] Displays the current time zone or specifies new time zones.
version Displays information about the current OneFS version.
wizard Activates a wizard on unconfigured nodes.
deliprange Displays a list of internal network IP addresses that can be assigned to nodes or
  removes specified addresses from the list.
lnnset Displays a table of logical node number (LNN), device ID, and internal IP address for
  each node in the cluster when run without arguments. Changes the LNN when specified.
Statistics Commands
isi statistics client Displays the most active, by throughput, clients accessing the cluster for each supported protocol.
isi statistics drive Displays performance information by drive.
isi statistics heat Displays the most active /ifs paths for various metrics.
isi statistics query current Displays current statistics.
isi statistics query history Displays available historical statistics.
isi statistics list keys Displays a list of all available keys.
isi statistics list operations Displays a list of valid arguments for the –operations option.
isi statistics protocol Displays statistics by protocol, such as NFSv3 and HTTP.
isi statistics pstat Displays a selection of cluster-wide and protocol data.
isi statistics system Displays general cluster statistics & op rates for supported protocols, network & disk traffic.
Access Zones
isi zone zones create Isolates data and restrict which users can access the data
isi zone zones create DevZone /ifs/hr/data Creates an access zone named DevZone and sets the base directory to /ifs/hr/data.
isi zone zones modify --add-auth-providers : Add an authentication provider.
isi zone zones modify DevZone --clear-auth-providers Remove all authentication providers.
isi zone zones delete DevZone Delete any access zone except the built-in System zone.
isi zone zones list View a list of all access zones on the cluster.
isi zone zones view TestZone Display the setting details of TestZone.
isi zone restrictions create Prohibits user or group access to the /ifs directory.
isi zone restrictions delete Removes a restriction that prohibits user or group access to the /ifs directory.
isi zone restrictions list Displays a list of users or groups that are prohibited from accessing the /ifs directory.
Authentication/Active Directory
isi auth ads create Create an Active Directory provider.
isi auth ads list Displays a list of Active Directory providers.
isi auth ads view Displays the properties of an Active Directory provider.
isi auth ads create --name=adserver.corp.com \ --user=admin --groupnet=group5 Specific example of adding a domain.
isi auth ads modify Modify the advanced settings for an Active Directory provider.
isi auth ads delete Delete an Active Directory provider.
isi auth ldap create Create an LDAP provider.
isi auth ldap modify Modify any setting for an LDAP provider (except its name).
isi auth ldap delete Delete an LDAP provider.
isi auth ldap list Displays a list of LDAP providers.
isi auth ldap view Displays the properties of an LDAP provider.
isi auth nis create Configure a NIS provider.
isi auth nis modify Modify any setting for an NIS provider (except its name).
isi auth nis delete Delete a NIS provider.
isi auth nis list Displays a list of NIS providers and indicates whether a provider is functioning correctly.
isi auth nis view Displays the properties of an NIS provider.
isi auth krb5 create { | --keytab-file } Creates an MIT Kerberos provider and joins a user to an MIT Kerberos realm.
isi auth krb5 delete [--force] Deletes an MIT Kerberos authentication provider and removes the user from an MIT Kerberos realm.
isi auth krb5 list Displays a list of MIT Kerberos authentication providers.
isi auth krb5 view Displays the properties of an MIT Kerberos authentication provider.
isi auth krb5 domain create [--realm ] Creates an MIT Kerberos domain mapping.
isi auth krb5 domain delete [--force] Deletes an MIT Kerberos domain mapping.
isi auth krb5 domain list Displays a list of MIT Kerberos domain mappings.
isi auth krb5 domain modify [--realm ] Modifies an MIT Kerberos domain mapping.
isi auth krb5 domain view Displays the properties of an MIT Kerberos domain mapping.
isi auth krb5 realm create Creates an MIT Kerberos realm.
isi auth krb5 realm modify Modify an MIT Kerberos realm.
isi auth krb5 realm list View a list of all Kerberos realms configured on the cluster.
isi auth krb5 realm view View a list of all Kerberos realms configured on the cluster.
isi auth krb5 realm view TEST.corp.COM View a list for a specific domain.
isi auth krb5 realm delete Delete an MIT Kerberos realm.
isi auth krb5 domain create Add an MIT Kerberos domain to an MIT Kerberos realm.
isi auth krb5 domain modify Modify a Kerberos domain.
isi auth krb5 domain modify –realm Example of modifying a Kerberos domain by specifying an alternate realm.
isi auth krb5 domain view View the properties of an MIT Kerberos domain mapping.
isi auth krb5 domain list List one or more MIT Kerberos domains.
isi auth krb5 domain delete Delete one or more MIT Kerberos domain mappings.
isi auth krb5 spn list View the service principal names (SPNs) and their associated keys that are registered for an MIT Kerberos provider.
isi auth krb5 spn delete –all Delete all keys for a specified SPN or a specific version of a key.
isi auth krb5 spn create Add or update keys for an SPN.
isi auth krb5 spn check Compare the list of registered SPNs against the list of discovered SPNs.
isi auth krb5 spn fix Fix the missing SPNs.
isi auth krb5 spn fix Add missing SPNs for an MIT Kerberos service provider
isi auth krb5 spn import Import the keys of a keytab file.
isi auth ads spn check Checks valid service principal names (SPNs).
isi auth ads spn create Adds one or more service principal names (SPNs) for a machine account.
isi auth ads spn delete Deletes one or more SPNs that are registered against a machine account.
isi auth ads spn fix Adds missing service principal names (SPNs) for an Active Directory provider.
isi auth ads spn list Displays a list of service principal names (SPNs) that are registered against a machine account.
isi auth krb5 spn create Creates or updates keys for an MIT Kerberos provider.
isi auth krb5 spn delete { | –all} Deletes keys from an MIT Kerberos provider.
isi auth krb5 spn check Checks for missing service principal names (SPNs) for an MIT Kerberos provider.
isi auth krb5 spn fix Adds the missing service principal names (SPNs) for an MIT Kerberos provider.
isi auth krb5 spn import Imports keys from a keytab file for an MIT Kerberos provider.
isi auth krb5 spn list Lists the service principal names (SPNs) and keys registered for an MIT Kerberos provider.
isi auth ads trusts controllers list Displays a list of domain controllers for a trusted domain.
isi auth ads trusts list Displays a list of trusted domains.
isi auth ads trusts view Displays the properties of a trusted domain.
isi auth error Displays error code definitions from the authentication log files.
isi auth file create Creates a file provider.
isi auth file delete Deletes a file provider.
isi auth file list Displays a list of file providers.
isi auth file modify Modifies a file provider.
isi auth file view Displays the properties of a file provider.
isi auth mapping create {| –source-uid Creates a manual mapping between a source identity and target identity
isi auth mapping delete {| –source-uid Deletes one or more identity mappings.
isi auth mapping dump Displays or prints the kernel mapping database.
isi auth mapping flush Flushes the cache for one or all identity mappings.
isi auth mapping import Imports mappings from a source file to the ID mapping database.
isi auth mapping list Displays the ID mapping database for an access zone.
isi auth mapping modify Sets or modifies a mapping between two identities.
isi auth mapping token Displays the access token that is calculated for a user during authentication.
isi auth mapping view Displays mappings for an identity.
isi auth netgroups view Displays information about a netgroup.
isi auth privileges Displays a list of system privileges.
isi auth refresh Refreshes authentication system configuration settings.
isi auth roles create Creates an empty role.  Run the isi auth roles modify command to add items.
isi auth roles delete Deletes a role.
isi auth roles list [–verbose] Displays a list of roles.
isi auth roles members list Displays a list of the members of a role.
isi auth roles modify Modifies a role.
isi auth roles privileges list Displays a list of privileges that are associated with a role.
isi auth roles view Displays the properties of a role.
isi auth settings acls modify Modifies access control list (ACL) settings for OneFS.
isi auth settings acls view Displays access control list (ACL) settings for OneFS.
isi auth settings global modify Modifies the global authentication settings.
isi auth settings global view Displays global authentication settings.
isi auth settings krb5 modify Modifies the global settings of an MIT Kerberos authentication provider.
isi auth settings krb5 view Displays MIT Kerberos provider authentication settings.
isi auth settings mapping modify Modifies identity mapping settings.
isi auth settings mapping view [–zone ] Displays identity mapping settings in an access zone.
isi auth status Displays the status of available authentication providers and indicates which are functioning.
isi auth privileges –verbose To view a list of all privileges.
isi auth id To view a list of your privileges.
isi auth mapping token To view a list of privileges for another user.
Managing file providers
isi auth file create Specify replacement files for any combination of users, groups, and netgroups.
pwd_mkdb /ifs/test.passwd Generates an spwd.db file in the /etc directory.
isi auth file modify Modify any setting for a file provider, including its name.
isi auth file delete Delete a file provider.
Managing local users and groups
isi auth users create Creates a user account.
isi auth users delete { | –uid | –sid } Deletes a local user from the system.
isi auth users flush Flushes cached user information.
isi auth users list Displays a list of users.
isi auth users modify { | –uid | –sid } Modifies a local user.
isi auth users view { | –uid | –sid } Displays the properties of a user.
isi auth users list –provider=”:” View a list of users and groups for a specified provider.
isi auth users list –provider=”lsa-ldap-provider:Unix LDAP” List users and groups for an LDAP provider type that is named Unix LDAP.
isi auth users create –provider=”local:” \ –password=”” Create a local user.
isi auth groups create –provider “local:” Create a local group.
isi auth local view system View the current password settings.
isi auth local list Displays a list of local providers.
isi auth local modify Modifies a local provider.
isi auth local view Displays the properties of a local provider.
isi auth log-level modify [–verbose] Specifies the logging level for the authentication service on the node.
isi auth log-level view Displays the logging level for the authentication service on the node.
isi auth users modify Modify any setting for a local user account except the user name.
isi auth groups modify Add or remove members from a local group.
isi auth users delete Delete a local user.
isi auth groups delete Delete a local group.
isi auth groups flush Flushes cached group information.
isi auth groups list Displays a list of groups.
isi auth groups members list { | –gid | –sid } Displays a list of members that are associated with a group.
isi auth groups modify { | –gid | –sid } Modifies a local group.
isi auth groups view { | –gid | –sid } Displays the properties of a group.
isi auth id Displays your access token.
isi auth access /ifs/ Lists the permissions that a user has to access a given file or directory.
SMB
isi smb settings global view View the global SMB settings.
isi smb settings global modify Modify SMB Global Settings.
isi smb shares create Create SMB Shares.
isi smb shares modify Modify SMB Shares.
isi smb shares modify Share2 –file-filtering-enabled=yes \ file-filter-extensions=.wav,.mpg Enables file filtering on Share2 and denies writes to .wav and .mpg files.
isi smb shares list Displays a list of SMB shares.
isi smb shares permission create Creates permissions for an SMB share.
isi smb shares permission delete Deletes user or group permissions for an SMB share.
isi smb shares permission list Displays a list of permissions for an SMB share.
isi smb shares permission modify Modifies permissions for an SMB share.
isi smb shares permission view Displays a single permission for an SMB share.
isi smb shares view [–zone ] Displays information about an SMB share.
isi smb settings shares view View the default SMB share settings specific to an access zone.
isi smb settings shares modify Configure SMB share settings specific to each access zone.
isi smb settings global modify –zone=TestZone –impersonate-guest=never Specifies that guests are never allowed access to shares in the TestZone access zone.
isi smb shares delete Share1 –zone=zone-5 Deletes a share named Share1 from the access zone named zone-5.
isi smb shares permission modify Modify SMB Share Permissions.
isi smb shares permission list ifs List permissions on a share.
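Putting the share commands above together, a minimal sketch of creating a zone-scoped share and granting a group change access might look like the following (the share name, zone name, group, and path are illustrative assumptions, not from the original):

```shell
# Hypothetical example: create a share in an access zone, grant a group
# change access, then list the resulting permissions.
isi smb shares create Marketing --path=/ifs/data/marketing --zone=TestZone
isi smb shares permission modify Marketing --zone=TestZone \
  --group="DOMAIN\\marketing" --permission=change --permission-type=allow
isi smb shares permission list Marketing --zone=TestZone
```

These commands run only on an Isilon cluster; adjust the zone and path to your environment.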
isi smb log-level filters create Creates a new SMB log filter.
isi smb log-level filters delete Deletes SMB log filters.
isi smb log-level filters list Lists SMB log filters.
isi smb log-level filters view View an individual SMB log-level filter.
isi smb log-level modify [–verbose] Sets the log level for the SMB service.
isi smb log-level view Shows the current log level for the SMB service.
isi smb openfiles list View a list of open files.
isi smb openfiles close [–force] Closes an open file.
isi smb openfiles list Displays a list of files that are open in SMB shares.
isi smb sessions delete [{–user | –uid | –sid }] [–force] Deletes SMB sessions, filtered first by computer and then optionally by user.
isi smb sessions delete computer1 Deletes all SMB sessions associated with a computer named computer1.
isi smb sessions delete computer1 –user=user1 Deletes all SMB sessions associated with a computer named computer1 and a user named user1.
isi smb sessions delete-user { | –uid | –sid } [–computer-name ] Deletes SMB sessions, filtered first by user then optionally by computer.
isi smb sessions list Displays a list of open SMB sessions.
isi smb settings global modify Modifies global SMB settings.
isi smb settings global view Displays the default SMB configuration settings.
isi smb settings shares modify Modifies default settings for SMB shares.
isi smb settings shares view [–zone ] Displays the default SMB share settings in an access zone.
NFS
isi nfs settings global view View the global NFS settings that are applied to all nodes in the cluster.
isi nfs settings global modify Configure NFS file sharing.
isi nfs settings global modify –nfsv4-enabled=yes Enables NFSv4 support.
isi nfs settings export view [–zone ] View the current default export settings.
isi nfs settings export modify Configure default NFS export settings.
isi nfs settings export modify –max-file-size 1099511627776 Specifies a maximum export file size of one terabyte.
isi nfs settings export modify –revert-max-file-size Restores the maximum export file size to the system default.
isi nfs exports view View NFS Exports.
isi nfs exports view 1 Displays the settings of the default export.
isi nfs exports modify 1 –map-root-enabled true –map-root nobody Enable root-squash for the default NFS export.
isi nfs exports list List NFS Exports.
isi nfs exports create Create NFS exports to share files in OneFS.
isi nfs exports create /ifs/data/projects,/ifs/home –all-dirs=yes Creates an export supporting client access to multiple paths and their subdirectories.
isi nfs exports check Check for errors in NFS exports, conflicting export rules, invalid paths, etc.
isi nfs exports modify Modify the settings for an existing NFS export.
isi nfs exports modify 2 –add-read-write-clients 10.1.1.100 For example, the following adds a client with read-write access to NFS export 2
isi nfs exports delete Delete unneeded NFS exports.
isi nfs exports delete 2 Deletes an export whose ID is 2.
isi nfs exports delete 3 –force Deletes an export whose ID is 3 without displaying a confirmation prompt
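Taken together, the export commands above suggest a simple create-verify-modify workflow; a hedged sketch (the path, export ID, and client address are examples only) could be:

```shell
# Hypothetical workflow: create an export, check for configuration errors,
# find its ID, then grant a client read-write access.
isi nfs exports create /ifs/data/projects --description="Projects export"
isi nfs exports check
isi nfs exports list
isi nfs exports modify 3 --add-read-write-clients 10.1.1.100
```

The export ID shown by `isi nfs exports list` is what you pass to `modify` and `delete`.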
isi nfs exports reload Reloads the NFS exports configuration.
isi nfs aliases create Create an NFS alias to map a long directory path to a simple pathname.
isi nfs aliases create /home /ifs/data/offices/hq/home –zone hq-home Creates an alias to a full pathname in OneFS in an access zone named hq-home
isi nfs aliases modify Modify an NFS alias.
isi nfs aliases modify /home –zone hq-home –name /home1 Changes the name of an alias in the access zone hq-home.
isi nfs aliases delete Delete an NFS alias.
isi nfs aliases delete /home –zone hq-home Deletes the alias /home in an access zone named hq-home.
isi nfs aliases list [–zone] View a list of NFS aliases that have already been defined for a particular zone.
isi nfs aliases view List NFS Aliases.
isi nfs aliases view /projects –zone hq-home –check Provides information on the alias in the hq-home access zone, including its health.
isi nfs log-level modify Sets the logging level for the NFS service.
isi nfs log-level view Shows the logging level for the NFS service.
isi nfs netgroup check Updates the NFS netgroup cache.
isi nfs netgroup flush Flushes the NFS netgroup cache.
isi nfs netgroup modify Modifies the NFS netgroup cache settings.
isi nfs nlm locks list Applies to NFSv3 only. Displays a list of NFS Network Lock Manager (NLM) advisory locks.
isi nfs nlm locks waiters List of clients waiting to place a Network Lock Manager (NLM) lock on a currently locked file.
isi nfs nlm sessions check Searches for lost locks.
isi nfs nlm sessions delete Delete NFS NLM Sessions.
isi nfs nlm sessions list Displays a list of clients holding NFS Network Lock Manager (NLM) locks.
isi nfs nlm sessions refresh Refreshes an NFS Network Lock Manager (NLM) client.
isi nfs nlm sessions view Displays information about NFS Network Lock Manager (NLM) client connections.
isi nfs settings zone modify Modifies the default NFS zone settings for the NFSv4 ID mapper.
isi nfs settings zone view Displays the default NFSv4-related access zone settings.
FTP
isi ftp settings view View a list of current FTP configuration settings.
isi services vsftpd enable Enable FTP.  The FTP service, vsftpd, is disabled by default.
isi ftp settings modify Modify FTP Settings.
isi ftp settings modify –server-to-server=yes Enables server-to-server transfers.
isi ftp settings modify –allow-anon-upload=no Disables anonymous uploads.
HTTP and HTTPS
isi http settings modify Modifies HTTP global settings.
isi http settings modify –service=enabled –dav=yes \ basic-authentication=yes Enables the HTTP service, WebDAV, and basic authentication.
isi_gconfig -t http-config https_enabled=true Enable HTTPS.
isi_gconfig -t http-config https_enabled=false Disable HTTPS.
isi http settings view Displays HTTP global settings.
File Filtering
isi file-filter settings modify Enables or modifies file filtering settings in an access zone.
isi file-filter settings view View file filtering settings in an access zone.
isi file-filter settings view –zone=DevZone Displays file filtering settings in the DevZone access zone.
isi file-filter settings modify –zone=DevZone \ file-filtering-enabled=yes file-filter-type=allow \ Enables file filtering in the DevZone access zone and allows users to write only to specific file types.
  file-filter-extensions=.xml,.html,.txt
isi file-filter settings modify –zone=DevZone \ file-filtering-enabled=yes file-filter-type=deny \ Enables file filtering in DevZone and denies users write access only to specific file types.
  file-filter-extensions=.xml,.html,.txt
Auditing
isi_audit_viewer View both configuration audit and protocol audit logs.
isi_audit_viewer -t protocol View protocol access audit logs.
isi_audit_viewer -t config View system configuration logs.
isi audit settings global modify [–protocol-auditing-enabled {yes | no}] Modify the types of protocol access events to be audited.
isi audit settings modify –syslog-forwarding-enabled=yes Enables forwarding of audited protocol access events to syslog.
isi audit settings modify –syslog-forwarding-enabled=no –zone=DevZone Disables forwarding of audited protocol access events from the DevZone access zone.
isi audit settings global modify –config-auditing-enabled=yes Enables system configuration auditing on the cluster.
isi audit settings global modify –config-syslog-enabled=yes Enables forwarding of system configuration changes.
isi audit settings global modify –config-syslog-enabled=no Disables forwarding of system configuration changes.
isi audit settings modify –audit-failure=create,close,delete –zone=DevZone Creates a filter that audits the failure of create, close, and delete events in the DevZone access zone.
isi audit settings global view Displays global audit settings configured on the EMC Isilon cluster.
isi audit settings view [–zone] [–verbose] Displays audit filter settings in an access zone and whether syslog forwarding is enabled.
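As a combined sketch of the audit commands above, enabling protocol auditing for one zone and forwarding its events to syslog might look like this (the zone name and the `--audited-zones` flag usage are assumptions based on the commands listed, not a definitive procedure):

```shell
# Hypothetical sketch: enable protocol auditing for the DevZone access zone,
# forward its events to syslog, then inspect the collected audit logs.
isi audit settings global modify --protocol-auditing-enabled=yes \
  --audited-zones=DevZone
isi audit settings modify --syslog-forwarding-enabled=yes --zone=DevZone
isi_audit_viewer -t protocol
```

Auditing generates log volume quickly; scope it to the zones you actually need.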
isi audit topics list Displays a list of configured audit topics, which are internal collections of audit data.
isi audit topics modify Modifies the properties of an audit topic.
isi audit topics view Displays the properties of an audit topic.
Snapshots
It is recommended that you do not create more than 1,000 snapshots of a single directory to avoid performance degradation.
You can create up to 20,000 snapshots on a cluster at a time.
isi snapshot snapshots modify Modify the name and expiration date of a snapshot.
isi snapshot snapshots modify HourlyBackup_07-15-2014_22:00 \ –expires 2014-07-25T01:30 Causes HourlyBackup_07-15-2014_22:00 to expire at 1:30 AM on July 25, 2014.
isi snapshot snapshots modify Modify the alias of a snapshot to assign an alternative name for the snapshot.
isi snapshot snapshots modify HourlyBackup_03-15-2017_22:00 \ –alias LastKnownGood Assigns an alias of LastKnownGood to HourlyBackup_03-15-2017_22:00.
isi snapshot snapshots list View a list of snapshots or detailed information about a specific snapshot.
isi snapshot snapshots view Displays the properties of an individual snapshot.
isi snapshot snapshots delete –snapshot newSnap1 Deletes newSnap1.
isi job jobs start snaprevert –snapid 46 Reverts the snapshot with an ID of 46 (HourlyBackup_07-15-2014_23:00).
isi snapshot schedules modify Modify a snapshot schedule.
isi snapshot schedules modify hourly_media_snap –duration 14D Snapshots created with the schedule hourly_media_snap are deleted 14 days after creation.
isi snapshot schedules delete [–force] [–verbose] Deletes a snapshot schedule.
isi snapshot schedules delete hourly_media_snap Deletes a snapshot schedule named hourly_media_snap.
isi snapshot schedules view Displays information about a snapshot schedule.
isi snapshot schedules view every-other-hour Displays detailed information about the snapshot schedule every-other-hour
isi snapshot schedules modify WeeklySnapshot –alias LatestWeekly Configures the alias LatestWeekly for the snapshot schedule WeeklySnapshot.
isi snapshot schedules create Creates a snapshot schedule.
isi snapshot schedules pending list Displays a list of snapshots that are scheduled to be generated by snapshot schedules.
isi snapshot aliases create [–verbose] Assigns a snapshot alias to a snapshot or to the live version of the file system.
isi snapshot aliases create latestWeekly Weekly-01-30-2017 Creates a snapshot alias for Weekly-01-30-2017.
isi snapshot aliases modify latestWeekly –target LIVE Reassigns the latestWeekly alias to the live file system.
isi snapshot aliases delete { | –all} [–force] [–verbose] Deletes a snapshot alias.
isi snapshot aliases list View a list of all snapshot aliases by running the following command.
isi snapshot aliases view latestWeekly Displays information about latestWeekly.
isi snapshot locks create Creates a snapshot lock.
isi snapshot locks create SnapshotApril2016 –expires 1M \ –comment “Maintenance Lock” Applies a snapshot lock to SnapshotApril2016, sets expiration in one month, and adds a description.
isi snapshot locks modify Modify the expiration date of a snapshot lock.
isi snapshot locks modify SnapshotApril2014 1 –expires 3D Sets an expiration date three days from the present date for a snapshot lock with an ID of 1.
isi snapshot locks delete Delete a snapshot lock.
isi snapshot locks delete Snapshot2014Apr16 1 Deletes a lock with an ID of 1 that is applied to Snapshot2014Apr16.
isi snapshot locks view Displays information about a snapshot lock.
isi snapshot locks list View snapshot lock information.
isi snapshot settings view View current SnapshotIQ settings.
isi snapshot settings modify –reserve 30 Sets the snapshot reserve to 30%.
isi job jobs start ChangelistCreate –older-snapid 3 –newer-snapid 7 Creates a changelist that shows what data was changed between snapshots.
isi_changelist_mod -k 21_23 Deletes changelist 21_23.
isi_changelist_mod -l View the IDs of changelists.
isi_changelist_mod -a 2_6 Displays the contents of a changelist named 2_6.
isi job jobs start domainmark –root /ifs/data/media \ –dm-type SnapRevert Creates a SnapRevert domain for /ifs/data/media.
isi snapshot snapshots create /ifs/data/media –name media-snap Creates a snapshot for /ifs/data/media.
isi snapshot snapshots delete OldSnapshot Deletes a snapshot named OldSnapshot.
isi snapshot schedules create hourly /ifs/data/media \ Creates a snapshot schedule for /ifs/data/media.
 HourlyBackup_%m-%d-%Y_%H:%M “Every day every hour” \ –duration 1M
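The schedule-creation example above can be extended into a short verification sequence; a sketch, reusing the same schedule name and pattern from the listing (the `view`/`pending list` steps are an assumed follow-up, not from the original):

```shell
# Hypothetical sequence: schedule hourly snapshots of /ifs/data/media that
# expire after one month, then confirm the schedule and its pending snapshots.
isi snapshot schedules create hourly /ifs/data/media \
  HourlyBackup_%m-%d-%Y_%H:%M "Every day every hour" --duration 1M
isi snapshot schedules view hourly
isi snapshot schedules pending list
```

Keeping `--duration` set ensures old snapshots expire automatically, which helps stay under the per-directory and cluster snapshot limits mentioned above.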
isi job jobs start snapshotdelete Increases the speed at which deleted snapshot data is freed on the cluster.
isi job jobs start shadowstoredelete Increases the speed at which deleted data shared between deduplicated and cloned files is freed on the cluster.
Deduplication
isi dedupe settings modify –assess-paths /ifs/data/archive Assess the amount of disk space you will save by deduplicating a directory.
isi job jobs start dedupeassessment Start the assessment job by running the following command.
isi dedupe reports list Identify the ID of the assessment report by running the following command.
isi dedupe reports view View prospective space savings by running the isi dedupe reports view.
isi dedupe settings modify –paths /ifs/data/media,/ifs/data/archive This command targets /ifs/data/archive and /ifs/data/media for deduplication.
isi job types modify Dedupe –schedule “Every Friday at 11:00 PM” Configures the deduplication job to run every Friday at 11:00 PM.
isi dedupe stats View the amount of disk space that you are currently saving with deduplication.
isi dedupe settings view Displays current deduplication settings.
Data Replication
isi sync settings modify Configure default settings for replication policies.
isi sync settings modify –report-max-age 3Y Configures SyncIQ to delete replication reports that are older than three years.
isi sync settings view Displays global replication settings.
isi sync policies resolve [–force] Resolving a replication policy enables you to run the policy again.
isi sync policies reset { | –all} [–verbose] If you cannot resolve the issue that caused the error, you can reset the replication policy.
isi sync policies delete { | –all} Deletes a replication policy.
isi sync policies modify Modify the settings of a replication policy.
isi sync policies modify newPolicy \ –target-compare-initial-sync on Enables differential synchronization for newPolicy; then run the policy with isi sync jobs start.
isi sync policies modify dailySync –schedule “” Ensures that the policy dailySync runs only manually.
isi sync policies delete dailySync Deletes dailySync from the source cluster.
isi sync policies enable dailySync Enables dailySync.
isi sync policies disable dailySync Disables dailySync.
isi sync policies list View information about replication policies.
isi sync policies view dailySync Displays detailed information about dailySync.
isi job jobs start domainmark –root /ifs/data/media \ –dm-type SyncIQ Start Domainmark.
isi sync jobs start dailySync –test Creates a report about how much data will be transferred when dailySync is run.
isi sync reports view dailySync 1 Displays the assessment report for dailySync.
isi sync jobs start dailySync Starts ‘dailySync’ replication job.
isi sync jobs start dailySync \ –source-snapshot HourlyBackup_07-15-2013_23:00 Replicates the source directory of dailySync according to snapshot HourlyBackup_07-15-2013_23.00.
isi sync jobs pause dailySync Pauses ‘dailySync’ replication job.
isi sync jobs resume dailySync Resumes ‘dailySync’ replication job.
isi sync jobs cancel dailySync Cancels ‘dailySync’ replication job.
isi sync jobs list View all active replication jobs.
isi sync jobs reports list Displays information about running replication jobs targeting the local cluster.
isi sync jobs reports view Displays information about a running replication job targeting the local cluster.
isi sync jobs view dailySync Displays detailed information about a replication job.
isi sync recovery allow-write dailySync Enables replicated directories and files specified in the dailySync policy to be writable.
isi sync recovery allow-write newPolicy –revert Reverts a failover operation for newPolicy.
isi sync recovery resync-prep dailySync Creates a mirror policy for dailySync.
isi sync jobs start dailySync_mirror Runs a mirror policy named dailySync_mirror immediately.
isi sync policies modify dailySync_mirror –enabled yes –schedule “every day at 12:01 AM” Schedules a mirror policy named dailySync_mirror to run daily at 12:01 AM.
isi sync recovery allow-write dailySync_mirror Allows writes to the directories specified in the dailySync_mirror policy.
isi sync recovery resync-prep dailySync_mirror Completes failback for dailySync_mirror, places the secondary cluster into read-only mode, and checks consistency.
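The failover and failback commands above form an ordered sequence; a sketch of the full round trip for a policy named dailySync (the ordering is inferred from the commands listed, so verify it against your OneFS version before relying on it):

```shell
# Hypothetical failover/failback sequence for the dailySync policy.
isi sync recovery allow-write dailySync        # failover: make the target writable
isi sync recovery resync-prep dailySync        # prepare failback; creates dailySync_mirror
isi sync jobs start dailySync_mirror           # replicate changes back to the source
isi sync recovery resync-prep dailySync_mirror # complete failback on the source
```

If failover was only a test, `isi sync recovery allow-write dailySync --revert` undoes it without the mirror-policy steps.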
isi sync recovery allow-write SmartLockSync Enables writes to the target directory of SmartLockSync.
isi worm domains modify –domain /ifs/data/smartlock \ –autocommit-offset 1m Automatically commits all files in /ifs/data/ smartlock to a WORM state after one minute.
isi sync target cancel Cancel a replication job that is targeting the local cluster.
isi sync target cancel dailySync Cancels a replication job created according to dailySync
isi sync target cancel –all Cancel all jobs targeting the local cluster.
isi sync target break Break local target association.
isi sync target break dailySync Breaks the association between dailySync and the local cluster.
isi sync target list View information about replication policies that are currently replicating data.
isi sync target view dailySync Displays detailed information about dailySync.
isi sync target reports list Displays information about completed replication jobs targeting the local cluster.
isi sync target reports view Displays information about a completed replication job that targeted the local cluster.
isi sync target reports subreports list Displays subreports about completed replication jobs targeting the local cluster.
isi sync target reports subreports view Displays a subreport about a completed replication job targeting the local cluster.
isi sync rules create Create a network traffic rule
isi sync rules create bandwidth 9:00-17:00 M-F 100 Creates a network traffic rule that limits bandwidth (100 KB per second from 9AM to 5PM weekdays).
isi sync rules create file_count 9:00-17:00 M-F 3 Limits the file-send rate to 3 files per second from 9:00 AM to 5:00 PM every weekday.
isi sync rules list Identify the ID of the performance rule.
isi sync rules view Displays information about a replication performance rule.
isi sync rules modify bw-0 –days X,S Modifies the performance rule with an ID of bw-0 to be enforced only on Saturday and Sunday.
isi sync rules delete { | –all | –type } [–force] [–verbose] Deletes a replication performance rule.
isi sync rules delete bw-0 Deletes a performance rule with an ID of bw-0.
isi sync rules modify bw-0 –enabled true Enables a performance rule with an ID of bw-0.
isi sync rules modify bw-0 –enabled false Disables a performance rule with an ID of bw-0.
isi sync reports list View a list of all replication reports.
isi sync reports view dailySync 2 Displays a replication report for dailySync.
isi sync reports subreports list dailySync 1 Displays subreports for dailySync.
isi sync reports subreports view dailySync 1 2 Displays a subreport for dailySync.
isi sync reports rotate [–verbose] Causes excess reports to be deleted immediately.
isi sync policies create Create a replication policy with SyncIQ.
isi sync policies create mypolicy sync /ifs/data/source Creates a policy that replicates the directory /ifs/data/source on the
  10.1.99.36 /ifs/data/target –schedule “Every Sunday at 12:00 AM”   source cluster to /ifs/data/target on target cluster 10.1.99.36 every
  –target-snapshot-archive on –target-snapshot-expiration 1Y   week. The command also creates archival snapshots on the target cluster
  –target-snapshot-pattern “%{PolicyName}-%{SrcCluster}-%Y-%m-%d   and creates a SyncIQ domain for /ifs/data/source.
NDMP
isi ndmp settings global modify –service=yes Enable NDMP backup.
isi ndmp settings global modify –dma=emc Configures OneFS to interact with EMC NetWorker.
isi ndmp settings global modify –service=no Disable NDMP backup.
isi ndmp settings global view View NDMP backup settings.
isi ndmp settings diagnostics modify Modifies NDMP diagnostics settings.
isi ndmp settings diagnostics view [–format {list | json}] Displays NDMP diagnostic settings.
isi ndmp users create NDMPuser –password=1234 Creates an NDMP user account called NDMPuser.
isi ndmp users modify NDMPuser –password=5678 Modifies the password of a user named NDMPuser.
isi ndmp users delete NDMPuser Deletes a user named NDMPuser.
isi ndmp users view View NDMP user accounts.
isi ndmp users view Displays information about the account for a specific user.
isi tape rescan –node=18 Detects devices on node 18.
isi tape rescan –reconcile Remove entries for devices and paths that have become inaccessible.
isi tape modify tape003 –new-name=tape005 Modify the name of an NDMP device entry.
isi tape delete –name=tape001 Disconnects tape001 from the cluster.
isi tape list –node=18 –tape List tape devices on node 18.
isi tape view Displays information about a tape or media changer device.
isi fc settings modify 5.1 –topology=ptp Configures port 1 on node 5 to support a point-to-point Fibre Channel topology.
isi fc settings modify 5.1 –state=enable | disable Enable or disable an NDMP backup port.
isi fc settings view 5.1 View Fibre Channel port settings for port 1 on node 5.
isi fc settings list Lists Fibre Channel port settings.
isi fc settings view Displays settings for a specific Fibre Channel port.
isi ndmp sessions list Retrieve the ID of the NDMP session that you want to end.
isi ndmp sessions delete View the status of NDMP sessions or terminate a session that is in progress.
isi ndmp sessions delete 4.36339 –force Ends an NDMP session with an ID of 4.36339.
isi ndmp sessions view View NDMP Sessions.
isi ndmp list View information about active NDMP sessions.
isi ndmp contexts list –type bre View NDMP restartable backup contexts that have been configured.
isi ndmp contexts view View detailed information about a specific restartable backup context.
isi ndmp contexts delete Delete a restartable backup context.
isi ndmp settings global modify Specify the number of restartable backup contexts that OneFS can retain, up to 1024.
isi ndmp settings global modify –bre_max_num_contexts=128 Modify max number of contexts.
isi ndmp settings variables modify Modify default NDMP variable settings.
isi ndmp settings variables list View the default NDMP settings for a path.
isi ndmp settings variables create Sets the default value for an NDMP environment variable for a given path.
isi ndmp dumpdates delete [–name ] Delete snapshots created for snapshot-based incremental backups.
isi ndmp dumpdates list View snapshots generated for snapshot-based incremental backups.
File Retention
isi worm cdate set Set the compliance clock.
isi worm cdate view View the current time of the compliance clock.
isi job jobs start DomainMark --root /ifs/data/smartlock --dm-type Worm Creates a SmartLock enterprise domain.
isi worm domains create Creates a SmartLock directory.
isi worm domains modify /ifs/data/SmartLock/prod_dir --default-retention 1Y Sets the default retention period to one year.
isi worm domains view View detailed information about a specific SmartLock directory.
isi worm domains modify /ifs/data/SmartLock/prod_dir --override-date 2014-06-01 Overrides the retention period expiration date of /ifs/data/SmartLock/prod_dir to June 1, 2014.
isi worm domains modify --privileged-delete Modify a SmartLock directory to allow privileged deletion.
isi worm files view /ifs/data/SmartLock/prod_dir/file Displays the WORM status of a file.
isi worm files delete Deletes a file committed to a WORM state.
isi worm domains list Displays a list of WORM directories.
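The retention flags above take whole-period values such as 1Y. Illustrative arithmetic only (not an isi command): how a --default-retention of one year maps to an expiry date for a file committed on a given day — the dates are hypothetical:

```shell
#!/bin/sh
# A file committed to WORM state on COMMIT_DATE with a 1-year default
# retention stays locked until the same calendar day the following year.
COMMIT_DATE="2014-06-01"
RETENTION_YEARS=1
# Year arithmetic only -- sufficient for whole-year retention periods.
EXPIRY="$(( ${COMMIT_DATE%%-*} + RETENTION_YEARS ))-${COMMIT_DATE#*-}"
echo "$EXPIRY"
```

An `--override-date` later than this computed date extends the lock, as in the 2014-06-01 example above.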
Protection Domains
isi job jobs start domainmark --root /ifs/data/media --dm-type SyncIQ Creates a SyncIQ domain for /ifs/data/media.
isi job jobs start domainmark --root /ifs/data/media --dm-type SyncIQ --delete Deletes the SyncIQ domain for /ifs/data/media.
Data-at-rest-encryption
isi_reformat_node Securely deletes the authentication keys on a node. To scrub an entire cluster, smartfail each node and run isi_reformat_node on the last node.
SmartQuotas
isi quota quotas create –help Information about the parameters and options that can be used.
isi quota quotas create Create an accounting quota.
isi quota quotas create /ifs/data/test_1 directory --advisory-threshold=10M --enforced=false Creates an advisory quota for the /test_1 directory.
isi job events list --job-type quotascan Verify that no QuotaScan jobs are in progress.
isi quota quotas list --help For information about the parameters and options that you can use.
isi quota quotas list --path=/ifs/data/quota_test_1 Finds all quotas that monitor the /ifs/data/quota_test_1 directory.
isi quota quotas list -v --path=/ifs/data/quota_test_2 --include-snapshots="yes" Provides current usage information for the root user.
isi quota reports list -v Lists all info in the quota report.
isi quota reports delete Deletes a specified report.
isi quota quotas delete /ifs/data/quota_test_2 directory Deletes the specified directory-type quota.
isi quota quotas modify /ifs/dir-1 user --linked=false --user=admin Unlinks a user quota.
isi_classic quota list --export Displays the quota configuration file as raw XML.
isi_classic quota import --from-file= Import quota settings in the form of a configuration file.
isi quota settings notifications modify --help Information about the parameters and options.
isi quota settings notifications modify advisory exceeded --action-alert=true Generate an alert when the advisory threshold is exceeded.
isi quota settings reports modify --schedule="Every 2 days" Creates a quota report schedule that runs every two days.
isi quota settings mappings create -v Creates a SmartQuotas email mapping rule.
isi quota settings mappings delete Deletes SmartQuotas email mapping rules.
isi quota settings mappings list Lists SmartQuotas email mapping rules.
isi quota settings mappings modify Modifies an existing SmartQuotas email mapping rule.
isi quota settings mappings view View a SmartQuotas email mapping rule.
isi quota reports create -v Creates an ad-hoc quota report.
isi quota quotas view Displays detailed properties of a single file system quota.
isi quota quotas notifications create /ifs/data/test_2 directory advisory exceeded --holdoff=10W Creates an advisory quota notification rule; specifies the length of time to wait before generating a notification.
isi quota quotas notifications delete --path Deletes a quota notification rule.
isi quota quotas notifications disable Disables all quota notifications.
isi quota quotas notifications list Displays a list of quota notification rules.
isi quota quotas notifications modify Modifies a notification rule for a quota.
isi quota quotas notifications view Displays the properties of a quota notification rule.
isi quota quotas notifications clear Clears rules for a quota and uses system notification settings.
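Threshold flags such as --advisory-threshold=10M accept K, M, and G suffixes. A small helper (hypothetical, not part of OneFS, and assuming binary 1024-based multiples) to convert a suffixed size to bytes when scripting quota creation or checking reported usage:

```shell
#!/bin/sh
# Convert a size with an optional K/M/G suffix to bytes (binary multiples).
to_bytes() {
  n="${1%[KMG]}"                          # numeric part without the suffix
  case "$1" in
    *K) echo $((n * 1024)) ;;
    *M) echo $((n * 1024 * 1024)) ;;
    *G) echo $((n * 1024 * 1024 * 1024)) ;;
    *)  echo "$1" ;;                      # no suffix: already bytes
  esac
}
to_bytes 10M   # the advisory threshold used in the example above
```

This makes it easy to compare a quota threshold against byte counts from other tools.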
Storage Pools
isi storagepool compatibilities class active create Create a node class compatibility.
isi storagepool compatibilities class active create N400 N410 Creates a compatibility between Isilon NL400 and NL410 nodes.
isi storagepool compatibilities class active list Lists active compatibilities and their ID numbers.
isi storagepool compatibilities class available list Lists node class compatibilities that are available, but not yet created.
isi storagepool compatibilities class active delete 9 Deletes a node class compatibility with an ID number of 9.
isi storagepool compatibilities class active view Displays the details of an active node class compatibility.
isi storagepool compatibilities ssd active create S200 Creates an SSD class compatibility for Isilon S200 nodes that have different capacity SSDs.
isi storagepool compatibilities ssd active delete 1 Deletes an SSD compatibility with an ID number of 1.
isi storagepool compatibilities ssd active list Lists SSD compatibilities that have been created.
isi storagepool compatibilities ssd active view Displays the details of an active SSD compatibility.
isi storagepool compatibilities ssd available list Lists SSD compatibilities that are available, but not yet created.
isi storagepool nodepools create Create a node pool manually
isi storagepool nodepools create PROJECT-TEST --lnns 1,2,3 Creates a node pool by specifying the LNNs of three nodes to be included.
isi storagepool nodepools delete Deletes a node pool and autoprovisions the affected nodes into the appropriate node pool.
isi storagepool nodepools modify PROJECT-TEST --lnns 3-4,11 Adds nodes with the LNNs (logical node numbers) of 3, 4, and 11 to an existing node pool.
isi storagepool nodepools modify PROJECT-TEST --set-name PROD-PROJECT --protection-policy +2:1 Changes the name and protection policy of a node pool.
isi storagepool nodepools modify PROD_ARCHIVE --remove-lnns 7,9 Removes two nodes, identified by their LNNs.
isi storagepool nodepools modify PROD-PROJECT --tier PROD_ARCHIVE Adds a node pool named PROD-PROJECT to a tier.
isi storagepool nodepools list Displays a list of node pools.
isi storagepool nodepools view Displays details for a node pool.
isi storagepool settings modify Modify default storage pool settings.
isi storagepool settings view Displays global SmartPools settings.
isi storagepool settings modify --ssd-l3-cache-default-enabled yes Sets L3 cache enabled as the default for new node pools that are added.
isi storagepool nodepools modify hq_datastore --l3 true Enables L3 cache on a node pool named hq_datastore.
isi storagepool nodepools create Creates a manually managed node pool (use with the assistance of technical support personnel).
isi storagepool tiers create PROD_ARCHIVE --children hq_datastore1 --children hq_datastore2 Creates a tier named PROD_ARCHIVE, and adds node pools to the tier.
isi storagepool tiers modify PROD_ARCHIVE --set-name ARCHIVE_TEST Renames a tier from PROD_ARCHIVE to ARCHIVE_TEST.
isi storagepool tiers delete ARCHIVE_TEST Deletes a tier named ARCHIVE_TEST.
isi storagepool tiers list Displays a list of tiers.
isi storagepool tiers view Displays details for a tier.
isi filepool policies create Create a file pool policy.
isi filepool policies delete Deletes a file pool policy.
isi filepool policies list View a list of available file pool policies.
isi filepool policies view View the current settings of a file pool policy.
isi filepool policies view OLD_ARCHIVE Displays the settings of a file pool policy named OLD_ARCHIVE.
isi filepool default-policy view Display the current default file pool policy settings.
isi filepool default-policy modify Change the default file pool policy settings.
isi filepool policies modify PERFORMANCE --apply-order 1 Changes the priority of a file pool policy named PERFORMANCE.
isi filepool policies delete PROD_ARCHIVE Deletes a file pool policy named PROD_ARCHIVE.
isi storagepool health --verbose Displays a tabular description of storage pool health.
isi storagepool list Displays node pools and tiers in the cluster.
isi filepool apply Applies all file pool policies to the specified file or directory path.
isi filepool policies delete Delete a custom file pool policy. The default file pool policy cannot be deleted.
isi filepool templates list Lists available file pool policy templates.
isi filepool templates view View the detailed settings in a file pool policy template.
CloudPools
CloudPools can seamlessly connect to EMC-based cloud storage systems and to popular third-party providers such as Amazon S3 and Microsoft Azure.
isi cloud pools create cp_az azure csa_azure1 --vendor Microsoft Creates an Azure-based CloudPool.
isi cloud pools view cp_az View the result of this operation on the CloudPool that you created.
isi cloud pools list View a list of CloudPools that have been created on your cluster.
isi cloud pools view cah_s3_cp Information on the CloudPool named cah_s3_cp.
isi cloud pools modify c_pool_azure --remove-accounts c_acct2 --description "Secondary archive" Modifies a CloudPool named c_pool_azure, removing its cloud storage account.
isi cloud pools delete c_pool_azure Deletes the CloudPool named c_pool_azure.
isi cloud archive Archive specific files directly to the cloud.
isi cloud archive /ifs/data/shared/images/*.* --recursive yes Specifies a directory and all of its subdirectories and files to be archived.
isi cloud access add Adds cloud write access to the cluster.
isi cloud access list List the GUIDs of clusters that are accessible for SyncIQ failover or restore operations.
isi cloud access add ac7dd991261e33e382240801204c9a66 Enables a secondary cluster, identified by GUID, to have write access to cloud data.
isi cloud access remove Remove previously granted access to SmartLink files.
isi cloud access view View the details of a cluster with, or eligible for, write access to cloud data.
isi cloud jobs list List all CloudPools jobs.
isi cloud jobs view View information about a CloudPools job.
isi cloud jobs pause Pause a running CloudPools job.
isi cloud jobs resume Resume a cloud job that has been paused.
isi cloud jobs cancel Cancel a running CloudPools job.
isi cloud jobs files list Displays the list of files matched by the specified CloudPools job.
isi cloud settings view View the top-level settings for CloudPools.
isi cloud settings modify Modify default CloudPools settings.
isi cloud settings modify --default-archive-snapshot-files=no Disables archival of files that have snapshot versions.
isi cloud settings modify --default-encryption-enabled=yes Enables encryption of cloud data.
isi cloud settings regenerate-encryption-key --verbose Generates a new master encryption key.
isi cloud recall [--recursive {yes | no}] [--verbose] Specify one or more files to be recalled from the cloud.
isi cloud restore_coi Restores the cloud object index (COI) for a cloud storage account on the cluster.
isi cloud settings regenerate-encryption-key --verbose Generates a new master encryption key for data to be archived to the cloud.
isi cloud accounts create Create a cloud storage account.
isi cloud accounts delete Delete a cloud storage account.
isi cloud accounts list List all cloud storage accounts created on your cluster.
isi cloud accounts view CloudAcct3 Displays account information for the CloudAcct3 account.
isi cloud accounts modify CloudAcct3 --name=CloudAcct5 Changes the name of the cloud storage account CloudAcct3 to CloudAcct5.
isi cloud accounts delete OldRecords --acknowledge yes Deletes the cloud storage account OldRecords.
isi cloud accounts create --name=c-acct1 --type=azure --uri=https://admin2.blob.core.windows.net --account-username=adm1 Creates a Microsoft Azure cloud storage account.
System Jobs
isi job jobs start Start a job manually.
isi job jobs start Collect --policy MEDIUM --priority 2 Runs the Collect job with a stronger impact policy and a higher priority.
isi job jobs start multiscan --priority 8 --policy high Starts a MultiScan job with a priority of 8 and a high impact policy.
isi job jobs pause 7 Pauses a job with an ID of 7.
isi job jobs pause Collect Pauses an active job.
isi job jobs list --state paused_user Lists jobs that have been manually paused.
isi job jobs list --format csv > /ifs/data/joblist.csv Outputs a CSV-formatted list of jobs to a file in the /ifs/data path.
isi job jobs list Displays information about active jobs.
isi job jobs view Displays information about a running or queued job, including the state, impact policy, priority, and schedule.
isi job jobs modify Changes the priority level or impact policy of a queued, running, or paused job.
isi job jobs pause Collect Pauses the Collect job.
isi job jobs modify 7 --priority 3 --policy medium Updates the priority and impact policy of an active job.
isi job jobs modify Collect --priority 3 --policy medium Job type can be specified instead of the job ID.
isi job jobs resume 7 Resumes a job with the ID number 7.
isi job jobs resume Collect Job type can be specified instead of the job id.
isi job jobs cancel 7 Cancels a job with the ID number 7.
isi job jobs cancel Collect Job type can be specified instead of the job id.
isi job types modify mediascan --priority 2 --policy medium Modifies the default priority level and impact level for the MediaScan job type.
isi job types modify mediascan --schedule 'every Saturday at 09:00' --force Schedules the MediaScan job to run every Saturday morning.
isi job types modify mediascan --clear-schedule --force Removes the schedule for a job type that is scheduled.
isi job types list Displays a list of job types and default settings.
isi job types view Displays the parameters of a specific job type.
isi job jobs list View active jobs.
isi job events list --job-type multiscan Displays the activity of the MultiScan job type.
isi job events list --begin 2013-09-16 View all jobs within a specific time frame.
isi job events list --begin 2013-09-15 --end 2013-09-16 > /ifs/data/report1.txt Outputs job history for a specific period to a file.
isi job policies create MY_POLICY --impact medium --begin 'Saturday 00:00' --end 'Sunday 23:59' Creates a custom policy defining a specific time frame and impact level.
isi job policies view MY_POLICY Displays the impact policy settings of the custom impact policy MY_POLICY.
isi job policies modify MY_POLICY --reset-intervals Resets the policy interval settings to the base defaults: low impact and anytime operation.
isi job policies delete MY_POLICY Deletes a custom impact policy named MY_POLICY.
isi job policies list --verbose Displays the names and descriptions of job impact policies.
isi job statistics view --job-id 857 View statistics for a job in progress.
isi job statistics list Displays a statistical summary of active jobs in the Job Engine queue.
isi job reports view 857 Displays the report of a Collect job with an ID of 857.
isi job reports list Displays information about successful job operations.
isi job status [--verbose] Displays a summary of active, completed, and failed jobs.
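The CSV export from the job list is convenient for scripting against the Job Engine. A sketch that finds user-paused jobs so they can be resumed — the `isi` output is mocked with a hypothetical column layout (not taken from a live cluster) so the parsing can be tested off-cluster:

```shell
#!/bin/sh
# Mocked "isi job jobs list --format csv" output; remove on a real cluster.
isi() {
  cat <<'EOF'
ID,Type,State,Impact,Priority
7,Collect,paused_user,Low,4
12,MultiScan,running,Medium,8
EOF
}

# Print the IDs of jobs whose State column is paused_user.
paused_job_ids() {
  isi job jobs list --format csv | awk -F, 'NR > 1 && $3 == "paused_user" {print $1}'
}

PAUSED="$(paused_job_ids)"
echo "$PAUSED"
# Each ID could then be resumed with: isi job jobs resume <id>
```

The same approach works for any of the list commands that support --format csv.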
Networking
Run the isi config command; the command-line prompt changes to indicate that you are in the isi config subsystem. Run the commit command to complete the changes.
iprange int-a 192.168.101.10-192.168.101.20 Adds an IP range to the int-a internal network.
deliprange int-a 192.168.101.15-192.168.101.20 Deletes an existing IP address range from the int-a internal network.
netmask int-a 255.255.255.0 Changes the int-a internal network netmask.
netmask int-b 255.255.255.0 Changes the int-b internal network netmask.
iprange int-b 192.168.101.21-192.168.101.30 Adds an IP range to the int-b internal network.
iprange failover 192.168.101.31-192.168.101.40 Adds an IP range to the internal failover network.
interface int-b enable Specifies the interface name as int-b and enables it.
interface int-b disable Specifies the int-b interface and disables it.
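The iprange and deliprange arguments above are "start-end" IPv4 ranges. A small pre-flight check (a hypothetical helper, not an isi command) that a range string is well formed before it is pasted into the isi config subsystem:

```shell
#!/bin/sh
# Return success if the argument looks like an IPv4 "start-end" range.
is_ip_range() {
  echo "$1" | grep -Eq \
    '^([0-9]{1,3}\.){3}[0-9]{1,3}-([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

is_ip_range "192.168.101.10-192.168.101.20" && echo valid
```

This is a format check only; it does not verify that the addresses fall inside the internal network's netmask.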
isi network groupnet create Create a groupnet and configure DNS client settings.
isi network groupnet create ProdGroupNet --dns-servers=192.0.2.0 --dns-cache-enabled=true Creates a groupnet named ProdGroupNet that supports one DNS server and enables DNS caching.
isi network groupnet modify Modify groupnet attributes.
isi network groupnet modify ProdGroupNet --dns-search=dat.corp.com,stor.corp.com Modifies ProdGroupNet to enable DNS search on two suffixes.
isi network groupnet modify ProdGroupNet --add-dns-servers=192.0.2.1 --dns-options=rotate Modifies ProdGroupNet to support a second DNS server and to enable rotation through the configured DNS resolvers.
isi network groupnet delete Delete a groupnet.
isi network groupnet delete ProdGroupNet Deletes a groupnet named ProdGroupNet.
isi network groupnets list Retrieve and sort a list of all groupnets on the system.
isi network groupnets list –sort=id –descending Sorts the list of groupnets by ID in descending order.
isi network groupnets view ProdGroupNet Displays the details of a groupnet named ProdGroupNet.
isi network groupnets modify Modifies a groupnet which defines the DNS settings applied to services that connect through the groupnet.
isi network subnets create Add a subnet to the external network of an EMC Isilon cluster.
isi network subnets create ProdGroupNet.subnetX ipv4 255.255.255.0 Creates a subnet associated with ProdGroupNet.
isi network subnets list Identify the ID of the external subnet.
isi network subnets modify ProdGroupNet.subnetX --name=subnet5 Changes the name of subnetX under ProdGroupNet to subnet5.
isi network subnets modify g1.sbet3 --mtu=1500 --gateway=198.162.205.10 --gateway-priority=1 Sets the MTU to 1500, the gateway to 198.162.205.10, and the gateway priority to 1.
isi network subnets delete ProdGroupNet.subnetX Deletes subnetX under ProdGroupNet.
isi network subnets view View the details of a specific subnet.
isi network subnets view ProdGroupNet.subnetX Displays details for subnetX under ProdGroupNet.
isi network subnets modify ProdGroupNet.subnetX --sc-service-addr=198.11.100.15 Specifies the SmartConnect service IP address on subnetX under ProdGroupNet.
isi networks modify subnet Enable or disable VLAN tagging on the external subnet.
isi network subnets modify ProdGroupNet.subnetX --vlan-enabled=true --vlan-id=256 Enables VLAN tagging on subnetX under ProdGroupNet and sets the VLAN ID to 256.
isi network subnets modify ProdGroupNet.subnetX --vlan-enabled=false Disables VLAN tagging on subnetX under ProdGroupNet.
isi network subnets modify ProdGroupNet.subnetX --add-dsr-addrs=198.11.100.20 Adds a DSR address to subnetX under ProdGroupNet.
isi network subnets modify ProdGroupNet.subnetX --remove-dsr-addrs=198.11.100.20 Removes a DSR address from subnetX under ProdGroupNet.
isi network pools create Create an IP address pool.
isi network pools create ProdGroupNet.subnetX.ProdPool1 Creates a pool named ProdPool1 and assigns it to subnetX under ProdGroupNet.
isi network pools create ProdGroupNet.subnetX.ProdPool1 --access-zone=zoneB Creates a pool named ProdPool1, assigns it to ProdGroupNet.subnetX, and specifies zoneB as the access zone.
isi networks modify pool Modify IP address pools to update pool settings.
isi network pools modify ProdGroupNet.subnetX.pool3 --name=ProdPool1 Changes the name of the pool from pool3 to ProdPool1.
isi networks delete pool Delete an IP address pool that you no longer need.
isi network pools delete ProdGroupNet.subnetX.ProdPool1 Deletes the pool named ProdPool1 from ProdGroupNet.subnetX.
isi network pools list View all IP address pools within a groupnet or subnet.
isi network pools list ProdGroupNet.subnetX Displays all IP address pools under ProdGroupNet.subnetX.
isi network pools view ProdGroupNet.subnetX.ProdPool1 Displays the setting details of ProdPool1 under ProdGroupNet.subnetX.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --add-ranges=192.0.102.12-192.0.102.22 Adds an address range to ProdPool1 under ProdGroupNet.subnetX.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --remove-ranges=192.0.102.12-192.0.102.14 Deletes an address range from ProdPool1.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --alloc-method=dynamic Specifies dynamic distribution of IP addresses in ProdPool1 under ProdGroupNet.subnetX.
isi networks modify pool .. Configures a SmartConnect DNS zone.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --sc-dns-zone=www.corp.com Specifies a SmartConnect DNS zone in ProdPool1 under subnetX and ProdGroupNet.
isi networks modify pool Configures a SmartConnect DNS zone alias.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --add-sc-dns-zone-aliases=data.corp.com Specifies a SmartConnect DNS zone alias in ProdPool1 under subnetX and ProdGroupNet.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --remove-dns-zone-aliases=data.corp.com Removes a SmartConnect DNS zone alias from ProdPool1 under subnetX and ProdGroupNet.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --sc-subnet=subnet0 Specifies subnet0 as the SmartConnect service subnet of ProdPool1 under subnetX and ProdGroupNet.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --add-ifaces=1-3:ext-1 Modifies ProdPool1 under ProdGroupNet.subnetX to add the first external network interface on nodes 1 through 3.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --remove-ifaces=3:ext-1 Removes the first network interface on node 3 from ProdPool1.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --aggregation-mode=fec Modifies ProdPool1 under ProdGroupNet.subnetX to specify FEC as the aggregation mode for all aggregated interfaces in the pool.
isi network pools modify Pnet1.subnetX.ProdPool1 --add-ifaces=1:ext-agg --aggregation-mode=lacp Modifies ProdPool1 under Pnet1.subnetX to add ext-agg on node 1 and specify LACP as the aggregation mode.
isi network pools modify Pnet1.snet3.ProdPool1 --add-static-routes=192.168.100.0/24-192.168.205.2 Adds an IPv4 static route to ProdPool1 and assigns the route to all network interfaces that are members of the pool.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --sc-connect-policy=conn_count Specifies a connection balancing policy based on connection count in ProdPool1 under subnetX and ProdGroupNet.
isi network pools modify groupnet0.subnetX.ProdPool1 --sc-failover-policy=cpu_usage Specifies an IP failover policy based on CPU usage in ProdPool1 under subnetX and groupnet0.
isi network pools modify ProdGroupNet.subnetX.ProdPool1 --rebalance-policy=manual Specifies manual rebalancing of IP addresses in ProdPool1 under ProdGroupNet.subnetX.
isi network pools sc-suspend-nodes .. Suspends DNS query responses for a node.
isi network pools rebalance-ips .. Manually rebalances a specific IP address pool.
isi network pools rebalance-ips ProdGroupNet.subnetX.ProdPool1 Rebalances the IP addresses in ProdPool1.
isi network pools sc-suspend-nodes ProdGroupNet.subnetX.ProdPool1 3 Suspends DNS query responses on node 3.
isi network pools sc-resume-nodes Resumes DNS query responses for an IP address pool.
isi network pools sc-resume-nodes ProdGroupNet.subnetX.ProdPool1 3 Resumes DNS query responses on node 3.
isi network external view Displays configuration settings for the external network.
isi network external modify Modifies global external network settings on the EMC Isilon cluster.
isi network external modify --sc-balance-delay Specifies a rebalance delay (in seconds) that passes after a qualifying event prior to an automatic rebalance.
isi network sc-rebalance-all Rebalances all IP address pools.
isi networks modify pool .. Configure which network interfaces are assigned to an IP address pool.
isi network interfaces list Retrieve and sort a list of all external network interfaces on the EMC Isilon cluster.
isi network interfaces list --nodes=1,3 Displays interfaces only on nodes 1 and 3.
isi network rules create … Creates a node provisioning rule.
isi network rules create ProdGroupNet.subnetX.ProdPool1.rule7 --iface=ext-1 --node-type=accelerator Creates a rule (rule7) that assigns the first external network interface on each new accelerator node to ProdGroupNet.subnetX.ProdPool1.
isi network rules modify … Modifies node provisioning rule settings.
isi network rules modify ProdGroupNet.subnetX.ProdPool1.rule7 --name=rule7accelerator Changes the name of rule7 to rule7accelerator.
isi network rules modify ProdGroupNet.subnetX.ProdPool1.rule7 --node-type=backup-accelerator Changes rule7 so that it applies only to backup accelerator nodes.
isi networks delete rule … Delete a node provisioning rule that is no longer needed.
isi network rules delete ProdGroupNet.subnetX.ProdPool1.rule7 Deletes rule7 from ProdPool1.
isi network rules list Lists all of the provisioning rules in the system.
isi network rules list --groupnet=ProdGroupNet Lists rules in ProdGroupNet.
isi network rules view ProdGroupNet.subnetX.ProdPool1.rule7 Displays the setting details of rule7 under ProdGroupNet.subnetX.ProdPool1.
isi network external modify --sbr=true Enables source-based routing on the cluster.
isi network external modify --sbr=false Disables source-based routing on the cluster.
isi network dnscache flush [--verbose] Simultaneously flushes the DNS cache of each groupnet that has enabled DNS caching.
isi network dnscache modify Modifies global DNS cache settings for each DNS cache that is enabled per groupnet.
isi network dnscache view Displays DNS cache settings.
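Nearly every command in this section addresses network objects through a dotted hierarchy: groupnet.subnet.pool (and .rule for provisioning rules). A tiny POSIX sketch (hypothetical helper, not an isi command) that splits a fully qualified pool ID into its parts for use in scripts:

```shell
#!/bin/sh
# Split a qualified pool ID (groupnet.subnet.pool) into its components.
pool_id="ProdGroupNet.subnetX.ProdPool1"
groupnet="${pool_id%%.*}"          # text before the first dot
pool="${pool_id##*.}"              # text after the last dot
rest="${pool_id#*.}"               # strip the groupnet
subnet="${rest%%.*}"               # middle component
echo "$groupnet / $subnet / $pool"
```

Keeping the three components in separate variables makes it easy to build sibling IDs, for example a rule ID under the same pool.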
Hadoop
isi hdfs settings modify The HDFS service (enabled by default after you activate an HDFS license) can be enabled or disabled per access zone.
isi hdfs settings modify --service=yes --zone=DevZone Enables the HDFS service in DevZone.
isi hdfs settings modify --service=no --zone=DevZone Disables the HDFS service in DevZone.
isi hdfs settings modify Configure HDFS service settings in each zone to improve performance for HDFS workflows.
isi hdfs settings modify --default-block-size=256K --zone=DevZone Sets the block size to 256 KB in the DevZone access zone (suffixes K, M, and G are allowed).
isi hdfs settings modify --default-checksum-type=crc32 --zone=DevZone Sets the checksum type to crc32 in the DevZone access zone.
isi hdfs settings view View HDFS settings in an access zone.
isi hdfs settings view --zone=ProdZone Displays the HDFS settings in the ProdZone access zone.
isi hdfs log-level modify Sets the default logging level of HDFS service events.
isi hdfs log-level modify --set=trace Sets the HDFS log level to trace on the node.
isi hdfs log-level view View the default logging level of HDFS service events.
isi hdfs settings modify --root-directory=/ifs/data/hadoop --zone=DevZone Grants access to the /ifs/data/hadoop directory.
isi hdfs settings modify --authentication-mode=simple_only --zone=DevZone Clients connecting to DevZone must be identified through the simple authentication method.
isi zone zones modify DevZone --authentication-mode=kerberos_only Clients connecting to DevZone must be identified through the Kerberos authentication method.
isi hdfs settings modify --webhdfs-enabled=yes --zone=DevZone Enables WebHDFS in DevZone.
isi hdfs settings modify --webhdfs-enabled=no --zone=DevZone Disables WebHDFS in DevZone.
isi hdfs proxyusers create Creates a proxy user.
isi hdfs proxyusers create hadoop-HDPUser --zone=ProdZone Designates hadoop-HDPUser in ProdZone as a new proxy user.
isi hdfs proxyusers modify Modifies the list of members that a proxy user securely impersonates.
isi hdfs proxyusers delete Deletes a proxy user from an access zone.
isi hdfs proxyusers delete hadoop-HDPUser --zone=ProdZone Deletes the proxy user hadoop-HDPUser from the ProdZone access zone.
isi hdfs proxyusers members list Lists the members of a proxy user.
isi hdfs proxyusers list --zone=ProdZone Displays a list of all proxy users configured in ProdZone.
isi hdfs proxyusers view View the configuration details for a specific proxy user.
isi hdfs proxyusers view hadoop-HDPUser --zone=ProdZone Displays the configuration details for the hadoop-HDPUser.
isi hdfs racks create Create a virtual HDFS rack of nodes.
isi hdfs racks create /hdfs-rack2 --zone=TestZone Creates a rack named /hdfs-rack2 in the TestZone access zone.
isi hdfs racks modify Modify the settings of a virtual HDFS rack.
isi hdfs racks modify /hdfs-rack2 --new-name=/hdfs-rack5 --zone=DevZone Renames a rack named /hdfs-rack2 in the DevZone access zone to /hdfs-rack5.
isi hdfs racks delete Delete a virtual HDFS rack from an access zone.
isi hdfs racks delete /hdfs-rack2 --zone=ProdZone Deletes the virtual HDFS rack named /hdfs-rack2 from the ProdZone access zone.
isi hdfs racks list View a list of all virtual HDFS racks in an access zone.
isi hdfs racks list --zone=ProdZone Lists all HDFS racks configured in the ProdZone access zone.
isi hdfs racks view /hdfs-rack2 --zone=ProdZone View the setting details for a specific virtual HDFS rack.
ESRS Commands
isi remotesupport connectemc modify Enable and configure ESRS.
isi remotesupport connectemc modify --enabled=no Disables ESRS.
isi remotesupport connectemc view View ESRS Config.
Antivirus
isi antivirus settings modify Target specific files for scans by antivirus policies.
isi antivirus settings modify --glob-filters-enabled true --glob-filters .txt Configures OneFS to scan only files with the .txt extension.
isi antivirus settings modify --scan-on-close true --path-prefixes /ifs/data/media Configures OneFS to scan files and directories under /ifs/data/media when they are closed.
isi antivirus settings modify --repair true --quarantine true Configures OneFS and ICAP servers to attempt to repair infected files and quarantine files that cannot be repaired.
isi antivirus settings modify --report-expiry 12w Configures OneFS to delete antivirus reports older than 12 weeks.
isi antivirus settings modify --service enable Enables antivirus scanning.
isi antivirus settings modify --service disable Disables antivirus scanning.
isi antivirus servers create Add and connect to an ICAP server.
isi antivirus servers create icap://192.168.1.100 --enabled yes Adds and connects to an ICAP server at 192.168.1.100.
isi antivirus servers modify icap://192.168.1.100 --enabled no Temporarily disconnects from the ICAP server.
isi antivirus servers modify icap://192.168.1.100 --enabled yes Reconnects to an ICAP server.
isi antivirus servers delete icap://192.168.1.100 Removes an ICAP server with an ID of icap://192.168.1.100.
isi antivirus policies create Create a policy that causes specific files to be scanned for viruses each time the policy is run.
isi antivirus policies create HolidayVirusScan --paths /ifs/data --schedule "Every Friday at 12:00 PM" Creates an antivirus policy that scans /ifs/data every Friday at 12:00 PM.
isi antivirus policies modify HolidayVirusScan --schedule "Every Saturday at 12:00 PM" Modifies the HolidayVirusScan policy to run on Saturday at 12:00 PM.
isi antivirus policies delete HolidayVirusScan Deletes a policy called HolidayVirusScan.
isi antivirus policies modify HolidayVirusScan --enabled yes Enables a policy called HolidayVirusScan.
isi antivirus policies modify HolidayVirusScan --enabled no Disables a policy called HolidayVirusScan.
isi antivirus policies list View antivirus policies.
isi antivirus scan Manually scan an individual file for viruses.
isi antivirus scan /ifs/data/virus_file Scans the /ifs/data/virus_file file for viruses.
isi antivirus quarantine Quarantine a file to prevent the file from being accessed by users.
isi antivirus quarantine /ifs/data/badFile.txt Quarantines /ifs/data/badFile.txt.
isi antivirus scan /ifs/data/virus_file Scans /ifs/data/virus_file.
isi antivirus release /ifs/data/badFile.txt Removes /ifs/data/badFile.txt from quarantine.
isi antivirus reports threats list View files that have been identified as threats by an ICAP server.
isi antivirus reports scans list View antivirus reports.
isi event events list View events that relate to antivirus activity.
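The --glob-filters setting above restricts scanning to matching file names. A client-side sketch of that matching logic (the real filtering happens inside OneFS; the helper name and behavior here are assumptions for illustration):

```shell
#!/bin/sh
# Return success if a path ends with the given filter suffix,
# mimicking a glob filter such as ".txt".
matches_glob_filter() {
  case "$1" in
    *"$2") return 0 ;;
    *)     return 1 ;;
  esac
}

matches_glob_filter "/ifs/data/notes.txt" ".txt" && echo scan
matches_glob_filter "/ifs/data/movie.mp4" ".txt" || echo skip
```

This is useful when deciding which filter set covers the file types you actually need scanned.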
Event Commands
isi event groups list Identify the group ID of the event group that you want to view.
isi event groups view View the details of a specific group.
isi event alerts list Identify the alert ID of the alert that you want to view.
isi event alerts delete Deletes an alert.
isi event alerts view NewExternal View the details of a specific alert, the name of the alert is case-sensitive.
isi event channels list Identify the name of the channel that you want to view
isi event channels view Support View the details of a channel
isi event settings view View your storage and maintenance settings.
isi event test create “Test message” Manually generate a test alert.
isi event settings modify Change the frequency that a heartbeat event is generated.
isi event alerts create Hardware NEW_EVENTS --channel RemoteSupport This command creates an alert named Hardware, sets the alert condition to
  NEW_EVENTS, and sets the channel that will broadcast the event as RemoteSupport
Isilon Technical Support Commands
isi_auth_expert
isi_bootdisk_finish
isi_bootdisk_provider_dev
isi_bootdisk_status
isi_bootdisk_unlock
isi_checkjournal
isi_clean_idmap
isi_client_stats
isi_cpr
isi_cto_update
isi_disk_firmware_reboot
isi_dmi_info
isi_dmilog
isi_dongle_sync
isi_drivenum
isi_dsp_install
isi_dumpjournal
isi_eth_mixer_d
isi_evaluate_provision_drive
isi_fcb_vpd_tool
isi_flexnet_info
isi_flush
isi_for_array
isi_fputil
isi_gather_info
isi_gather_auth_info
isi_gather_cluster_info
isi_gconfig
isi_get_itrace
isi_get_profile
isi_hangdump
isi_hw_check
isi_hw_status
isi_ib_bug_info
isi_ib_fw
isi_ib_info
isi_ilog
isi_imdd_status
isi_inventory_tool
isi_ipmicmc
isi_job_d
isi_kill_busy
isi_km_diag
isi_lid_d
isi_linmap_mod
isi_logstore
isi_lsiexputil
isi_make_abr
isi_mcp
isi_mps_fw_status
isi_netlogger
isi_nodes
isi_ntp_config
isi_ovt_check
isi_patch_d
isi_promptsupport
isi_radish
isi_rbm_ping
isi_repstate_mod
isi_restill
isi_rnvutil
isi_sasphymon
isi_save_itrace
isi_savecore
isi_sed
isi_send_abr
isi_smbios
isi_stats_tool
isi_transform_tool
isi_ufp
isi_umount_ifs
isi_update_cto
isi_update_serialno
isi_vitutil
isi_vol_copy
isi_vol_copy_vnx

Using isi_vol_copy_vnx for VNX to Isilon data migration


For most data migrations from VNX to Isilon, EMC recommends the OneFS migration tool isi_vol_copy_vnx. It is often more efficient than host-based tools (such as EMCopy and Robocopy) because the performance of a host-based tool depends on the network connectivity of the host, while isi_vol_copy_vnx depends only on the network connection between the source device and the Isilon cluster. Below is a basic outline of the syntax, the required steps, and a few troubleshooting tips.

You might consider migrating data with a host-based tool if one or more of the following conditions apply to your migration:

  • The source device and Isilon cluster are on separate networks.
  • Security restrictions prevent the source device and Isilon cluster from communicating directly.

Command Syntax:

isi_vol_copy_vnx --help

The source must contain a source host, a colon, and then the absolute source path name.

 isi_vol_copy_vnx <src_filer>:<src_dir> <dest_dir>
                 [-sa user: | user:password]
                 [-sport ndmp_src_port]
                 [-dport ndmp_data_port]
                 [-full | -incr] [-level_based]
                 [-dhost dest_ip_addr]
                 [-no_acl]

 isi_vol_copy_vnx -list [migration-id] | [[-detail] [-state=<state>] [-destination=<pathname>]]
 isi_vol_copy_vnx -cleanup <migration-id> [-everything [-noprompt]]
 isi_vol_copy_vnx -get_config
 isi_vol_copy_vnx -set_config <name>=<value>
 isi_vol_copy_vnx -h | -help
 Defaults:
   src_auth_user       = root
   src_auth_password   =
   ndmp_src_port       = 0  (0 means NDMP default, usually 10000)
   ndmp_data_port      = any
   dest_ip_addr        = none

Note: This tool uses NDMP to transfer the data from the source VNX to the Isilon.

Migration Steps:

  1. Configure NDMP User

Create a new NDMP user on the source VNX. Log in to the control station and run the following command:

/nas/sbin/server_user -add <new_username> -ndmp_md5 -passwd

Select the defaults when prompted and be sure to make note of the password.

  2. Determine the absolute path of your filesystems and shares

If you’re using virtual data movers (VDMs), the root path of your file system changes.  Issue the following command to review your file systems and mount paths:

server_mount ALL

Note the specific path for the file system that is targeted for migration. When a VDM is in use, the path will be similar to this:

FILESYSTEM1 on /root_vdm_1/FILESYSTEM1 uxfs,perm,rw

In this case the path will be /root_vdm_1/FILESYSTEM1, which will be used for the source path in the isi_vol_copy_vnx command.

    3. Determine the target Isilon Data Location

Determine the destination location under /ifs/data on the Isilon where the data will be migrated.  If the destination folder doesn’t exist on the Isilon, the tool will create it with the same NTFS permissions as the source.  Build the command with the following syntax:

isi_vol_copy_vnx <datamoverIP>:<source_path> <target Isilon path> -sa : <-full or -incr>

isi_vol_copy_vnx 10.10.10.10:/root_vdm_1/FILESYSTEM1 /ifs/data/FILESYSTEM1 -sa ndmpuser1: -full
  4. Migrate the Data

The command outlined above runs a full copy using the ndmpuser1 account and prompts for a password, so the password does not have to appear in plain text. The password can instead be embedded in the command using the -sa user:password syntax (the username must still be followed by a colon), for example with a hypothetical password:

isi_vol_copy_vnx 10.10.10.10:/root_vdm_1/FILESYSTEM1 /ifs/data/FILESYSTEM1 -sa ndmpuser1:MyPassword -full

If successful, the message “msg SnapSure file system creation succeeds” will appear, which means the NDMP session created a checkpoint successfully and is starting to copy data from that checkpoint.

Note that this migrates data only, not shares.  Sharedupe can be utilized for the shares, or the CIFS shares/NFS exports can be manually re-created.  It is recommended that any other data migrations on the source VNX be disabled prior to the copy so that you don’t run into performance issues.

Caveats:

  • There is no bandwidth throttling option with this command; it will consume all available bandwidth.
  • Isilon does not support historical SIDs in OneFS versions 8.0.0 and earlier, which may result in permission issues post-migration due to the inability to resolve historical SIDs from other platforms (see KB468562).  If SID history is in use on the source, this is not the proper tool.  Per the comments section, note that OneFS does support Security Identifier (SID) history in releases 8.0.1 and later, and in 8.0.0.3 and later (see the latest docu44518 Isilon Supportability and Compatibility Guide).
  • If fs_dedupe is enabled on the Celerra or VNX, you will need to change the backup threshold to zero for each file system so that the full file is sent over NDMP rather than the compressed or deduplicated version.  Note that there is a risk of inflating existing backup sets if they are also done over NDMP.
  • On the source, the account performing the copy needs local administrator or backup operator permissions on the source CIFS server, and full control over the source share.
  • Standard NDMP backups and isi_vol_copy_vnx can affect each other and the data backed up by the two NDMP clients; see KB187008 for a workaround.

Troubleshooting Tips:

  1. Checkpoint Creation on the VNX

The most common issue when running the isi_vol_copy_vnx command is with checkpoint creation on the source VNX.

If you receive a message similar to msg SnapSure file system creation failed during a copy session, the command is failing to create a snapshot of the source file system. This can happen for many reasons, including a lack of available disk space. Try manually creating a snapshot of the source VNX file system to see if it fails; the syntax is below:

#fs_ckpt Test_FS -name Test_FS-ckpt01 -Create
  2. Permission or Connection Issues.

In general the error message itself will be self-explanatory. Make sure you are using the correct credentials for the NDMP user in the migration command. The user should have sufficient rights on the source and target systems, and should be able to create and modify directories and the files contained within them.  As an example, in the case below NDMP port 53169 was blocked between the VNX and the Isilon; opening the port on the firewall resolved the issue.

ISILON568-1# isi_vol_copy_vnx 10.10.10.10:/Volcopytest /ifs/data/Volcopytest -sa Volcopytest:Volcopytest -sport 53169 -full -dhost 10.10.10.11
system call (connect): Connection refused
Could not open NDMP connection to host 10.10.10.10
isi_vol_copy_vnx did not run properly

3.  32bit Unix Application Issues.

If your application is 32-bit, the 32-bit file ID setting must be enabled on the new NFS export.

ISILON# isi nfs exports modify EXPID --zone=NFSZone --return-32bit-file-ids=yes

Replace EXPID with the ID of the target export, and verify the setting by viewing the export:

ISILON# isi nfs exports view EXPID --zone=NFSZone | grep -i return


4. Snapshot creation on the target Isilon array.

Snapshots can fail for many reasons, but most often it’s due to a lack of available space. In the example below the snapshot creation failed because there was an existing snapshot with the same name.

ISILON568# isi_vol_copy_vnx VNX-SERVER3:/Test_FS/NFS01 /ifs/data/Test_NFS01 -sa ndmp:NDMPpassword -incr
Snapshot with conflicting name ‘isi_vol_copy.011.1.snap’ found. Remove/Rename the conflicting snapshot to continue with further migration runs.
snapshot already exists
ISILON568-1#

Either delete or rename the existing snapshot to resolve the issue.  In the example below the snapshot was deleted.

ISILON568-1# isi snapshot snapshots list | grep isi_vol_copy.011
134 isi_vol_copy.011.0.snap /ifs/.ifsvar/modules/isi_vol_copy/011/persistent_store
136 isi_vol_copy.011.1.snap /ifs/.ifsvar/modules/isi_vol_copy/011/persistent_store
ISILON568-1#

ISILON568-1# isi snapshot snapshots delete --snapshot=134
Are you sure? (yes/[no]): y
ISILON568-1#

VNX NAS Files incorrectly report as Locked for Editing


When opening a shared Microsoft Office file, you may see the error message “File in Use, file_name is locked for editing by user_name“, when in fact no other user is currently using the file.

We had users who would view files with the preview pane; it would create a lock on the file, and when the Explorer window was closed the lock would remain.  The next time the file was accessed it would report as locked even though the user didn’t have it open.  Below are some steps you can take to troubleshoot and resolve the issue.  Note that changing some of these parameters can have a performance impact, so make these changes at your own risk.  For background, oplocks let clients lock files and locally cache information while preventing another user from changing the file, which increases performance for many file operations.

1. Disable Oplocks on the VNX

Disabling oplocks can affect client performance. It will increase the number of metadata requests that are sent to the server because when you use SMB with oplocks, the client caches the data that is locked to speed up access to frequently accessed files. When oplocks are disabled, the client does not cache data and all reads are made directly to the NAS server.

Syntax for disabling oplocks and verifying the change:

[nasadmin@VNX1 ~]$ server_mount vdm_file_system -o nooplock test_file_system01_fs /test_file_system01
vdm_file_system : done

[nasadmin@VNX1 ~]$ server_mount vdm_file_system | grep test_file_system01_fs
test_file_system01_fs on /test_file_system01 uxfs,perm,rw,noprefetch,nonotify,accesspolicy=NATIVE,nooplock

2. Disable caching on the Windows client

The Windows client setting controls the cache lifetime. As stated earlier, if caching is disabled on the Windows client, all reads go directly to the NAS server. To disable caching on the Windows client rather than disabling oplocks on the VNX Data Mover, the following three registry changes need to be made:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\

– Directory cache: set DirectoryCacheLifetime to zero.
– File Not Found cache: set FileNotFoundCacheLifetime to zero.
– File information cache: set FileInfoCacheLifetime to zero.
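
The three values above all live under the LanmanWorkstation Parameters key shown earlier; a registry (.reg) file like the following sketch would apply all of them at once (back up the registry before importing):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
; Set all three SMB client cache lifetimes to zero (disables client-side caching)
"DirectoryCacheLifetime"=dword:00000000
"FileNotFoundCacheLifetime"=dword:00000000
"FileInfoCacheLifetime"=dword:00000000
```

The settings apply to new SMB sessions; rebooting the client is the safest way to ensure they take effect.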

3. Apply a Microsoft hotfix

The Microsoft KB article http://support.microsoft.com/kb/942146 describes the problem and the fix in detail.  It directly addresses the issue of files locking from the preview pane, and it applies to all versions of Windows Vista and 7 as well as Windows Server 2008.


A Primer on Object Storage


I recently took a new job in the Enterprise storage group at a large company, and as part of my new responsibilities I will be implementing, configuring, and managing our object storage.  We will be using EMC’s ECS solution, however I’ve been researching object storage as a platform in general to learn more about it, its capabilities, its use cases, and its general management. This blog post is a summary of the information I’ve assimilated about object storage along with some additional information about the vendors who offer object storage solutions.

At its core, what is object based storage?  Object based storage is an architecture that manages data as objects as opposed to other architectures such as file systems that manage data as a file hierarchy, and block storage that manages data as blocks within sectors & tracks.  To put it another way, Object Storage is not directly accessed by the operating system and is not seen as a local or remote filesystem by an operating system. Interaction with data occurs only at the application level via an API.  While Block Storage and File Storage are designed to be consumed by your operating system, Object Storage is designed to be consumed by your application.  What are the implications of this?

  • Byte-level interaction with data is no longer possible. Data objects are stored or retrieved in their entirety with a single command. This results in powerful scalability by making all file I/O sequential, which performs far better than random I/O.
  • It allows for easier application development by providing a higher level of abstraction than traditional storage platforms.
  • Interaction can happen through a single API endpoint. This removes complex storage network topologies from the design of the application infrastructure, and dramatically reduces security vulnerabilities as the only available access is the HTTP/HTTPS API and the service providing the API functionality.
  • Filesystem level utilities cannot interact directly with Object Storage.
  • Object Storage is one giant volume, resulting in almost all storage management overhead of Block and File Storage being eliminated.
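
To make the whole-object model above concrete, here is a minimal sketch (illustrative Python only, not any vendor's API) of a flat namespace mapping unique identifiers to objects and their metadata:

```python
# Toy model of an object store: a flat address space of unique IDs,
# whole-object PUT/GET only, with customizable metadata on each object.
import uuid

class ObjectStore:
    def __init__(self):
        self.objects = {}  # flat namespace: id -> (data, metadata)

    def put(self, data: bytes, **metadata) -> str:
        oid = str(uuid.uuid4())           # unique identifier for the object
        self.objects[oid] = (data, metadata)
        return oid

    def get(self, oid: str) -> bytes:
        data, _ = self.objects[oid]       # entire object; no byte-level seek
        return data

store = ObjectStore()
oid = store.put(b"hello", owner="thesanguy", content_type="text/plain")
print(store.get(oid))  # b'hello'
```

Note there is no directory hierarchy anywhere in the model: the application holds the ID and retrieves the whole object through a single call.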

Object storage is designed to be more scalable than traditional block and file storage and it specifically targets unstructured content.  In order to achieve improved scalability over traditional file storage it bundles the data with additional metadata tags and a unique identifier. The metadata is completely customizable, allowing an administrator to input much more identifying information for each data object. The objects are also stored in a flat address space which makes it much easier to locate and retrieve data across geographic regions.

Object storage began as a niche technology for enterprises, however it quickly became one of the basic general underlying technologies of cloud storage.  Object storage has become a valid alternative to file based storage due to rapid data growth and the proliferation of big data in the enterprise, an increased demand for private and hybrid cloud storage and a growing need for customizable and scalable storage infrastructures.  The number of object storage products has been rapidly expanding from both major storage vendors and startup companies in recent years to accommodate the increasing demand.  Because many vendors offer Object Storage platforms that run on commodity hardware, even with data protection overhead the price point is typically very attractive when compared to traditional storage.

WHAT’S THE DIFFERENCE BETWEEN OBJECT AND FILE STORAGE?

Object storage technology has been finding its way into file-based use cases at many companies.  In some cases object storage vendors are positioning their products as viable NAS alternatives. To address the inherent limitations of traditional file and block level storage in reliably supporting a huge amount of data while remaining cost-effective, object storage focuses on scalability, resiliency, security and manageability. So, what’s the difference between the two?  In general, the difference lies in performance, geographic distribution, scalability, and analytics.

OBJECT STORAGE IS HIGHLY SCALABLE

Scalability is a major issue in storage, and it’s only increasing as time goes on. If you need to scale into the petabytes and beyond, you may need to scale an order of magnitude beyond what a traditional single storage system is capable of.  As traditional storage systems aren’t going to scale to that magnitude, a different type of storage needs to be considered that can still be cost effective, and object storage fills that need very well.

Object storage overcomes many of the scalability limitations that file storage faces.  I really liked the warehouse example that Cloudian used on their website, so I’m going to summarize that.  If you think of file storage as a large warehouse, when you first store a box of files your warehouse looks almost empty and your available space looks infinite. As your storage needs expand that warehouse fills up before you know it, and being in the big city there’s no room to build another warehouse next to it.  In this case, think of object storage as a warehouse without a roof.  New boxes of files can be added almost indefinitely.

While that warehouse with the infinite amount of space sounds good in theory, you may have some trouble finding a specific box in that warehouse as it expands into infinity. Object storage addresses that limitation by allowing customizable metadata.  While a file storage system may only allow metadata to save the date, owner, location, and size of the box, the object storage system can customize the metadata, and object metadata lives directly in the object, rather than in a separate inode (this is useful as the amount of metadata that is desirable in a storage platform that is tens or hundreds of Petabytes is generally an order of magnitude greater than what conventional storage is designed to handle at scale).  Getting back to the warehouse example, along with the date, owner, location, and size, the object metadata could include the exact coordinates of the box, a detailed log of changes and access, and a list of the contents of the box.  Object storage systems replace the limited and rigid file system attributes of file level storage with highly customizable metadata that captures common object characteristics and can also hold application-specific information. Because object storage uses a flat namespace performance may suffer as your data warehouse explodes in size, but you’re not going to have to worry about finding what you need when you need it.
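
Continuing the warehouse example, the customizable metadata is what keeps the flat namespace searchable; a hypothetical sketch in Python (object IDs and tags below are made up for illustration):

```python
# Each object carries its own metadata tags; finding boxes in the
# "roofless warehouse" is a filter over those tags, not a tree walk.
objects = {
    "obj-001": {"owner": "alice", "project": "Q3-report", "region": "us-east"},
    "obj-002": {"owner": "bob",   "project": "Q3-report", "region": "eu-west"},
    "obj-003": {"owner": "alice", "project": "archive",   "region": "us-east"},
}

def find(**tags):
    """Return the IDs of all objects whose metadata matches every tag."""
    return [oid for oid, meta in objects.items()
            if all(meta.get(k) == v for k, v in tags.items())]

print(find(project="Q3-report"))  # ['obj-001', 'obj-002']
```

Real object stores index this metadata at scale, but the lookup model is the same: queries go against the tags, so location in a hierarchy never matters.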

In addition, object storage systems replace the locking requirements of file-level storage that prevent multiple concurrent updates, which enables rollback features and the undeleting of objects, as well as access to prior versions of objects.

Object vs. File.  Here’s a brief overview of the main differences.

  • Performance.  Object storage performs best for big data and high storage throughput; file storage performs better for smaller files.  Scality’s RING offers high performance configurations for applications such as email and databases, but traditional storage in general still offers better performance for those use cases.
  • Geography.  Object storage data can be stored and shared across multiple geographic regions; file storage data is typically shared locally.  File data spread across geographic regions typically consists of read-only replicated copies.
  • Scalability.  Object storage offers almost infinite scalability, file storage does not scale nearly as well when you get into millions of files in a volume and petabytes and beyond of total capacity.
  • Analytics.  Object storage offers customizable metadata in a flat namespace and is not limited in the number of metadata tags, file storage is limited in that respect.

OBJECT STORAGE AND RESILIENCY

So, we now understand that object storage offers much greater scalability than traditional file storage, but what about resiliency?  Traditional file storage systems are hampered by their inherent limitations in supporting massive capacity, most importantly with a sufficient amount of data protection.  As any backup administrator knows, it’s unrealistic to try to back up hundreds of petabytes (or more) of data in any reasonable amount of time.  Object storage systems directly address that issue: they are designed not to need backups.  Rather than using traditional backups, an object storage infrastructure is designed to store data with sufficient redundancy so that data is never lost even when multiple components of the infrastructure have failed.

How is this achieved? Primarily by keeping multiple replicas of objects, ideally across a wide geographic area. Because of the additional storage that replication requires, object storage systems implement an efficient erasure coding data protection method to supplement data replication. What is erasure coding? It uses an algorithm to create additional information that allows data to be recreated from a subset of the original data, similar to RAID protection’s parity bits.  The degree of resiliency is generally configurable on all object storage systems; the higher the level of resiliency the administrator chooses, the larger the storage capacity requirement. Erasure coding saves capacity but impacts performance, especially if it is performed across geographically dispersed nodes, and different vendors balance erasure coding against replication differently.  Geographic erasure coding is generally supported; however, using erasure coding only locally and replicating data geographically with data reduction seems to strike a good balance between performance and resiliency.
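
The capacity trade-off between replication and erasure coding is easy to quantify. A quick sketch (the 12+4 scheme is just an example; actual fragment layouts vary by vendor and configuration):

```python
# Raw capacity consumed per usable unit of data, for two protection schemes.
def replication_overhead(copies: int) -> float:
    # N full copies of every object -> N units of raw storage per unit usable
    return float(copies)

def erasure_overhead(data_frags: int, parity_frags: int) -> float:
    # k data fragments + m parity fragments -> (k+m)/k raw per unit usable;
    # any m fragments can be lost without losing data
    return (data_frags + parity_frags) / data_frags

print(replication_overhead(3))            # 3.0  (triple replication)
print(round(erasure_overhead(12, 4), 2))  # 1.33 (12+4 erasure coding)
```

In this example the erasure-coded layout survives four simultaneous fragment losses at well under half the capacity cost of three-way replication, which is why erasure coding dominates at scale despite its performance cost.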

OBJECT STORAGE EASES MANAGEMENT

Object storage systems are designed to minimize storage administration through automation, policy engines and self-correcting capabilities. They are designed for zero downtime, and across all vendors administration tasks can be performed without service disruption.  This includes adding capacity, hardware maintenance and upgrades, and even migrating to a different data center.  The object storage policy engines enable the automation of features such as when to change the number of replicas to address spikes in usage, when to use replication vs. erasure coding, and which data centers should store objects, based on the relevant metadata.

OBJECT STORAGE AND APPLICATION ACCESSIBILITY

As you might expect, each object storage vendor has implemented its own proprietary REST-based API to access the various storage functions.  Virtually all object storage products also support the industry-standard Amazon S3 API, which enjoys perhaps the largest level of application support.  That’s not surprising, as Amazon S3 has extensive capabilities and supports complex storage operations.  Be aware that some object storage vendors support only a subset of the S3 API, and understanding the limitations of each S3 API implementation is absolutely key to ensuring the widest level of application support in your environment.

In addition to S3, most object storage vendors also support the OpenStack Swift API.  File system protocol support is common in object storage systems, but implementations of course vary by product.  As I mentioned earlier, the company I work for went with ECS.  EMC ECS has geo-distributed active/active NFS support, a key feature, and it offers Hadoop HDFS interfaces which allow Hadoop to directly access data in an object store.  With the ECS system’s consistency support it’s a very strong geo-distributed NAS product.  Different vendors of course have different strengths and weaknesses.  A strong competitor of EMC, Scality, claims that it has EMC Isilon-level NAS performance (although I haven’t tested that), and the NetApp StorageGRID Webscale now offers protocol duality through a one-to-one relationship between objects and files.  Other object storage products provide different unique features as well: some offer file system support through their own or third-party cloud storage gateways, and some provide S3-compliant connectors that allow Hadoop to use object storage as an alternative to HDFS.

OBJECT STORAGE AND DATA ENCRYPTION

Public cloud storage is a very common use case for object storage, and encryption is obviously a must for public cloud storage.  Most object storage products support both at-rest and in-transit encryption, and most use an at-rest encryption approach where encryption keys are generated dynamically without a need for a separate key management system.  Some vendors (such as Cloudian and Amplidata) support client-managed encryption keys in addition to server-side managed encryption keys, which gives cloud providers an option to allow their customers to manage their own keys.  LDAP and Active Directory authentication for users accessing the object store is also commonly supported in current object storage systems.  If support of AWS v2 or v4 authentication is needed to provide access to vaults and vault objects, do your research, as support is less common.

THE BEST USE CASES FOR OBJECT STORAGE

The ability of object storage to scale, and its accessibility via APIs, make it suitable for use cases where traditional storage systems just can’t compete, even in the NAS arena.  So, now that you know what object storage is, what can it be used for, and how can you take advantage of the improved scalability and accessibility?  While object storage is typically not well suited to relational databases or any data that requires a large amount of random I/O, it has many possible use cases, which I’ve outlined below.

Advantages

  • Highly Scalable capacity and performance
  • Low cost on commodity hardware at petabyte scale
  • Simplified management
  • Single Access Point/namespace for data

Disadvantages

  • No random access to files
  • Lower performance on a per-object basis compared to traditional storage
  • Integration may require modification of application and workflow logic
  • POSIX utilities will not work directly with object storage

Use Cases

  • Logging.  It is often used to capture large amounts of log data generated by devices and applications which are ingested into the object store via a message broker.
  • NAS.  Many companies are considering object storage as a NAS alternative, most notably if there is another use case that requires an object storage system and the two use cases can be combined.
  • Big Data.  Several object storage products offer certified S3 Hadoop Distributed File System interfaces that allow Hadoop to directly access data on the object store.
  • Content distribution network.  Many companies use an object storage implementation to globally distribute content (like media files) using policies to govern access, along with features like automatic object deletion based on expiration dates.
  • Backup and Archive of structured and semi-structured data.  Because object storage systems are cost-effective, many companies are looking to them as a highly scalable backup and archive solution.
  • Content Repositories.  Object storage is often used as a content repository for images, videos and general media content accessed directly through applications or through file system protocols.
  • Enterprise Collaboration.  Because of the scale and resiliency of object storage across large geographic regions, distributed object storage systems are often used as collaboration platforms in large enterprises where content is accessed and shared around the globe.
  • Storage as a Service (STaaS).  Object storage is often used for the private and public clouds of enterprises and internet service providers.

OBJECT STORAGE VENDORS AND PRODUCTS

There are numerous object storage vendors in the market today.  You can purchase a detailed vendor comparison of all the object storage vendors from the Evaluator Group (Evaluator Group Object Storage Comparison Matrix), and view the Gartner object storage comparison matrix for more detailed information.  According to Gartner, DellEMC, Scality, and IBM are the current market leaders, with Hitachi as the strongest challenger.

Gartner Market Leaders for Object Storage:

EMC Elastic Cloud Storage (ECS)

EMC delivers ECS as a turnkey integrated appliance or as a software package to be installed and run on commodity hardware.  It features highly efficient strong consistency on access of geo-distributed objects, and it was designed from the ground up with geo-distribution in mind.

Scality RING

It is delivered as software only, to run on commodity hardware.  It stores metadata in a custom-developed distributed database, and Scality claims EMC Isilon-level performance when it is used as NAS.

IBM Cloud Object Storage

It is delivered as software only, to run on certified hardware.  It is a multi-tiered architecture with no centralized servers, and it offers extreme scalability enabled by peer-to-peer communication between storage nodes.

Other Object Storage Vendors:

Hitachi Content Platform (HCP)

It is delivered as a turnkey integrated appliance, as a software package to run on commodity hardware, or as a managed service hosted by HDS.  It offers extreme density, with a single cluster able to support up to 800 million objects and 497 PB of addressable capacity, and an integrated portfolio: HCP cloud storage, HCP Anywhere File Sync & Share, and Hitachi Data Ingestor (HDI) for remote and branch offices.

NetApp StorageGRID Webscale

It is delivered as software appliance or as a turnkey integrated appliance, it stores metadata (including the physical location of objects) in a distributed NoSQL Cassandra database.

DDN WOS Object Storage

It is delivered as a turnkey integrated appliance or as a software package to run on commodity hardware.  It can start as small as a single node and scale to hundreds of petabytes.

Caringo Swarm 8

It is delivered as a software package to run on commodity hardware. It offers out-of-box integration with Elastic search for fast object searching.

Red Hat Ceph

It is delivered as a software package to run on commodity hardware and is based on the open-source Reliable Autonomic Distributed Object Store (RADOS).  It features strong consistency on write performance.

Cloudian HyperStore

It is delivered as turnkey integrated appliance or as a software package to run on commodity hardware.  It stores metadata with objects but also in a distributed NoSQL Cassandra database for speed.

HGST Amplidata

It is delivered as a turnkey rack-level system and uses HGST Helium filled hard drives for power efficiency, reliability and capacity.

SwiftStack Object Storage System

It is delivered as a software package to run on commodity hardware and is based on OpenStack Swift, which is the enterprise offering of Swift with cluster and management tools and 24×7 support.

