
VNX vs. V7000


Like my previous post about the XIV, I did a bit of research on how the IBM V7000 compares as an alternative to EMC’s VNX series (which I use now).  A single V7000 holds only 240 disks, but it allows up to 4 nodes in a cluster, so it comes close to matching the VNX 7500’s 1,000 disk capability.  It’s an impressive array overall but does have a few shortcomings.  Here’s a brief overview of some interesting facts I’ve read, along with some of my opinions based on them.  It’s by no means a comprehensive list.

Good:

· Additional processing power.  Block IO can be spread across 8 storage processors in a four-node cluster.  You need to buy four frames to get 8 SPs and up to 960 disks.  Storage pools can span clustered nodes.

· NAS archiving support.  The NAS side has a built-in archiving option that makes it easy to move files to a different array or to different disks based on date.  EMC does not have this feature built into DART.

· File agent.  There is a management agent for both block and file.  EMC does not have a NAS/file host agent; you have to open a session and log in to the Linux-based OS to work with DART.

· SDD multipathing agent is free.  IBM’s SDD multipathing agent (comparable to PowerPath) is free of charge.  EMC’s is not.

· Nice management GUI.  The GUI is extremely impressive and easy to use.

· Granularity.  The EasyTier data chunk size is configurable; EMC’s FAST VP is fixed at 1 GB (see the sketch after this list).

· Virtualization.  The V7000 can virtualize other vendors’ storage, making it appear as disks on the V7000 itself.  Migrations from other arrays would be simple.

· Real-time compression.  According to IBM, real-time compression actually increases performance when enabled.  EMC uses post-process compression.
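To put the chunk-size difference in concrete terms, here’s a quick illustrative calculation.  The 256 MB EasyTier extent size below is just an assumed example value for comparison, not an IBM default; the fixed 1 GB FAST VP slice is the figure mentioned above.

```python
# Illustrative only: how tiering chunk size affects relocation granularity.
# The 256 MB EasyTier extent size is an assumed example value; the 1 GB
# FAST VP slice size is the fixed figure on the VNX.

MB_PER_GB = 1024

def chunks_per_lun(lun_size_gb, chunk_mb):
    """Number of independently tierable chunks in a LUN of the given size."""
    return (lun_size_gb * MB_PER_GB) // chunk_mb

lun_gb = 2048  # a 2 TB LUN, for example

for label, chunk_mb in [("FAST VP (fixed 1 GB slice)", 1024),
                        ("EasyTier (assumed 256 MB extent)", 256)]:
    n = chunks_per_lun(lun_gb, chunk_mb)
    print(f"{label}: {n} chunks, smallest promotable unit = {chunk_mb} MB")
```

The smaller the chunk, the less cold data gets dragged up to SSD along with each hot spot, which is why a configurable chunk size matters.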

Bad: 

· No backend bus between nodes.  Clustered nodes have no backend bus and must be zoned together, so inter-node IO traverses the same fabric as the hosts: every node plugs into the core switches and is zoned to the others to let them communicate.  I don’t see that configuration as optimal.  According to IBM the extra hop adds 0.06 ms of latency (see the rough numbers after this list), and in my opinion that figure could grow with IO contention from hosts.  You may see an improvement in performance and response time from the extra cache and from striping across more disks, but you would see that on the VNX as well by adding disks to a pool and increasing the size of FAST Cache, and all of the disk IO would stay on the VNX’s faster backend bus.  The fact that the clustered configuration introduces any latency at all is a negative in my mind.  Yes, this is another “FUD” argument.

· Lots of additional connectivity required.  Each node uses 4 FC ports on the core switches, which works out to 16 ports per fabric in a four-node cluster.  That’s significantly more core-switch port utilization than a VNX would need.

· No 3-tier support in storage pools.  IBM’s EasyTier supports only two tiers of disk.  For performance reasons I would want to use only SSD and SAS, and eliminating NL-SAS from the pools would significantly increase the number of SAS drives I’d have to buy to make up the difference in capacity (see the sizing sketch after this list).  EMC’s FAST VP, of course, supports 3 tiers in a pool.

· NAS IO limitation.  On unified systems, NAS file systems can only be serviced by IOgroup0, which means they can only use the two SPs in the first node of the cluster.  The file systems can still be spread across disks on other nodes, however.
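Just to put IBM’s quoted 0.06 ms inter-node hop in perspective, here’s a rough back-of-the-envelope calculation.  The host response times used below are assumed example values, not measurements from either array.

```python
# Rough illustration of the inter-node fabric hop quoted by IBM (0.06 ms)
# as a fraction of some assumed host response times. These response times
# are example values only, not benchmark results from either array.

inter_node_latency_ms = 0.06  # figure quoted by IBM, per the text above

for response_ms in (2.0, 5.0, 10.0):
    overhead_pct = inter_node_latency_ms / response_ms * 100
    print(f"{response_ms:.1f} ms host response: "
          f"+{overhead_pct:.1f}% if the IO crosses nodes")
```

On paper the overhead is small, but it assumes an uncongested fabric; the concern above is that contention from host traffic on the same core switches could push it higher.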
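As a rough feel for the two-tier penalty, here’s an illustrative drive-count comparison.  The pool size, tier split, drive capacities, and usable fraction after RAID and sparing are all assumed example values, not sizing guidance for either product.

```python
import math

# Illustrative sizing sketch: drives needed for the same usable capacity
# with and without an NL-SAS tier. All figures below (tier split, drive
# sizes, usable fraction after RAID/sparing) are assumed example values.

def drives_needed(capacity_tb, drive_tb, usable_fraction=0.8):
    """Whole drives needed to deliver capacity_tb of usable space."""
    return math.ceil(capacity_tb / (drive_tb * usable_fraction))

# 100 TB usable pool, split 80 TB cold / 18 TB warm / 2 TB hot.

# Three-tier pool (e.g. FAST VP): the cold data lands on big NL-SAS drives.
nl_sas = drives_needed(80, 3.0)   # 3 TB NL-SAS
sas_3t = drives_needed(18, 0.6)   # 600 GB SAS
ssd_3t = drives_needed(2, 0.2)    # 200 GB SSD

# Two-tier pool (SSD + SAS only): the cold 80 TB must also sit on SAS.
sas_2t = drives_needed(98, 0.6)
ssd_2t = drives_needed(2, 0.2)

print("3-tier:", nl_sas + sas_3t + ssd_3t, "drives",
      f"({nl_sas} NL-SAS, {sas_3t} SAS, {ssd_3t} SSD)")
print("2-tier:", sas_2t + ssd_2t, "drives",
      f"({sas_2t} SAS, {ssd_2t} SSD)")
```

With numbers like these, the two-tier pool needs well over twice the spindle count (and many more enclosures) to hold the same capacity, which is the cost the bullet above is getting at.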

 


