
EMC VFCache versus IBM XIV Gen3 SSD Caching – Setting Tony Straight


Way back in February 2012, Tony Pearson, over at the IBM Inside System Storage blog, wrote a comparison of EMC VFCache (a localised, PCIe SSD based cache) and IBM XIV Gen3 SSD read caching (a remote, array-side SSD cache). It was such a misinforming comparison that I felt it was important to clarify the difference between the two and make a more accurate comparison, for the benefit of both the buying public and those at IBM who obviously need to do a bit more research.

Now, I’ve been MIA for a bit, extremely busy with some major life-changing events, so I apologise for both my absence and the haphazard writing here; I hope it helps you understand the significant differences between the two solutions.

Setting Tony Straight

I don’t have a relative in the film business like Tony Pearson, but I do have a video shop down the road.

It’s often the case that, after the release of a rather serious movie, a sometimes funny / often terrible parody of said movie is released.

FTC Disclosure: I am NOT employed by any vendor and receive no compensation from any vendor, with the exception of the following:

  • EMC – Various USB keys, 1 x Iomega HDD, 1 x decent rain jacket, 1 x baseball cap, several t-shirts and polos, a few business lunches and dinners (not in relation to this blog), 1 x bottle-opener keyring, pens.
  • NetApp – Various USB keys, 1 x multi-tool and torch, 1 x baseball cap, several t-shirts and polos, a few business lunches and dinners (not in relation to this blog), 1 x bottle-opener keyring (just like EMC’s :) ), pens, playing cards, clock.
  • HDS – Various USB keys, 1 x baseball cap, several t-shirts and polos, stress ball.
  • Compellent and Dell Compellent – Various USB keys, 1 x baseball cap, several t-shirts and polos.
  • IBM – Various USB keys, 1 x baseball cap, several t-shirts and polos, stress ball (ironic, really).
  • HP – Various USB keys, 1 x baseball cap, several t-shirts and polos, brolly, stress ball.
  • Most garnered as gifts or prizes at conferences.

Whilst I may sound like a mouthpiece for EMC by now, I’m my own man; just out to stop FUD, whoever writes it!

Some great examples spring to mind, but the one movie I want to reference most is Thank You for Smoking – a parody on the role of PR as a whole – because it provides a great example of how Tony uses the fallacy known as the “straw man argument”.

It’s often easier to argue on what someone doesn’t believe than what they do believe. The straw man argument is characterized by a misrepresentation of an opponent’s viewpoint to make for easier and more eloquent criticism of that opinion.

In the following example from the movie “Thank You for Smoking,” notice how Nick characterizes Joey’s position as “anti-choice” which is absurd and meaningless in the context of their original debate:

So, what happens when you’re wrong?
Well, Joey, I’m never wrong.
But you can’t always be right.
Well, if it’s your job to be right, then you’re never wrong.
But what if you are wrong?
Okay, let’s say that you’re defending chocolate and I’m defending vanilla. Now, if I were to say to you, “Vanilla’s the best flavor ice cream”, you’d say …?
“No, chocolate is.”
Exactly. But you can’t win that argument. So, I’ll ask you: So you think chocolate is the end-all and be-all of ice cream, do you?
It’s the best ice cream; I wouldn’t order any other.
Oh. So it’s all chocolate for you, is it?
Yes, chocolate is all I need.
Well, I need more than chocolate. And for that matter, I need more than vanilla. I believe that we need freedom and choice when it comes to our ice cream, and that, Joey Naylor, that is the definition of liberty.
But that’s not what we’re talking about.
Ah, but that’s what I’m talking about.
But … you didn’t prove that vanilla’s the best.
I didn’t have to. I proved that you’re wrong, and if you’re wrong, I’m right.

Rather than go into detail of what a straw man argument is, I’ll let this guy do a better job of explaining it for you:

And so I wonder whether Tony Pearson over at IBM was writing in the style of the great lampoons when he made a comparison between IBM XIV’s latest addition of SSD drives as array-based read cache and EMC’s host-based VFCache.

You see, the two are related only by the fact that they are flash/SSD solutions – and that’s where the similarities stop; they solve completely different problems. One creates an extended read cache in the array, away from the application (XIV Gen3 SSD cache), and the other creates an extended read cache in the host, right next to the application (EMC VFCache).

Now, if you want to talk about copy-cats: well, Tony, adding flash as a cache to the XIV more than two years after EMC did it with the CLARiiON – that is a copy-cat. Except the XIV offering is read-only cache, while EMC FAST Cache is both read and write!

I’ve mentioned my view on FUD before – I just don’t like it – and Tony’s post was nothing but FUD, in the form of a straw man argument “sprinkled liberally” all over it!

But it’s not just Tony and IBM: HP, NetApp and a whole host of others are popping out of the woodwork to proclaim that VFCache is a solution to a non-existent problem that only EMC has. The reality is that all vendors have the same problem, but none at present has an alternative of their own to VFCache, and no array in existence can solve the problem that VFCache solves – latency external to the array. And they’ll keep preaching this until… they release their own.

A real comparison of the two architectures – EMC VFCache vs. IBM XIV Gen3 SSD Cache:

In reality, the EMC VFCache and XIV Gen3 SSD Cache have some very minor similarities, but the two solutions are completely different as are the intended purposes and architectures.

Placement
  • EMC VFCache: in the host.
  • IBM XIV Gen3 SSD Caching: in the array.

Physical makeup
  • EMC VFCache: PCIe SLC SSD card (SLC = single-level cell, 1 bit/cell).
  • IBM XIV Gen3: SATA/SAS MLC SSD drive in a PCIe interposer slot (MLC = multi-level cell, 2 bits/cell).

Intended purpose
  • EMC VFCache: for applications demanding ultra-low latency, such as OLTP, reporting and analytics. Host- and/or application-specific acceleration of regular data reads, localised at the host to bypass the external latencies of distance, media and switching.
  • IBM XIV Gen3: overall array acceleration (or selected volumes) for regular read requests, localising data in a medium faster than rotating disk so reads bypass the array’s disks.

Architecture
  • EMC VFCache: in-host PCIe SSD card with driver-level filter algorithms that detect regular read requests and keep the required data as close to the application as possible, rather than resorting to the storage network and its latencies.
  • IBM XIV Gen3: in-array SSD drives with intelligent algorithms that detect regular read requests and reduce the dependency on traditional rotational drives for reads.

Method
  • EMC VFCache: intelligent algorithms detect read requests and create a local copy of the read data in the host.
  • IBM XIV Gen3: intelligent algorithms detect read requests and build an array-side SSD cache to supplement the array’s DRAM-based cache.

Benefit
  • EMC VFCache: ultra-high-speed, low-latency read delivery that greatly improves the response times of time-sensitive applications; data sits closer to the application.
  • IBM XIV Gen3: high-speed read delivery for the entirety of the data served by the array (or for specific volumes).

You see, VFCache solves a problem that no storage array architecture can solve: Latency outside of the array.

Here are the causes of this latency in order:

  • Application
  • OS
  • File system
  • CPU
  • Memory
  • Bus
  • Block Driver
  • Host Bus Adapter (HBA)
  • HBA Media (the electronic to optics conversion)
  • Cable
  • Switch port media
  • Switch ASIC quad
  • Switch ASIC quad (another one if not in the same quad)
  • Switch port media
  • Cable
  • HBA Media (the electronic to optics conversion)
  • Array HBA
  • Array BUS
  • Array CPUs
  • Jibbly bits inside the array code (Programming is a dark art to me)
  • Array Memory
  • Array Internal HBA and media
    • Switching if applicable
  • Array backend Cables
  • Drive Tray switching (Or more CPU/Memory/BUS/Drive Controller if applicable)
  • Drives
  • And back again. (Did I miss anything? – FOBOTS, Routing, dirty media, OSI Layers………..)

From there, it’s up to the array to deliver the requested information and send it back through the same external path, with its corresponding latencies.
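To put rough numbers on that chain (the figures below are invented purely for illustration, not measurements), the shape of the problem looks like this:

```python
# Illustrative only: invented per-hop latencies showing that even a fast
# array cannot remove the time spent *outside* itself, while a host-side
# cache sidesteps the whole external path.
EXTERNAL_PATH_US = {  # everything between the application and the array
    "host stack (app/OS/driver/HBA)": 20.0,
    "optics and cable (both directions)": 1.0,
    "switching (both directions)": 4.0,
}
ARRAY_SERVICE_US = 150.0  # array front end + cache service time (assumed)

def array_read_us():
    """Round-trip time for a read served by the array, even from its cache."""
    return sum(EXTERNAL_PATH_US.values()) + ARRAY_SERVICE_US

def host_cache_read_us(card_service_us=50.0):
    """A read served by an in-host PCIe flash card never leaves the host."""
    return card_service_us

print(f"via array: {array_read_us():.0f} us, via host cache: {host_cache_read_us():.0f} us")
```

Whatever the real figures turn out to be in a given shop, the external portion is invisible to the array – and that is the gap VFCache targets.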

Now, let’s look at an example of what VFCache does:

Imagine a rather quick office worker who suddenly needs a yellow form to fill out. She rushes off down the hall, into the elevator (or lift, depending on where you’re from), down, then along the hall to the records department, gets the form from the records keeper and heads back – and every time she needs the same form, she does this over and over again:

Now this time, she gets clever and invests in a filing cabinet to store the forms she uses most often locally – Big time saver that:

But say she now needs a green form as well as the yellow one on a regular basis: she goes off and gets it like the first time, but this time keeps a copy in the filing cabinet alongside the yellow form:

Now let’s replace this clever girl, forms, filing cabinet and records department with a data centre and it’d look like this:

Here is an example of how a read request is served in a typical storage environment:

Here’s what happens when VFCache is introduced and serves a read request already known:

And this time, when a new read request is made that is not in cache, but soon will be:
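What those pictures describe can be sketched in a few lines of Python. This is purely illustrative – the class and function names are mine, not EMC’s driver internals:

```python
# A minimal read-through cache sketch: check the local (host-side) cache
# first; on a miss, fetch from the "array" and keep a copy for next time.

class LocalReadCache:
    def __init__(self, backend):
        self.backend = backend  # stand-in for the array across the SAN
        self.cache = {}         # stand-in for the PCIe flash card
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:       # served locally: no SAN round trip
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backend[block_id]    # the full trip to the array
        self.cache[block_id] = data      # promote: the next read is local
        return data

array = {"yellow-form": b"yellow", "green-form": b"green"}
host = LocalReadCache(array)
host.read("yellow-form")  # miss: fetched from the array, copy kept locally
host.read("yellow-form")  # hit: served straight from the host-side cache
host.read("green-form")   # miss: fetched and promoted, like the green form
print(host.hits, host.misses)  # 1 hit, 2 misses
```

Real implementations bound the cache and evict the least-recently-used blocks; the point here is simply that a hit never leaves the host.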

So, now that we understand how EMC VFCache works, shall we take a look at an improved version of Tony’s bodged comparison table?

Servers supported
  • EMC VFCache: selected x86-based models of Cisco UCS, Dell PowerEdge, HP ProLiant DL, and IBM xSeries and System x servers – pretty much most environments out there! (And the VNX supports quite a few more than XIV; from my recollection, even a 520-bytes-per-sector variant for IBM System z running z/OS and for iSeries.)
  • IBM XIV Gen3: all of these, plus any other blade or rack-optimised server currently supported by XIV Gen3, including Oracle SPARC, HP Itanium, IBM POWER systems, and even IBM System z mainframes running Linux.

Operating system support
  • EMC VFCache: Linux RHEL 5.6 and 5.7, VMware vSphere 4.1 and 5.0, and Windows 2008 x64 and R2 – yup, pretty much anything which needs acceleration!
  • IBM XIV Gen3: all of these, plus all the other operating systems supported by XIV Gen3, including AIX, IBM i, Solaris, HP-UX, and Mac OS X.

Protocol support
  • EMC VFCache: FCP (with iSCSI, FCoE and more to come, I’m sure).
  • IBM XIV Gen3: FCP and iSCSI.

Vendor-supplied driver required on the server
  • EMC VFCache: yes, the VFCache driver must be installed to use this feature.
  • IBM XIV Gen3: no, it uses native OS-based multi-pathing drivers – not quite as good as the multipath I/O drivers from EMC, HDS and Symantec.

Works with a variety of storage solutions from many vendors
  • EMC VFCache: yes; VFCache is QUALIFIED with EMC storage at present, but will work with almost all FC storage.
  • IBM XIV Gen3: no; you need an XIV Gen3 to use SSD cache.

External disk storage systems required
  • EMC VFCache: none; VFCache appears to have no direct interaction with the back-end disk array, so in theory the benefits are the same whether the card sits in front of EMC storage or IBM storage.
  • IBM XIV Gen3: an XIV Gen3 is required, as the SSD slots are not available on older IBM XIV models.

Ability to serve read requests in under 100 microseconds (< 100 µs)
  • EMC VFCache: yes!!!
  • IBM XIV Gen3: no; XIV Gen3 is subject to all the same delivery issues listed earlier.

Able to serve cached reads without the added latency of the interconnect
  • EMC VFCache: yes – application > OS > bus > VFCache.
  • IBM XIV Gen3: no.

Able to accelerate VMware guests with ultra-low latency, eliminating read bottlenecks in storage networking
  • EMC VFCache: yes!!!
  • IBM XIV Gen3: no; XIV Gen3 is subject to the same delivery issues listed earlier, just like any other array.

Ability to support multiple arrays
  • EMC VFCache: yes.
  • IBM XIV Gen3: no; the SSD goes into one XIV Gen3 and is limited to that array.

Can use higher-speed array disks (10k/15k or even SSD) on a cache miss
  • EMC VFCache: yes; when data is not in cache, it can be served from high-speed array disks for consistent performance.
  • IBM XIV Gen3: no; on a miss you’re stuck with 7.2k RPM SAS (no faster than what’s in your typical desktop).

Support for multiple servers
  • EMC VFCache: yes – put ’em in as many servers as you want.
  • IBM XIV Gen3: an advantage of the XIV Gen3 SSD caching approach is that the cache can be dynamically allocated to the busiest data from any server or servers (no different to almost any other array that offers SSD caching).

Support for active/active server clusters
  • EMC VFCache: not yet… but the VNX is, just as it’s designed to be – Tony, this is a localised cache.
  • IBM XIV Gen3: yes!

Sequential-access detection
  • EMC VFCache: yes – back at the array, where sequential detection belongs, not in the cache! And the VNX is not crippled on sequential access like the XIV, which can only use 7.2k drives; the VNX can use 15k and 10k drives as well as 7.2k.
  • IBM XIV Gen3: yes! XIV algorithms detect sequential access and avoid polluting the SSD with those blocks of data.

Number of SSDs supported
  • EMC VFCache: one – and that’s all you should need; it’s an in-host cache! Add to that, EMC FAST Cache can provide up to 2 TB of array-based cache, and if you still need more, the VNX can support even more SSDs as real drives – they’ve even got a full array of nothing but SSD, the EMC VNX 5500-F.
  • IBM XIV Gen3: only 6 to 15 (one per XIV module).

Pin data in SSD cache
  • EMC VFCache: yes; using split-card mode, you can designate a portion of the 300 GB as direct-attached storage (DAS), and all data written to the DAS portion is kept in SSD. However, since only one card is supported per server and that data is unprotected, this should only be used for ephemeral data like logs and temp files.
  • IBM XIV Gen3: no; there is no option to designate an XIV Gen3 volume as SSD-only. Consider a Fusion-io PCIe card as a DAS alternative, or another IBM storage system for that requirement.

Personal note: Tony, I loved how you added the Fusion-io bit to the end of your table AFTER my comment… gotta love all that research…

See what I did there? Anyone can devise one of these tables, tilted to their own preference. The truth is, there shouldn’t be a comparison table at all… they’re two completely different solutions.

Tony’s blog also has this little gem that I found funny:

Sequential-access detection
  • EMC VFCache: none identified. However, VFCache only caches blocks 64KB or smaller, so any sequential processing with larger blocks will bypass the VFCache.
  • IBM XIV Gen3: yes! XIV algorithms detect sequential access and avoid polluting the SSD with these blocks of data.

However, according to IBM Redpaper REDP-4842-00, it appears the IBM XIV Gen3 also bypasses its SSD cache for any read larger than 64KB… hmm… so the two have another similarity beyond simply being SSD:

3.2.3 Random reads with SSD Caching enabled:
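That shared admission rule – bypass the SSD cache for large, likely sequential reads – can be sketched as follows (the constant and function names are illustrative, not either vendor’s actual code):

```python
# Sketch of the shared admission rule: reads larger than 64 KB are assumed
# to be sequential/streaming I/O and bypass the SSD cache, so they don't
# evict the small random blocks the cache exists to serve.
CACHE_READ_LIMIT_BYTES = 64 * 1024  # the 64 KB threshold both vendors describe

def should_cache_read(size_bytes: int) -> bool:
    """Admit a read into the SSD cache only if it looks small/random."""
    return size_bytes <= CACHE_READ_LIMIT_BYTES

print(should_cache_read(4 * 1024))    # True: 4 KB random read is cached
print(should_cache_read(256 * 1024))  # False: large sequential read bypasses
```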

According to IDC, IBM continues to lose market share on the storage side. On a recent earnings call, IBM announced (again) that storage revenues had declined, in an otherwise rudely robust marketplace where everyone else seems to be growing.

Tony is an extraordinarily smart guy – he’s an IBM Master Inventor, for Pete’s sake – so why is IBM wasting such an intelligent resource on writing nonsensical misrepresentation disguised as a factual piece?

Could you imagine IBM’s results if they put such a talented person to better use? Perhaps they wouldn’t be stuck with an 11.4% and falling share of the data storage market versus EMC’s solid 29% (according to IDC, June 2012).

I guess the final movie I’m reminded of is The Simpsons Movie, with Comic Book Guy’s sarcastic streak – it reminds me of Tony’s rather peppery comments:


  1. Did I need to do this? No, but the XIV team asked me nicely to write about this, pretty please, with sugar on top, so I did.
  2. This is FUD. No argument there. For those who can’t find the FUD sprinkled throughout my post, it is the list of factual disappointments in the VFCache announcement, including, but not limited to, (a) that it only works on select server models and operating systems, (b) that it only works with FCP protocol, (c) that customers are limited to only one card per server, and (d) that EMC does not recommend anything other than ephemeral data to be placed on the card in split-card DAS mode, to name a few. I agree that sometimes FUD is difficult to find for some readers, but in this post I consolidated the FUD into an easy to read table, in the first column, highlighted in bright yellow.

    Tony, I’d be more concerned about the factual disappointments in your own post.


Best video I could find, sorry.

Anyway, best regards, I hope it helped to clarify things.

Aus Storage Guy!

P.S. Anyone else having flashbacks to the ’90s – Netscape, AOL et al. – with the animated GIFs? :)


About ausstorageguy

I am an IT enterprise storage specialist based in Australia, with multi-vendor training and a knack for getting to the truth behind storage. I specialise in, and have previously worked for, EMC; however, I am also highly skilled and trained in:

  • NetApp FAS
  • NetApp E-Series (formerly LSI Engenio)
  • Dell Compellent and EqualLogic
  • Hitachi USP, VSP and AMS
  • HP EVA, LeftHand and 3PAR (and HDS OEMs)
  • LSI Engenio OEMs (IBM DS and Sun StorageTek)
  • TMS RamSan
  • IBM XIV

As you can imagine, that's a lot of training... Thankfully, as a specialist in storage, I don't have to think about much else. I try (very hard) to leave any personal/professional attachment to any given product at the door, and I have zero tolerance for FUD (fear, uncertainty, doubt). So I beg all vendors and commentators: leave the FUD, check the facts, and let's just be real about storage. There may be some competitive analysis done on this blog, but I assure you I will have checked, re-checked and checked again the information I present. However, should I get it wrong – and admitting that is, above all else, a much greater talent – I will correct it as quickly as possible.


6 thoughts on “EMC VFCache versus IBM XIV Gen3 SSD Caching – Setting Tony Straight”

  1. Definitely, your statement “I try (very hard) to leave any personal/professional attachment to any given product at the door” does not hold true, since instead of giving unbiased pros and cons for each product/technology you have shown favouritism to one vendor and its product.
    Further, irrespective of vendor, I would like to make a point: is SSD on a SCSI bus outside the machine really that much slower than SSD inside the physical machine? If yes, by how much? What is the ratio of all the latencies of the different SAN components (as you have mentioned) to the latency of the SSD itself? Please let us know, with your vast IT experience, what the difference in response time would be with the SSD inside the machine versus outside in the SAN. Is the increase in milliseconds, microseconds or nanoseconds? Please specify the % delay due to SAN storage over the total response time.
    Your explanation using the worker, her desk, local cabinet and lift is awesome. However, is it really true? Do the CPU, its caches (L1–L3), memory (RAM) and SSD all operate at the same level? Or is it, as per your graphics, that the worker getting data from a local SSD is on the 600th floor while getting data from a SAN-based SSD is on the 630th floor?
    I am looking for your valuable input.

    Posted by SamD | April 13, 2013, 20:36
    • Hi SamD,

      Thanks for your comment and feedback.

      When I say “I try”, I really try, but it’s not always easy to leave the personal side when the response – from Tony in this case – is sarcastic in nature; it becomes personal when someone uses a tone intended to make one seem idiotic for disagreeing with their view.

      As for the professional side: in terms of bias, I don’t believe I showed any particular bias, but please accept my apologies if I did. In this post, I felt it necessary to clear up the confusion that I believe Tony created by comparing the two solutions as if they were the same. Clearly they are not the same!

      If these solutions really were the same, why would IBM waste time and effort offering the two different types of solutions – Fusion-io/TMS PCIe cards and drive-based SSD cache – as part of their portfolio? Because the solutions, and the problems they solve, are different!

      Even how they are implemented and approached is different.

      As for the % delay or ratios: that would be very difficult to determine without knowing the variables, but just to give you an example, if a host and array were separated by 100m (say 50m of cable on each side of the switch), you’d be looking at roughly 1,000 ns – about 1 µs – of round-trip propagation latency per transaction in the cables alone; then add the latency from the host, drivers and so on, the switch, the array and anything else that happens to sit between app and data.

      Conversely, with the PCIe host-localised cache, you’d instantly remove at least that 1 µs (1,000 ns) per transaction – before even counting the switch, array and driver latencies also avoided – which, across the billions of I/Os of a high-transaction environment, can add up to serious time saved per year, or many millions of dollars in financial transactions!
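      As a sanity check on the cable figure (assuming roughly 5 ns per metre of propagation delay in optical fibre, about two-thirds of the speed of light):

```python
# Back-of-envelope propagation delay in optical fibre: ~5 ns per metre.
NS_PER_METRE = 5

def cable_round_trip_ns(one_way_metres):
    """Round-trip propagation delay for a given host-to-array separation."""
    return 2 * one_way_metres * NS_PER_METRE

print(cable_round_trip_ns(100))  # 1000 ns, i.e. about 1 microsecond
```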

      Now, that’s just one example environment – one I faced recently – but an example nonetheless; I’m sure you’ll be able to formulate your own.

      But there are other differences: SLC (VFCache) will typically outperform MLC SSD (XIV), and native PCIe (VFCache) will outperform SATA/SAS (XIV).

      This, for me, isn’t about bias, but about the right tool for the job. I believe Tony’s intention is to confuse and misdirect people towards POSSIBLY the wrong solution when they are looking for HOST-BASED cache. (HIGHLIGHTS, NOT SHOUTS.)

      But allow me to ask you this: would you go back to Tony and question how he brings himself to write disingenuous posts, or whether he truly believes they’re genuine? If not, then I’d suggest you seek other sources.

      I can speak as someone who has real experience implementing the VNX, VFCache and XIV and from that experience I can spell out the differences clearly and truthfully.

      I hope that you noticed in my post that there was no statement from me to claim one system/solution was better than the other because they each suit different business needs. Rather, that they are different solutions to different problems.

      I only wish Tony would understand and do the same rather than resort to misdirection and then sarcasm when called out.

      The conundrum is, where do your loyalties lie and are they without question? Do you question me to seek a better answer? Do you write to only confirm your own unwavering belief and forsake any possibility that you may have been misled or are you really willing to test yourself to original thought and find the answers for yourself? And see if I’m right or wrong?

      Don’t take me at my word; do your own research and find out for yourself.

      As for “vast experience”, that phrase is the reserve of beginners, mostly used on CVs to hide a lack of experience; those with real experience know they are only just beginning to scratch the surface, but have developed a thirst to learn from their peers and mentors.

      And this is where I leave you: if my post is incorrect in any way, I invite you to correct me – privately or publicly – and I will make amends.

      As a closing thought, I wonder what Tony would think of XIV if EMC had bought XIV instead and followed the same path? I somehow doubt he’d be brimming with the same praise…

      What are your thoughts?

      With deepest wishes

      Aus storage guy.

      Posted by ausstorageguy | April 14, 2013, 07:35
  2. This is just so – GREAT!
    I love all the effort and work you put into this one.
    Not just the comparison between VFCache and XIV Gen3 SSD cache, but also the great need to kill all the FUD out there. All the disparaging and irrelevant rumour-spreading about competitors is so annoying, and bad for the IT business as a whole.

    Love your work – and will support all kinds of posts like this, no matter which vendor or supplier!!!


    Posted by Johan Robinson | November 13, 2012, 08:54
    • Hi Johan,

      I really appreciate the feed-back, thank you.

      I feel it’s very important to put an end to FUD; if vendors really believe their product is the best, let the product speak for itself.
      There is no value in speaking against a competitor’s products, especially when you’re wrong about them!

      Look forward to saying hej, next time I’m in Stockholm!


      Aus Storage Guy

      Posted by ausstorageguy | November 13, 2012, 09:24
  3. This is awesome

    Posted by Mike | August 18, 2012, 00:53
