SAP HANA: an analysis of the major hardware vendors

When I first wrote this blog I didn’t realise a few things: first, that it would become the reference location on the internet for HANA material, and second, that it would therefore require updating!

So here we are, some 9 months after the initial post, with a full revision of this information. The hardware vendor details are based on the publicly certified systems on SAP’s Product Availability Matrix, which SAP updates regularly with new appliances and vendors.

SAP released the first version of their in-memory platform, SAP HANA 1.0 SP02, to the market on June 21st 2011. We are now at the release of SAP HANA 1.0 SP04 in May 2012 and things have moved on hugely. We now have High Availability, system monitoring and scale-out appliances: up to 16TB certified and 100TB in the lab.

SAP now has an open hardware platform, allowing multiple hardware vendors (currently 7), so that customers can choose who they want to procure from. This should theoretically produce a level playing field where the price becomes commoditised and customers get choice and value.

This article gets into the detail of what it looks like if you actually want to purchase an appliance, and it’s based on my experience of working with the hardware vendors over the last 18 months.

Note that my high-level message is pretty clear: SAP HANA hardware is ready for the masses and stable for databases up to 16TB of HANA (equivalent to roughly 80TB of compressed Oracle data).

What is the SAP HANA Technical Architecture?

SAP HANA is pretty simple. It’s based on the following components:

  • Server – based on Intel’s Nehalem EX or Westmere EX platforms – X7560 or E7-2870, 4870 or 8870 CPUs, respectively. These are big rack-mount systems that take up to 8 CPUs and 80 cores. It’s commodity hardware that you can buy off the web, but for example the Dell PowerEdge R910 with 1TB RAM is $65k list price on their website. I’ve now removed all the Nehalem hardware from this post because it’s no longer sold.
  • RAM – lots of it, and matched to the CPUs. 20 cores allow 256GB RAM, leading to a maximum of 1TB of RAM with current CPUs. Think along the lines of $35k list price for 1TB RAM.
  • Fast Log Storage – Sized to 1x RAM and usually the very expensive Fusion-io ioDrive Duo. These are $15-30k a pop for the 320GB and 640GB drives, respectively. In some configurations, the log and data storage are shared. Fusion-io ioDrive2 is now released, though I have yet to see certified hardware using it. It is half the price of the ioDrive for the same capacity and much faster too.
  • Data Storage – 4x RAM. On all the certified single-node configurations this is cheap SAS direct storage. You need this so you can power down the appliance and do things like backups. Budget $15-20k for a 1TB storage system. For multi-node configurations it uses some form of shared storage – either a SAN or local storage replicated using IBM’s GPFS filesystem. Prices vary for scale-out.

So theoretically at least, you should be looking at $145-150k for a basic 1TB appliance, based on Dell’s website list prices. Note that this is hardware only – all of the SAP HANA hardware partners offer a pre-built system with installation services, and most will require a support contract. It can add up!
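
If you want to sanity-check that number, here is a minimal sketch in Python of the arithmetic above. The per-component prices are the rough list-price figures quoted in this post, not vendor quotes, and scaling them linearly with RAM size is a simplification:

```python
# Rough single-node HANA appliance sizing, using the rules of thumb above.
# All prices are illustrative list-price assumptions, not vendor quotes.

def size_hana_appliance(ram_tb: float) -> dict:
    """Estimate storage sizing and a ballpark hardware cost for one node."""
    log_storage_tb = 1 * ram_tb    # fast log storage sized at 1x RAM
    data_storage_tb = 4 * ram_tb   # data storage sized at 4x RAM

    cost = (
        65_000 * ram_tb    # server, e.g. a Dell R910-class box
        + 35_000 * ram_tb  # RAM
        + 27_500 * ram_tb  # Fusion-io log storage (the $15-30k range)
        + 17_500 * ram_tb  # SAS data storage (the $15-20k range)
    )
    return {
        "log_storage_tb": log_storage_tb,
        "data_storage_tb": data_storage_tb,
        "ballpark_hardware_cost_usd": cost,
    }

print(size_hana_appliance(1.0))  # roughly $145k for a 1TB node, hardware only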

The other big difference since I first wrote this blog is that we now get scale-out appliances from Cisco, Fujitsu, HP and IBM from 1TB to 16TB. And in the lab, SAP have a 200-node 100TB test system which means about 1PB of uncompressed data. Things have moved on!

In addition, SAP have invested in a company called Violin, which uses Infiniband and SSD storage. This would be an awesome way to get compact scale-out HANA appliances once Intel’s Ivy Bridge server platform, which enables 1TB HANA blades, arrives.

IBM – remains the safe choice

There’s an adage: “no-one ever got fired for buying IBM”. I’m sure someone has been, but it’s good marketing. IBM sold by far the most of SAP’s last-generation in-memory appliance, BIA/BWA, and they currently have the greatest choice of SAP HANA appliances.

I have a total of 9 appliances on my list, from 128GB to 16TB, with various configurations depending on customer requirements. At the time of writing they are also the only vendor to scale out to 16TB – at least for now, IBM remain ahead of the game.

My colleagues also tell me that IBM promise to make any hardware configuration available in as little as 5 days. Certainly they provided a 200-node test system for SAP in less than 2 weeks!

That said – IBM isn’t the cheapest of the vendors and there are some hidden costs like the licence cost of their high-performance GPFS filesystem. But as one CIO told me “We priced up SAP HANA appliances and the vendors seemed very varied in price. But as we got close to negotiations, the variance evaporated”.

HP – Solid Hardware, availability concerns?

Since Meg Whitman took the helm at HP, things seem to have gotten better (though they could hardly have gotten worse!) and they have consolidated their hardware.

HP now have 5 certified appliances from 128GB to 1TB; they also have a scale-out appliance with up to 8TB. The amount of disk hardware required is concerning – 12 disks per node or 192 disks for a 16-node 8TB system.

The concerns I have heard about HP are that they are now very strict on loan hardware and have extremely long lead times. With Hitachi and IBM being aggressive on delivery schedules, this could put them at a disadvantage.

Dell – Services Provider or Channel Partner?

Dell now have 3 certified single-node appliances from 128GB to 512GB. I heard rumours that they have lost interest in SAP HANA and their services page says “ERP in the Cloud”. Certainly I tried to buy a SAP HANA appliance from them and summarily failed. They said:

With regards to SAP HANA, after a significant amount of research throughout Dell, I must advise that we are not in a position to supply these solutions at this time. While we at Dell strive to offer complete solutions to our customers, we will only do so when we have the capability to do so effectively.

In any case, I haven’t seen Dell at any customers. Has anyone seen a Dell SAP HANA appliance in the wild? Let me know!

Fujitsu – Dark Horse

I’ve not had too many dealings with Fujitsu, but they have been quick to respond and appear to know what they are doing from a sales-enablement perspective. They have the same 5 appliances as HP, with the same 128GB to 1TB appliance sizes.

They also have the same scale-out as HP with the same enormous amount of disks, up to 192 for a 16-node system using either NetApp or Eternus disk fabric.

Cisco – IKEA of HANA appliances?

Cisco have expanded their portfolio and I hear their UCS server business, based on the C260 and C460 servers, is doing well in the enterprise. They now have 128, 256 and 512GB appliances as well as a 16-node 512GB (8TB) scale-out appliance like HP and IBM.

Their appliance requires even more disks! Up to 300 for the 16-node system. Wow!

Cisco guarantee fast delivery, but your HANA appliance will arrive as a bag of parts that needs assembling by a Cisco Services Partner – and then shipping to you.

Hitachi Data Systems

Hitachi used to call their appliance the Blade System 2000 or BS2000 for short. Thankfully they had the common sense to rename it the Compute Blade 2000 and it is available as a blade chassis from 256GB to 1TB, using their AMS2000 shared storage.

Theoretically this should allow them to build out a neat scale-out solution using their HDS storage arrays and 2TB per blade chassis but this has not been released yet.

One thing that is worth noting with Hitachi is that they have SAP HANA hardware on the shelf in standard configurations and have a promised ship time of 2-3 weeks.

NEC – New kid in town

NEC have arrived with a single appliance – a 1TB system using Virident SSD instead of Fusion-io. It is bigger than all the other vendors’ at a massive 7U, and can accept 2TB of RAM (which has no value for HANA). I’m guessing NEC have plans to certify more hardware, but I have not seen one in the wild.

Conclusions

As I predicted, the hardware market has increased in volume and begun to consolidate, and other vendors have indeed come on board. This will continue through 2012.

Scale-out: Scale-out is now a reality and there are systems running IBM’s X5 platform up to 200x512GB nodes or 100TB. The concern I have is that without IBM’s proprietary GPFS technology, a lot of shared storage is required to make HANA work. Can HP and others prove large scale-out capability?

Blades: Let’s face it – SAP HANA was meant to run on blades. But there’s no suitable blade platform yet, because you can’t get 8 CPUs and 2TB RAM in a single blade. Plus, you are even more limited from an expansion (i.e. Fusion-io cards) and network bandwidth perspective if you use a blade chassis. It now looks like, by the time Intel’s Ivy Bridge platform arrives in late 2012, the hardware vendors will have designed high-density systems to run SAP HANA.

But to conclude, the SAP HANA hardware business has come a long way in the last 9 months. If it continues to scale at this rate, Teradata had better be concerned.


40 Responses to SAP HANA: an analysis of the major hardware vendors

  1. Patrick Bolin says:

    You missed Hitachi Data Systems – newly certified AND runs on blade servers.

    • John Appleby says:

      Didn’t so much miss it as it wasn’t available at the time of writing. You say that HDS runs on blades, which is true, but they are babies – 256GB blades, from what I understand. It’s not clear to me how Hitachi can scale to above 1TB in a single configuration, on that basis, which makes it too small for most of the customers I am talking to. What are your thoughts?

      • Meric says:

        HDS uses Symmetric Multi Processing to physically connect the QPIs of up to 4 blades together via a front panel connector, aggregating the resources of up to 4 blades into a single instance. Each of these blades is rated to support 384GB, for a maximum total of 1.5TB in a single instance. In the case of the Large HANA Appliance, HDS uses 256GB of memory per blade, as well as a pair of E7-8870s on each blade, providing a total of 80 cores and 1TB of memory in a 4-blade SMP. This consumes half of a 10U chassis. HDS is also the only vendor to offer a seamless upgrade path from S to M to L HANA appliances: scale up from 1 blade to two (add a 2-blade SMP connector), then to four (remove the 2-blade connector, add a 4-blade connector) – all without losing your initial investment or having to reload the OS or re-install the software stack.

      • John Appleby says:

        I get that, but of course you can’t use more than 256GB per blade in HANA because you need 20 cores per 256GB RAM, so 1TB max and not 1.5TB.

        What happens if you need more than 1TB? Also, how do you scale the log volume? Presumably whilst you don’t have to wipe the OS, you do have to move the logs off, reformat the log volume and copy the logs back.

  2. moneyxmoney says:

    What is the recommended hardware for test and development purposes? Who is the cheapest vendor for such hardware?

    • John Appleby says:

      Great question. You still need certified hardware, at least today. You should buy this from whoever is supplying your production equipment. Most of the vendors have a 128GB appliance.

      In most cases I’m involved with, the cost of T&D equipment isn’t really relevant because most organisations sit in one of three camps:

      1) The technical people want to do a POC. Great, if you have discretionary budget to buy 1 HANA unit from SAP and a box to run it on.
      2) There is a clear cut business case or value case and you should go straight ahead. Especially true for e.g. COPA RDS. Organisations are buying this on value alone.
      3) You have a potentially clear cut business case that requires a POC, proof points and has exit criteria – in this case a hardware vendor will typically loan hardware. Mostly this only works for large organisations that are doing something slightly unusual.

      To answer your question directly – all the hardware vendors are roughly the same cost. Cheapest systems I’m seeing come in at about $80k.

  3. Eric says:

    There were a few “misses” in this blog. In particular, the obvious underestimation of Cisco as a Server player, HANA or otherwise.

    • John Appleby says:

      Why do you think that Cisco is a serious server player? There’s IBM (36% market share), HP (30%), Dell (13%), Oracle (5.5%), Fujitsu (4%). It’s true that Cisco has some decent market share in blade servers (10% or thereabouts, which equates to about 2% of overall server market), but most HANA systems being sold today aren’t blades, for two reasons.

      First, there aren’t blades that can take the memory and CPU density that HANA requires for the high-end systems – density and power problems are getting in the way. Second, blades require scale-out and manufacturers are having problems in the real world with I/O bandwidth to shared storage.

      No doubt with another technology generation these problems will be architected out, but I don’t think that makes Cisco such a serious player today.

      • Jonathan says:

        What exactly is the market share distribution above? Is it for HW platforms of in-memory databases?
        Also, can you say a few words on absolute $ numbers?

      • John Appleby says:

        I can’t provide numbers on market share and I’m not sure who can. SAP don’t record these numbers on a per-vendor basis, but rather per platform. Since all HANA appliances run on the Intel x86 platform with SUSE Linux, I’m not sure how it could be counted.

      • C. says:

        I’ve heard from local sales teams that IBM has more like 70%+ market share of HANA platforms. That’s huge. Where do market share numbers come from – how can those be validated? Because if 70% is true there must be a good reason.

      • John Appleby says:

        I’ve heard rumours, but just that. SAP confirmed “the majority” i.e. >50% when they spoke at an analyst event last week.

  4. Eric says:

    That is okay, John. I rather like the role of underdog.

    I think it is important to look not only at the market-share numbers as a specific point-in-time, but also the trajectory. It’s also important to look at the big picture going from zero customers to over 10,000 in a bit less than 3 years. I encourage this because looking at it any other way is really looking at it through the eyes of “marketing.” It’s also important to compare apples-to-apples. When you look at the overall market-share, you’re including products that not all of the players make — Dell and Cisco, for example, do not make large RISC-based servers — and the trend (remember — trajectory is just as important) has been to move from large, expensive, RISC-style compute to commodity x86-64 based compute.

    That last statement is especially important when it comes to HANA. Today, the public-facing statement from SAP is that HANA is, and will only ever be, an Intel-architecture appliance. Without prejudice, Oracle recognized this a long time ago as well. Scale-out, on industry standard, open technology can handle the vast majority of workloads required today. You are absolutely correct about some of the problems of Scale-Out HANA, but those problems aren’t too difficult to solve. The real challenge for HANA will be the 3rd-party developers who choose HANA as a real-time back-end for business analysis and, I personally think, mobility solutions.

    The earlier point about Cisco, and more specifically market-share as a point-in-time data-point, vs a trajectory point of view — if we were merely a niche player, or we “didn’t get it”, whatever “it” might be — the trajectory that we’re on couldn’t happen. If you look at other solutions, it’s not hard to draw the conclusion that blades (and the points you make about power, cooling, and equally important – management) from other vendors were “fixed by patch”, meaning — as time progressed, blade chassis, power, networking were all bolted together using old-school technology — just shrunk down to fit into a smaller package. Cisco re-thought the entire stack and I refer to this as “fixed by Design.” As customers learn more about Cisco UCS, that trajectory that we have been on will continue.

    • John Appleby says:

      Lots of interesting points.

      I think many people are interested in what’s available now rather than what the overall market trajectory is, because people want to buy today. In today’s terms, it strikes me that IBM have the simplest and strongest hardware proposition for HANA, partially because of their experience with BWA and with SAP Services, and partially because their GPFS filesystem resolves a bunch of real-world problems for customers that want HA, DR and reliable operations. In the UK, IBM are great because they have a knowledgeable reseller, Centiq. And remember, no one ever got fired for buying IBM! HP and Fujitsu are not too far behind, depending on the region.

      In terms of trajectory, it’s much trickier. Partially because there are macro-trends like HP’s current trend towards implosion. Plus a volatile market that can be deeply affected by things like volcanoes and tsunamis in the Far East. And partially for HANA because most customers buy a single hardware vendor every 3-5 years and rarely move. I don’t have a single SAP customer that runs Cisco hardware and I deal with many of the Fortune 500. Given Cisco’s market penetration, there must be some large customers using the kit; I just haven’t encountered them.

      HANA has been built to be x64-only and it is highly optimised. I don’t see that changing in the short-medium term, although long-term CPU trends mean SAP can’t bind itself to Intel. This means it is a theoretically level playing field. In reality it’s not though, and I’m seeing 2-3x performance differences between different HANA systems from different vendors with the same baseline specifications.

      Cisco currently still only have one hardware configuration certified: the UCS C460 M2 with 512GB RAM. There are also the B-Series blades, which should be able to support 512GB nodes; it will be interesting to see who Cisco decides to partner with on scale-out. Will it be NetApp, like HP and Fujitsu?

      It strikes me that IBM and HP are both a generation of hardware away from HANA blades, but that’s not a long time in software terms and they can easily catch up to what Cisco have as a possibly slightly superior architecture. Do you really think it is so much beyond what IBM have, particularly considering their massive R&D budget?

      Either way, it’s interesting times!

    • Clock$peedy says:

      Hi Cisco Eric,

      I am not aware that Cisco even has an SAP certified HANA appliance; my latest SAP info says you don’t. And no, Cisco’s recently announced “UCS Bridge To SAP HANA” does NOT count.

      http://tinyurl.com/d4phhz3

      I like this part best:

      “…a Cisco Bridge to HANA appliance can be transformed into a HANA scaleout appliance, preserving up to 75 percent of the customer’s investment.”

      ROF, LOL!! I have never seen a more creative marketeering effort!

      Step 1: Package up a bunch of storage and network hardware with blades that won’t (ever) work with HANA.
      Step 2: bundle in pre-approved RMA to return the blades when HANA capable UCS blades someday ARE available.
      Step 3: Call it a “Bridge to HANA”, sell it NOW and promise the customer that at least they wont lose ALL of their investment.

      Never mind the fact that in the meantime, the Cisco customer ALSO purchased “up to” a couple million dollars or so of high-performance SAN gear that they won’t need anymore, since now HANA is doing all its IO work in DRAM. I wonder if Cisco’s “preserving up to 75%” calculation includes writing off up to, let’s say, 50,000 – 100,000 storage IOPS that won’t ever be needed again?

      Thanks Cisco marketeering dudes! The old jokes about con-job bridge salesmen used to refer to the Brooklyn Bridge, now you’ve given us a new “golden gate” twist!

      • John Appleby says:

        There is only one location for certified SAP HANA hardware and unfortunately it is not a publicly accessible URL:

        https://service.sap.com/pam

        In the HANA guide there, Cisco have 3 certified single node appliances – 128GB, 256GB and 512GB based on the UCS C260 and C460 blades. In addition they have 2 scale-out appliances – one with an EMC filer and the other with a NetApp filer.

        You have also totally missed the point of the Bridge to HANA. It is designed for those organisations who want to invest in converged infrastructure today, but want to run BWA rather than HANA in the short term. For those people it will keep the amount of wasted equipment in the move to a minimum.

        I’m not sure in reality how big that market is, but it is a relevant value proposition.

        You also seem to think that SAP HANA doesn’t require disks – which explains a lot of your other responses. The SAN hardware is indeed required for HANA and will be reused in the Bridge to HANA scenario. Take a look at my overview here:

        http://www.bluefinsolutions.com/insights/blog/the_sap_hana_hardware_faq/

      • Clock$peedy says:

        Hi John,

        Thanks for correcting me, I just updated my copy of the PAM and I see the Cisco gear on pages 3 and 5.

        Re: “seem to think that SAP HANA doesn’t require disks…”.

        To be clear, what I’ve said is that SAP HANA (as with all IMDB technology) doesn’t need to do lots of random disk IOPS. The purpose of IMDB is to stop doing slow disk IOPS by keeping data in memory. With IMDB, disk IO workloads become almost purely sequential, hence we don’t need huge numbers of fast-spinning disks to deliver the random IOPS that we once did.

        This is what Gene Amdahl meant when he said “the best IO is the one you don’t have to do” and what Jim Gray meant when he said “Tape is dead, disk is tape, flash is disk, but RAM locality is King”.

  5. Michael says:

    Great Article,

    Do you know of any benchmarks that are available to highlight the pros and cons in the selection process?

    • John Appleby says:

      I’m afraid I do not. There is no official HANA benchmark but I have used TPC-H to compare different vendors. I can say that IBM is, for some reason, faster than other vendors. I hear they found some hidden tuning that the other vendors have not applied…

      • Clock$peedy says:

        Hi John,

        Yes, IBM has some special sauce, but I’m not sure it’s all that hidden, not from customers anyway. Your IBM rep should be happy to arrange a disclosure for you around the IBM exclusive features of their E7 implementation, including some massive investments made in custom silicon. One slide you’ll see shows IBM has actually beaten Intel’s own silicon in inter-socket memory latency in larger configurations. R&D for this work is done in Rochester, Minnesota, where all of IBM’s Power systems R&D is also done.

        I do not think any other vendor but IBM has done custom silicon for E7 – has anyone here heard otherwise?

  6. Nirav Shah says:

    Thanks for sharing valuable information and insight into the HANA hardware vendors!
    As John mentioned, Cisco has limited presence with SAP customers when compared to IBM/HP.
    However, I have heard Cisco is partnering with large IT (SAP) service provider companies to set up server farms for SAP cloud services.
    Generally, SAP HANA will be used only by large enterprises (at least in the initial days).
    If Cisco wants to penetrate these accounts (assuming the majority of these accounts will be using IBM/HP/Dell etc.), Cisco will have to come out with better technology soon.

    • John Appleby says:

      I have seen some penetration of their blade chassis in large enterprises now, and I suspect they plan to sell the leverage of consolidation to architects.

      They are however well behind the game in scale-out appliances, which are now becoming essential as HANA goes mainstream.

  7. Ah Beng says:

    Interesting that no one has mentioned the reliability of the RAM… with so many TB of RAM, a memory failure will have a great impact on the system. Is mirrored RAM supported? Is Intel’s SDDC/DDDC enabled? What about memory hotplug? Surely not all vendors support all these features, right?

    • Clock$peedy says:

      @Ah Beng;

      Yes, you are exactly right and this is an aspect that almost everyone seems to overlook!

      The probability of a widget failing increases with the number of widgets you have in the system. With so many little bits of DRAM silicon making up these huge DRAM subsystems, reliability in terms of MTBF – and, even worse, MTTDL – degrades rapidly.

      SAP and Intel have worked this out though. The Intel E7 (Westmere EX) platform has all the RAS capabilities needed to make Big Memory reliable, but as far as I know only IBM supports the entire E7 RAS feature set, including DDDC, while also implementing PFA (Predictive Failure Alerts) in every major subsystem.

      In case you were wondering, this is why SAP is only certified to run on Nehalem EX and Westmere EX.

      • John Appleby says:

        Sorry this took a while to reply to – there was an NDA in place around the EP platform. Since it has been released, I can reply.

        There are 20 available RAS features, of which the EX platform has all 20 and the EP platform has 15. These mostly relate to the self-healing functions like MCA Recovery.

        Specifically PFA and Chipkill are also available on EP. So for my money this is to some extent Intel marketing BS – claiming that only the EX is enterprise ready. Having a RAM failure on an application is just as bad…

      • Clock$peedy says:

        Hi John,

        On May 19, your comment was very much incorrect when you said: “There are 20 available RAS features…and the (Sandy-Bridge) EP platform has 15.”

        I am looking at the May 2012 copy of the NDA deck I received directly from Intel as I write this. There are 42 RAS features identified, including 16 for the memory subsystem alone. Of the 16 memory-specific RAS features, Sandy-Bridge EP supports only 5. (There is “partial” support listed for another three of the 16 features; those, however, are meaningless to SAP.)

        Moreover, the one CRITICAL feature that SAP requires to recover a corrupted in-memory database is hardware based MCAR (Machine Check Architecture Recovery) that you refer to as “MCA”.

        Sandy-Bridge doesn’t have this, and without MCAR + memory RAS the whole idea of an in-memory database falls apart.

        The (E7 only) MCAR feature in conjunction with (E7 only) memory RAS features works like this:

        http://www.youtube.com/watch?v=BDLn5oGBPok (video courtesy of SAP)

        You also said “…this is to some extent Intel marketing BS – claiming that only the EX is enterprise ready. Having a RAM failure on an application is just as bad”

        No! RAM failures cause disk-based database servers to crash all the time, WITHOUT corrupting customer data, because that data is safe on disk, due to universally ACID compliant designs. IMDB can’t rely on ACID though, and it absolutely WILL corrupt a customer’s data unless the memory subsystem RAS features AND recovery mechanisms are in place to prevent it.

        Like any other disruptive technology, IMDB will suffer from unrestrained enthusiasm typical of any ‘hype-cycle’, and the misinformation that always ensues. Please be careful of that, especially when tossing around accusations of “marketing BS”.

        Thanks!

      • John Appleby says:

        I think in all of this you’re missing one really important point. HANA is ACID compliant and it uses disk storage for persistence in the same way as any other RDBMS.

      • Clock$peedy says:

        John,

        Sorry… I should know by now that whenever I say “A.C.I.D.”, folks automatically think I am referring only to Durability. Of course, all IMDBs log transactions to spinning disk or Flash, and that covers the ‘durability’ aspect, but that’s all it covers.

        Remember there is the ‘A’, the ‘C’ and the ‘I’ in ACID as well. Consider for a moment the complexities of maintaining Atomicity, Consistency and Isolation of transactions when (in the case of IMDB) there are now TWO databases to worry about: the one that is on disk, and the one that is cached in memory.

        After stripping away all the IMDB ‘enhancements’ in HANA, SolidDB, TimesTen etc., ultimately IMDB is about caching in DRAM the database that ‘lives’ on physical disk. Now consider the problem of “cache coherency” in the context of a potentially unreliable memory subsystem. Imagine all the problems that could arise in Atomicity, Consistency and Isolation (leaving durability aside for now) if the memory subsystem is unreliable. Think “split brain” and you will have a sense of the potential risks. Maintaining TWO copies of the database and guaranteeing coherency among them is the primary challenge of IMDB, and this is the reason why no enterprise IMDB vendor will certify anything less than Intel E7. The HANA PAM is the first example to look at.

        Memory subsystem RAS features of Intel’s E7 family are utterly necessary for SAP HANA and all other IMDBs. Without them it’s like playing Russian roulette with all the chambers loaded.

        Best regards

  8. dbmoore says:

    John – Great blog!

    1. Can all the memory in the machine be used for HANA?
    2. Does SAP offer any sizing guidelines for HANA?
    3. Do you get any significant compression when using HANA’s row storage?
    4. What are customers seeing in the field for realistic compression when using HANA’s column storage?
    5. Are we all still staying mum on the software pricing of HANA?

    THANKS!!!

    • John Appleby says:

      Good questions as always mate!

      1. Yes, it can (and must) all be used for HANA. Though you need space for calculations – usually reckoned to be 50% of the RAM. So only 50% can be used for the database.

      2. Yes absolutely, in the usual place: https://service.sap.com/quicksizer – you need to be a SAP customer. Also check SAP Note 1637145 for a BW on HANA sizing calculator.

      3. No, and that is intentional, because the row store is intended for transient data, e.g. queues. You still use the column store for any permanent data store.

      4. Varies massively from 3:1 to 20:1 with the average at about 5:1 when compared to an equivalent MSSQL/DB2/Oracle compressed table.

      5. There are some details on Steve Lucas’ blog here: https://www.experiencesaphana.com/community/blogs/blog/2012/04/30/what-oracle-wont-tell-you-about-sap-hana – €40k for HANA Edge (64GB) and as low as €13k per 64GB unit for BW on HANA. The full price list isn’t published as far as I know!
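
      As a rough back-of-the-envelope sketch combining points 1 and 4 above (this is just my rule of thumb, not an official SAP sizing method):

      ```python
      def rough_hana_ram_gb(source_db_gb: float,
                            compression_ratio: float = 5.0,
                            usable_fraction: float = 0.5) -> float:
          """Very rough HANA RAM estimate: compress the source data, then
          double it, since only ~50% of the RAM is usable for the database."""
          compressed_gb = source_db_gb / compression_ratio
          return compressed_gb / usable_fraction

      # e.g. a 2.5TB compressed MSSQL/DB2/Oracle database -> roughly a 1TB appliance
      print(rough_hana_ram_gb(2500))  # ~1000 GB
      ```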

  9. Joseph Banegas says:

    Hello, great article. Do you know if it’s possible to purchase a ‘personal’ HANA system, or lease one from hardware partners for personal development purposes?

    thank you,

    Joseph Banegas
    SAP BI HANA Solutions Architect

  10. Hi Jon,

    Really a very good article, which I will use, with your permission, in sales calls as well. Of course I am very biased and whatever is written by me is my sole opinion and does not reflect my employer’s. That said, I believe there are several points which are very important to consider when you choose your infrastructure vendor as you move to HANA. You have mentioned them in your article, but maybe it would be worthwhile to talk in more detail about the growth path (scale up and scale out) the vendors can provide, in conjunction of course with HA solutions. DR seems to be a bit more complex (at least if you want a synchronous copy) and we will see what the required bandwidth will be. I have found in all my sales calls that customers are very eager to know what happens to their existing investment in HANA when they increment their license, which is the most likely thing to happen. In case you would like to write another article on this and need additional material, I am more than happy to help.

  11. Eric S says:

    So if every aspect was completely proprietary, and you just wanted the fastest, unfettered performance — without regard for resiliency — which platform would you choose?

    (BTW, the latest update wasn’t Cisco blades — but Cisco rack-mounts. I suspect the original author will figure this out and correct as appropriate. It really is important because most aren’t certifying with SAP with blades. WE are.)

    • John Appleby says:

      IBM has the fastest hardware – but you knew that already since you work for Cisco 🙂

      I don’t think that Blade or not-Blade is interesting on its own – it’s about a platform and relative cost/consolidation. I wrote about it on my corporate blog:

      http://www.bluefinsolutions.com/insights/blog/the_sap_hana_hardware_faq/

      And if you are invested in Cisco infrastructure then the UCS platform is very interesting to run HANA.

      • Eric S says:

        You’re right, it’s not very interesting on its own. If you look at virtually every benchmark, the difference between Vendor X and Vendor Y is almost always a rounding error and has virtually no bearing in terms of real-world compute. They’re certainly good for marketing in the “NASCAR Slide” that all vendors have.

        What makes Cisco’s scale-out solution unique is the performance and manageability for the whole stack. Yes, it does cost more initially if you’re starting with only 3 blades (2 active, 1 standby) but if you’re looking at a larger productive system — it makes more sense. Moreover, if you look at any rack-mount solution, and pair that with various storage arrays (Cisco’s solution requires no more, no less storage than any other vendor), the cabling and complexity are dramatically reduced. Our storage partners (EMC and NetApp) both have proven track-records in terms of data replication. Combine that with the Service Profiles and system configuration that can also be mirrored to a DR facility, and you have a much better RTO.

        You mentioned GPFS from IBM. I’m pretty sure IBM isn’t giving it away for free, which increases the total cost of the appliance. And then, of course, there’s the learning curve to understand and manage that file system. As much as SAP wants HANA to be an appliance, it is only “appliance-like.” The myriad of configurations from the various vendors all work, but each have their own complexity (Cisco|EMC use MPFS, for example. Cisco|NetApp is just pure NFS). You shouldn’t, however, have to keep pace with upgrades to the various system-level things unless SAP publishes an SAP note specifically saying to do so. For example, SLES is currently certified and there have been a number of updates released — SAP doesn’t require (in fact, you might break HANA) you to keep pace with SLES updates, unless they find a specific bug that the update addresses.

        Another reference above mentioned that blades had density, cooling and power issues. I don’t necessarily agree with that point anymore. About 5 years ago, I absolutely would have agreed with you. The blades (B-Series) from Cisco are capable of more than the 512GB that each blade requires (up to 1TB today, more in the near future). SAP requires that you can’t do more than 128GB per CPU socket, even if the node is capable of it. The B440 also has 2 mezz slots, each capable of handling 80Gb of Ethernet (160Gb per blade). Since storage is just the persistence layer, it does need to keep up with writing to the log files and, in the event of failure, be able to quickly populate RAM in a standby server; that amount of bandwidth and low latency actually makes HANA run better. We are certainly not seeing any issues with density, power, or cooling. If other vendors are, I can’t speak to that.

      • John Appleby says:

        Hmm – some useful facts in here and some confusing points. Some clarifications.

        1) Whilst what you say is usually true for benchmarks, this is not the case here. The difference between IBM and Cisco on performance is not a rounding error – go talk to your engineering team but for the scale-out appliance, there is a big gap caused by your storage subsystem and motherboard/BIOS, especially on loading.

        2) That’s really subjective and I think it depends on the organisation and what hardware platform they have bought into, overall. I don’t think that any of the scale-out hardware vendors are using unproven hardware. Certainly the Cisco solution is a good one – and as I have pointed out – the most dense and probably the most power-efficient.

        3) Your cost point isn’t true – I’ve seen quotes from all vendors and they come out very similarly. At SAPPHIRE we learnt that a 16-node IBM system was bought for $640k. What is your price for a 16-node 512GB system?

        4) Whilst it’s an appliance, it needs feeding and watering like any other appliance. If you’re an IBM shop then you probably know GPFS already. Same for monitoring tools, backups, etc. I think that’s just the cost of doing business though I will agree that Cisco has done a good job of the unified blade platform. Those organisations that have already bought into it, will find it a logical step.

        5) As you pointed out, the limiting factor with HANA is that it requires matching 1CPU = 128GB RAM – this is for performance reasons. So the only way you could support 1TB in a B440 would be to have 8 CPUs, which isn’t possible. To get better density you need faster cores or more cores. This is what Ivy Bridge brings. Again as I’ve said, Cisco has the best density of the hardware vendors.

        There is another point which remains unproven, which is how the vendors using SAS-based SAN storage are getting on with larger scale-out appliances. Your version requires 75x300GB disks in each filer = 22TB per filer, which implies you keep a full 8TB replica on each filer – with up to 4 filers for a 16-node 8TB appliance. I’m scratching my head to understand how you scale this to – for example – the 200-node appliance that IBM have proven. I’d love to understand how Cisco plans to do that, because this is one of the nice things that IBM have with GPFS – they can keep partial replicas distributed across nodes.
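
        To spell out the back-of-the-envelope arithmetic behind that last point (just restating the figures quoted above, nothing more):

        ```python
        # Back-of-the-envelope check on the quoted scale-out storage figures.
        disks_per_filer = 75
        disk_size_gb = 300
        filers = 4
        nodes, ram_per_node_gb = 16, 512

        raw_per_filer_tb = disks_per_filer * disk_size_gb / 1000   # ~22.5 TB raw per filer
        raw_total_tb = raw_per_filer_tb * filers                   # ~90 TB of raw disk
        in_memory_tb = nodes * ram_per_node_gb / 1000              # ~8 TB of HANA data

        # Roughly 11x as much raw shared disk as in-memory data for a 16-node system,
        # which is why I wonder how this approach stretches to a 200-node appliance.
        print(raw_per_filer_tb, raw_total_tb, raw_total_tb / in_memory_tb)
        ```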
