
HP D2700 enclosure and SSDs. Will any SSD work?


I've got an HP D2700 enclosure that I'm looking to shove some 2.5" SSDs in. Comparing the prices of HP's SSDs against something like an Intel 710, or even something less 'enterprisey', there's quite a difference.



I know the HP SSDs will obviously work, but I've heard rumours that buying an Intel/Crucial/whatever SATA SSD, bunging it in an HP 2.5" caddy and putting it in a D2700 won't work.



Is there an enclosure / disk compatibility issue I should watch out for here?



On the one hand, they're all just SATA devices, so the enclosure should treat them all the same. On the other, I'm not well-versed enough in the various SSD flavours to know whether there's a good technical reason why one type of drive would work and another wouldn't. I can also imagine that HP are annoying enough to run firmware checks on the disks and have the controller reject those it doesn't like.



For background, the D2700 already has 12x 300GB 10k SAS drives in it, and I was planning on getting 8x 500GB (or thereabouts) SSDs to create another zpool. The whole thing is connected to an HP X1600 running Solaris 11.
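For reference, the sort of layout I had in mind for the SSD pool - just a sketch, with hypothetical device names:

    # eight SSDs as four mirrored pairs in a new pool
    zpool create ssdpool \
        mirror c0t10d0 c0t11d0 \
        mirror c0t12d0 c0t13d0 \
        mirror c0t14d0 c0t15d0 \
        mirror c0t16d0 c0t17d0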










hp ssd sata sas hardware

asked Apr 17 '12 at 12:06 – growse
edited Apr 17 '12 at 19:29 – ewwhite
          5 Answers
































          Well, I use a D2700 for ZFS storage and worked a bit to get LEDs and sesctl features to work on it. I also have SAS MPxIO multipath running well.



          I've done quite a bit of SSD testing on ZFS and with this enclosure.



          Here's the lowdown.




          • The D2700 is a perfectly fine JBOD for ZFS.

          • You will want to have an HP Smart Array controller handy to update the enclosure firmware to the latest revision.

          • LSI controllers are recommended here. I use a pair of LSI 9205-8e for this.

          • I have a pile of HP drive caddies and have tested Intel, OCZ, OWC (sandforce), HP (Sandisk/Pliant), Pliant, STEC and Seagate SAS and SATA SSDs for ZFS use.

          • I would reserve the D2700 for dual-ported 6G disks, assuming you will use multipathing. If not, you're possibly taking a bandwidth hit due to the oversubscription of the SAS link to the host.

          • I tend to leave the SSDs meant for ZIL and L2ARC inside the storage head. Coupled with an LSI 9211-8i, it seems safer.

          • The Intel and Sandforce-based SATA SSDs were fine in the chassis. No temperature probe issues or anything.

          • The HP SAS SSDs (Sandisk/Pliant) require a deep queue that ZFS really can't take advantage of. They are not good pool or cache disks.

          • STEC is great with LSI controllers and ZFS... except for price... They are also incompatible with Smart Array P410 controllers. Weird. I have an open ticket with STEC for that.


          Which controllers are you using? I probably have detailed data for the combination you have.
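          If you do go the multipath route, the Solaris side is fairly painless - a rough sketch, with a hypothetical LUN name (your targets will differ):

              # enable STMS/MPxIO on the supported HBA ports (requires a reboot)
              stmsboot -e

              # after rebooting, each LUN should show two operational paths
              mpathadm list lu
              mpathadm show lu /dev/rdsk/c0t5000C500338A1A2Bd0s2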






          answered Apr 17 '12 at 18:21, edited Jul 7 '13 at 15:07 – ewwhite


























          • The controller, I believe, is a Smart Array P212 (will double-check), which is also potentially on the cards for an upgrade. I'm not using multipathing (at the moment), and I'm conscious of the bus limits of the single SAS cable between the D2700 and the X1600. Would multipathing require another, separate controller, or could I up the bandwidth by upgrading to a single P812 (for example)? I appreciate there's a redundancy argument here as well, but leave that aside for a moment....

            – growse
            Apr 17 '12 at 18:43











          • So you should redesign. The SA P212 is not a good ZFS controller. You'd be better off with an LSI SAS HBA for compatibility and performance reasons. You don't need multipath, but if you have a D2700 unit, it probably has two internal controllers. If so, multipath isn't difficult to achieve. For ZFS, basic SAS controllers are preferred. You will have problems with low-end SSDs and the HP controllers.

            – ewwhite
            Apr 17 '12 at 19:28











          • Interesting - any specific suggestions? Going purely on internal / external connectors (an X1600 has 12 internal SATA bays), it looks like there are a few that might do the trick. I assumed the D2700 does have dual controllers, as there are two ports on the back. It'd be good to chat with you at some point about your experiences with this kit, multipath and Solaris.

            – growse
            Apr 17 '12 at 19:34













          • Yes, lots of suggestions. They may be better suited to Server Fault chat, though.

            – ewwhite
            Apr 17 '12 at 20:17

































          Any drive should "work" but you will need to carefully weigh the pros and cons of using unsupported components in a production system. Companies like Dell and HP can get away with demanding 300-400% profit margins on server drives because they have you over a barrel if you need warranty/contract support and they find unsupported hardware in your array. Are you prepared to be the final point of escalation when something goes wrong?



          If you are already using ZFS, take a long look at the possibility of deploying SSDs as L2ARC and ZIL instead of as a separate zpool. Properly configured, this type of caching can deliver SSD-like performance on a spindle-based array, at a fraction of the cost of exclusively solid state storage.



          Properly configured, a ZFS SAN built on an array of 2TB 7200rpm SAS drives with even the old Intel X25E drives for ZIL and X25M drives for L2ARC will run circles around name-brand proprietary SAN appliances.



          Be sure that your ZIL device is SLC flash. It doesn't have to be big; a 20GB SLC drive like the Intel 313 series, which happens to be designed for use as cache, would work great. L2ARC can be MLC.



          Any time you use MLC flash in an enterprise application, consider selecting a drive that will allow you to track wear percentage via SMART, such as the Intel 320 series. Note that these drives also have a 5-year warranty if you buy the retail box version, so think twice about buying the OEM version just to save five bucks. The warranty is void if you exceed the design write endurance, which is part of why we normally use these for L2ARC but not ZIL.
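          On the ZFS side, attaching the devices is a one-liner each - a minimal sketch, with hypothetical pool and device names:

              # SLC SSD (whole disk or a slice) as the separate ZIL log device
              zpool add tank log c0t5d0

              # MLC SSD as L2ARC read cache; cache devices hold no pool data,
              # so losing one is harmless
              zpool add tank cache c0t6d0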






          answered Apr 17 '12 at 17:11, edited Nov 16 '12 at 8:36 – Skyhawk


























          • How is the I/O latency for synchronous writes to an SSD (as ZIL) in comparison with the RAM of a BBU hardware controller?

            – 3molo
            Apr 17 '12 at 18:16











          • Thanks for the specifics on MLC/SLC - ignore my question above asking you the same thing :) Do the newer Intel MLC drives track wear level in their firmware, or does this require specific OS support? Need to read up on how well Solaris 11 plays with them. I also have two zpools, one 7200rpm SATA and one 10k 2.5" SAS, so I'll need to figure out which would benefit from caching most first.

            – growse
            Apr 17 '12 at 18:27
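          The wear counter is kept in the drive's own SMART data, so no special OS support is needed - smartmontools can read it from Solaris as well. A quick check, with a hypothetical device path:

              # dump the full SMART attribute table; on Intel SSDs look for
              # attribute 233 (Media_Wearout_Indicator), which counts down from 100
              # (drives behind some controllers may need '-d sat')
              smartctl -a /dev/rdsk/c0t6d0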













          • Of course the SATA zpool will benefit more from caching, but you also have the option of dividing your L2ARC and ZIL devices between the two arrays. If you buy a 20GB SLC SSD for your ZIL, you can format it into two slices and assign them as 10GB ZIL devices, one for each zpool. Remember that RAID5 and RAIDZ1 are not a particularly good idea with large SATA drives; for vdevs made up of SATA drives 500GB and larger, I would suggest using mirrors or RAIDZ2.

            – Skyhawk
            Apr 17 '12 at 19:18
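          A rough sketch of that split, assuming the SLC drive has already been partitioned into two slices with format(1M) (pool and slice names hypothetical):

              # one ~10GB slice as the separate log device for each pool
              zpool add satapool log c4t1d0s0
              zpool add saspool log c4t1d0s1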











          • @3molo Highly subjective. Which SSD? Which RAID controller? How much RAM in the ZFS host? The short answer is that ZIL on a separate physical device solves the synchronous write problem almost entirely, and that the I/O latency for synchronous writes ought to be very small, on the same order of magnitude as the write latency for sequential writes to the ZIL device itself.

            – Skyhawk
            Apr 17 '12 at 19:49

































          First, the enclosure firmware may (and surely will) notice non-HP-branded disks, but in practice that won't impact you much. I doubt HP hardware will reject your drives (I've never seen that from HP before), so I'd give it a try.

          But when it comes to any updates (mainly new enclosure firmware), HP will fix issues with their own branded hardware, not with any no-name drives.

          Despite the price, HP-labelled hardware is much more robust (I have seen several non-enterprise SSDs die after being put under enterprise load - decide whether you want to carry the extra risk, or at the very least ALWAYS keep backups), so it may be worth over-paying.

          You may also want to consider FusionIO cards, as SATA bandwidth may limit you (not only on the disk-to-controller path, but also on the controller-to-bus-to-CPU path), while PCIe cards can be faster.






























          • I'll take a look at FusionIO, thanks. My original idea was to use SSDs as a not-much-more-expensive-but-faster version of 10k 2.5" SAS drives. With HP pricing, I think that spindles come in at a much better price/performance point for my needs.

            – growse
            Apr 17 '12 at 15:12











          • I've seen a company that lost all of the previous week's new files thanks to infrequent backups and cheap SSDs. You won't go their way, I believe :)

            – Alexander
            Apr 17 '12 at 15:19











          • By the way, you won't need a separate zpool for performance.

            – Alexander
            Apr 17 '12 at 15:21











          • You can simply add an inexpensive SSD to your ZFS pool as cache - you'll see a nice performance boost without risking your data.

            – Alexander
            Apr 17 '12 at 15:38











          • I'm going to get some spindles and one of the cheap SSDs and see if they (a) work and (b) are viable as ZFS cache devices.

            – growse
            Apr 17 '12 at 15:43

































          If it's not on the list of supported drives (configuration information, step 4), don't install it. It may or may not work, but it would be a fairly expensive experiment if it failed in a way that broke something.

          They list five SSDs for this box, two SLC and three MLC. SLC drives last longer, but tend to be more expensive.






























          • I take your point, but I'd have a hard time believing that I can break a SATA/SAS host using a regular off-the-shelf SATA disk. That would indicate a broken host to me :(

            – growse
            Apr 17 '12 at 15:11











          • I think @Basil means to say that, if you buy thousands of dollars in SSDs and they subsequently turn out to be unreliable or they don't play well with the RAID controller, you're back to square one with a hit to your reputation and no way to un-spend the money. It is critically important to involve business decision makers in choices that involve saving money at the possible expense of operational reliability. If your boss is a cheapskate and he tells you not to buy what you need to make a system reliable, that's one thing. If you voluntarily design around cheap stuff that fails, you're fired.

            – Skyhawk
            Apr 17 '12 at 18:13













          • Agreed. It's about managing the risk/performance/budget triumvirate. I came into this question thinking that the cost/performance for SSDs was a lot better than it actually appears to be (cheap SSDs are worse than I thought, good SSDs are more expensive than I thought). Management wouldn't agree that the performance benefit of using lots of expensive SSDs as a zpool is worth the cost. However, adding caching is an easier sell.

            – growse
            Apr 17 '12 at 18:21











          • And that's why we test. There are certain solutions that work well. Others that simply don't. A pool of cheap SSDs is okay. Cheap SSDs in L2ARC or ZIL are bad. I tend to use PCIe ZIL and MLC SAS SSD for L2ARC. This is after breaking lots of lower-cost SATA units...

            – ewwhite
            Apr 17 '12 at 22:28











          • If your box is under support (which you paid for), then there are no situations where it's worth installing anything that's not supported.

            – Basil
            Apr 18 '12 at 13:59

































          I sell these - many of the P812 and D2700 controllers and shelves. I have put all brands of SSDs in them, HGST-branded SAS and Samsung SATA, and they all work fine. SAS is SAS and SATA is SATA. It's the label you pay for... and the qualification done before an HP label is placed on the drive. HP/DEC/Compaq tried scare tactics a long time ago with non-HP/DEC/Compaq SCSI drives, when much of the time the drives were only relabelled Fujitsus etc., not even carrying different firmware. HP makes nothing any longer: Intel and LSI make their controller products, and Broadcom makes almost all the HBAs etc. You will be fine. In fact, in HP Integrity servers these Samsung Pro SATA drives are fast, fast, fast.


























            Your Answer








            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "2"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fserverfault.com%2fquestions%2f380187%2fhp-d2700-enclosure-and-ssds-will-any-ssd-work%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            5 Answers
            5






            active

            oldest

            votes








            5 Answers
            5






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            9














            Well, I use a D2700 for ZFS storage and worked a bit to get LEDs and sesctl features to work on it. I also have SAS MPxIO multipath running well.



            I've done quite a bit of SSD testing on ZFS and with this enclosure.



            Here's the lowdown.




            • The D2700 is a perfectly-fine JBOD for ZFS.

            • You will want to have an HP Smart Array controller handy to update the enclosure firmware to the latest revision.

            • LSI controllers are recommended here. I use a pair of LSI 9205-8e for this.

            • I have a pile of HP drive caddies and have tested Intel, OCZ, OWC (sandforce), HP (Sandisk/Pliant), Pliant, STEC and Seagate SAS and SATA SSDs for ZFS use.

            • I would reserve the D2700 for dual-ported 6G disks, assuming you will use multipathing. If not, you're possibly taking a bandwidth hit due to the oversubscription of the SAS link to the host.

            • I tend to leave the SSDs meant for ZIL and L2arc inside of the storage head. Coupled with an LSI 9211-8i, it seems safer.

            • The Intel and Sandforce-based SATA SSDs were fine in the chassis. No temperature probe issues or anything.

            • The HP SAS SSDs (Sandisk/Pliant) require a deep queue that ZFS really can't take advantage of. They are not good pool or cache disks.

            • STEC is great with LSI controllers and ZFS... except for price... They are also incompatible with Smart Array P410 controllers. Weird. I have an open ticket with STEC for that.


            Which controllers are you using? I probably have detailed data for the combination you have.






            share|improve this answer


























            • Controller I believe is a SmartArray P212 (will double-check) which is also potentially on the cards for an upgrade as well. I'm not using multipathing (at the moment), and I'm concious of the bus limits of the single SAS cable between the D2700 and the X1600. Would multipathing require another, separate controller, or could I up the bandwidth by upgrading to a single P812 (for example) - appreciate there's a redundancy argument here as well, but leave that aside for a moment....

              – growse
              Apr 17 '12 at 18:43











            • So you should redesign. The SA P212 is not a good ZFS controller. You'd be better of with an LSI SAS HBA for compatibility and performance reasons. You don't need multipath, but if you have a D2700 unit, it probably has two internal controllers. If so, multipath isn't difficult to achieve. For ZFS, basic SAS controllers are preferred. You will have problems with low-end SSDs and the HP controllers.

              – ewwhite
              Apr 17 '12 at 19:28











            • Interesting - any specific suggestions? Going purely on internal / external connectors (An X1600 has 12 internal SATA bays) it looks like there's a few that might do the trick. The D2700 I assumed does have dual controllers as there's two ports on the back. Be good to chat with you at some point about your experiences with this kit, multipath and Solaris.

              – growse
              Apr 17 '12 at 19:34













            • Yes, lots of suggestions. They may be better suited to Server Fault chat, though.

              – ewwhite
              Apr 17 '12 at 20:17
















            9














            Well, I use a D2700 for ZFS storage and worked a bit to get LEDs and sesctl features to work on it. I also have SAS MPxIO multipath running well.



            I've done quite a bit of SSD testing on ZFS and with this enclosure.



            Here's the lowdown.




            • The D2700 is a perfectly-fine JBOD for ZFS.

            • You will want to have an HP Smart Array controller handy to update the enclosure firmware to the latest revision.

            • LSI controllers are recommended here. I use a pair of LSI 9205-8e for this.

            • I have a pile of HP drive caddies and have tested Intel, OCZ, OWC (sandforce), HP (Sandisk/Pliant), Pliant, STEC and Seagate SAS and SATA SSDs for ZFS use.

            • I would reserve the D2700 for dual-ported 6G disks, assuming you will use multipathing. If not, you're possibly taking a bandwidth hit due to the oversubscription of the SAS link to the host.

            • I tend to leave the SSDs meant for ZIL and L2arc inside of the storage head. Coupled with an LSI 9211-8i, it seems safer.

            • The Intel and Sandforce-based SATA SSDs were fine in the chassis. No temperature probe issues or anything.

            • The HP SAS SSDs (Sandisk/Pliant) require a deep queue that ZFS really can't take advantage of. They are not good pool or cache disks.

            • STEC is great with LSI controllers and ZFS... except for price... They are also incompatible with Smart Array P410 controllers. Weird. I have an open ticket with STEC for that.


            Which controllers are you using? I probably have detailed data for the combination you have.






            share|improve this answer


























            • Controller I believe is a SmartArray P212 (will double-check) which is also potentially on the cards for an upgrade as well. I'm not using multipathing (at the moment), and I'm concious of the bus limits of the single SAS cable between the D2700 and the X1600. Would multipathing require another, separate controller, or could I up the bandwidth by upgrading to a single P812 (for example) - appreciate there's a redundancy argument here as well, but leave that aside for a moment....

              – growse
              Apr 17 '12 at 18:43











            • So you should redesign. The SA P212 is not a good ZFS controller. You'd be better of with an LSI SAS HBA for compatibility and performance reasons. You don't need multipath, but if you have a D2700 unit, it probably has two internal controllers. If so, multipath isn't difficult to achieve. For ZFS, basic SAS controllers are preferred. You will have problems with low-end SSDs and the HP controllers.

              – ewwhite
              Apr 17 '12 at 19:28











            • Interesting - any specific suggestions? Going purely on internal / external connectors (An X1600 has 12 internal SATA bays) it looks like there's a few that might do the trick. The D2700 I assumed does have dual controllers as there's two ports on the back. Be good to chat with you at some point about your experiences with this kit, multipath and Solaris.

              – growse
              Apr 17 '12 at 19:34













            • Yes, lots of suggestions. They may be better suited to Server Fault chat, though.

              – ewwhite
              Apr 17 '12 at 20:17














            9












            9








            9







            Well, I use a D2700 for ZFS storage and worked a bit to get LEDs and sesctl features to work on it. I also have SAS MPxIO multipath running well.



            I've done quite a bit of SSD testing on ZFS and with this enclosure.



            Here's the lowdown.




            • The D2700 is a perfectly-fine JBOD for ZFS.

            • You will want to have an HP Smart Array controller handy to update the enclosure firmware to the latest revision.

            • LSI controllers are recommended here. I use a pair of LSI 9205-8e for this.

            • I have a pile of HP drive caddies and have tested Intel, OCZ, OWC (sandforce), HP (Sandisk/Pliant), Pliant, STEC and Seagate SAS and SATA SSDs for ZFS use.

            • I would reserve the D2700 for dual-ported 6G disks, assuming you will use multipathing. If not, you're possibly taking a bandwidth hit due to the oversubscription of the SAS link to the host.

            • I tend to leave the SSDs meant for ZIL and L2arc inside of the storage head. Coupled with an LSI 9211-8i, it seems safer.

            • The Intel and Sandforce-based SATA SSDs were fine in the chassis. No temperature probe issues or anything.

            • The HP SAS SSDs (Sandisk/Pliant) require a deep queue that ZFS really can't take advantage of. They are not good pool or cache disks.

            • STEC is great with LSI controllers and ZFS... except for price... They are also incompatible with Smart Array P410 controllers. Weird. I have an open ticket with STEC for that.


            Which controllers are you using? I probably have detailed data for the combination you have.






            share|improve this answer















            Well, I use a D2700 for ZFS storage and worked a bit to get LEDs and sesctl features to work on it. I also have SAS MPxIO multipath running well.



            I've done quite a bit of SSD testing on ZFS and with this enclosure.



            Here's the lowdown.




            • The D2700 is a perfectly-fine JBOD for ZFS.

            • You will want to have an HP Smart Array controller handy to update the enclosure firmware to the latest revision.

            • LSI controllers are recommended here. I use a pair of LSI 9205-8e for this.

            • I have a pile of HP drive caddies and have tested Intel, OCZ, OWC (sandforce), HP (Sandisk/Pliant), Pliant, STEC and Seagate SAS and SATA SSDs for ZFS use.

            • I would reserve the D2700 for dual-ported 6G disks, assuming you will use multipathing. If not, you're possibly taking a bandwidth hit due to the oversubscription of the SAS link to the host.

            • I tend to leave the SSDs meant for ZIL and L2arc inside of the storage head. Coupled with an LSI 9211-8i, it seems safer.

            • The Intel and Sandforce-based SATA SSDs were fine in the chassis. No temperature probe issues or anything.

            • The HP SAS SSDs (Sandisk/Pliant) require a deep queue that ZFS really can't take advantage of. They are not good pool or cache disks.

            • STEC is great with LSI controllers and ZFS... except for price... They are also incompatible with Smart Array P410 controllers. Weird. I have an open ticket with STEC for that.


            Which controllers are you using? I probably have detailed data for the combination you have.







            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited Jul 7 '13 at 15:07

























            answered Apr 17 '12 at 18:21









            ewwhiteewwhite

            173k75368719




            173k75368719













            • Controller I believe is a SmartArray P212 (will double-check) which is also potentially on the cards for an upgrade as well. I'm not using multipathing (at the moment), and I'm concious of the bus limits of the single SAS cable between the D2700 and the X1600. Would multipathing require another, separate controller, or could I up the bandwidth by upgrading to a single P812 (for example) - appreciate there's a redundancy argument here as well, but leave that aside for a moment....

              – growse
              Apr 17 '12 at 18:43











            • So you should redesign. The SA P212 is not a good ZFS controller. You'd be better of with an LSI SAS HBA for compatibility and performance reasons. You don't need multipath, but if you have a D2700 unit, it probably has two internal controllers. If so, multipath isn't difficult to achieve. For ZFS, basic SAS controllers are preferred. You will have problems with low-end SSDs and the HP controllers.

              – ewwhite
              Apr 17 '12 at 19:28











            • Interesting - any specific suggestions? Going purely on internal / external connectors (An X1600 has 12 internal SATA bays) it looks like there's a few that might do the trick. The D2700 I assumed does have dual controllers as there's two ports on the back. Be good to chat with you at some point about your experiences with this kit, multipath and Solaris.

              – growse
              Apr 17 '12 at 19:34













            • Yes, lots of suggestions. They may be better suited to Server Fault chat, though.

              – ewwhite
              Apr 17 '12 at 20:17



















            • Controller I believe is a SmartArray P212 (will double-check) which is also potentially on the cards for an upgrade as well. I'm not using multipathing (at the moment), and I'm concious of the bus limits of the single SAS cable between the D2700 and the X1600. Would multipathing require another, separate controller, or could I up the bandwidth by upgrading to a single P812 (for example) - appreciate there's a redundancy argument here as well, but leave that aside for a moment....

              – growse
              Apr 17 '12 at 18:43











            • So you should redesign. The SA P212 is not a good ZFS controller. You'd be better of with an LSI SAS HBA for compatibility and performance reasons. You don't need multipath, but if you have a D2700 unit, it probably has two internal controllers. If so, multipath isn't difficult to achieve. For ZFS, basic SAS controllers are preferred. You will have problems with low-end SSDs and the HP controllers.

              – ewwhite
              Apr 17 '12 at 19:28











            • Interesting - any specific suggestions? Going purely on internal / external connectors (An X1600 has 12 internal SATA bays) it looks like there's a few that might do the trick. The D2700 I assumed does have dual controllers as there's two ports on the back. Be good to chat with you at some point about your experiences with this kit, multipath and Solaris.

              – growse
              Apr 17 '12 at 19:34













            • Yes, lots of suggestions. They may be better suited to Server Fault chat, though.

              – ewwhite
              Apr 17 '12 at 20:17

















            Controller I believe is a SmartArray P212 (will double-check) which is also potentially on the cards for an upgrade as well. I'm not using multipathing (at the moment), and I'm concious of the bus limits of the single SAS cable between the D2700 and the X1600. Would multipathing require another, separate controller, or could I up the bandwidth by upgrading to a single P812 (for example) - appreciate there's a redundancy argument here as well, but leave that aside for a moment....

            – growse
            Apr 17 '12 at 18:43





            Controller I believe is a SmartArray P212 (will double-check) which is also potentially on the cards for an upgrade as well. I'm not using multipathing (at the moment), and I'm concious of the bus limits of the single SAS cable between the D2700 and the X1600. Would multipathing require another, separate controller, or could I up the bandwidth by upgrading to a single P812 (for example) - appreciate there's a redundancy argument here as well, but leave that aside for a moment....

            – growse
            Apr 17 '12 at 18:43













            So you should redesign. The SA P212 is not a good ZFS controller. You'd be better of with an LSI SAS HBA for compatibility and performance reasons. You don't need multipath, but if you have a D2700 unit, it probably has two internal controllers. If so, multipath isn't difficult to achieve. For ZFS, basic SAS controllers are preferred. You will have problems with low-end SSDs and the HP controllers.

            – ewwhite
            Apr 17 '12 at 19:28





            So you should redesign. The SA P212 is not a good ZFS controller. You'd be better of with an LSI SAS HBA for compatibility and performance reasons. You don't need multipath, but if you have a D2700 unit, it probably has two internal controllers. If so, multipath isn't difficult to achieve. For ZFS, basic SAS controllers are preferred. You will have problems with low-end SSDs and the HP controllers.

            – ewwhite
            Apr 17 '12 at 19:28













            Interesting - any specific suggestions? Going purely on internal / external connectors (An X1600 has 12 internal SATA bays) it looks like there's a few that might do the trick. The D2700 I assumed does have dual controllers as there's two ports on the back. Be good to chat with you at some point about your experiences with this kit, multipath and Solaris.

            – growse
            Apr 17 '12 at 19:34







            Interesting - any specific suggestions? Going purely on internal / external connectors (An X1600 has 12 internal SATA bays) it looks like there's a few that might do the trick. The D2700 I assumed does have dual controllers as there's two ports on the back. Be good to chat with you at some point about your experiences with this kit, multipath and Solaris.

            – growse
            Apr 17 '12 at 19:34















            Yes, lots of suggestions. They may be better suited to Server Fault chat, though.

            – ewwhite
            Apr 17 '12 at 20:17





            Yes, lots of suggestions. They may be better suited to Server Fault chat, though.

            – ewwhite
            Apr 17 '12 at 20:17













            5














            Any drive should "work" but you will need to carefully weigh the pros and cons of using unsupported components in a production system. Companies like Dell and HP can get away with demanding 300-400% profit margins on server drives because they have you over a barrel if you need warranty/contract support and they find unsupported hardware in your array. Are you prepared to be the final point of escalation when something goes wrong?



            If you are already using ZFS, take a long look at the possibility of deploying SSDs as L2ARC and ZIL instead of as a separate zpool. Properly configured, this type of caching can deliver SSD-like performance on a spindle-based array, at a fraction of the cost of exclusively solid state storage.



            Properly configured, a ZFS SAN built on an array of 2TB 7200rpm SAS drives with even the old Intel X25E drives for ZIL and X25M drives for L2ARC will run circles around name-brand proprietary SAN appliances.



            Be sure that your ZIL device is SLC flash. It doesn't have to be big; a 20GB SLC drive like the Intel 313 series, which happens to be designed for use as cache, would work great. L2ARC can be MLC.



            Any time you use MLC flash in an enterprise application, consider selecting a drive that will allow you to track wear percentage via SMART, such as the Intel 320 series. Note that these drives also have a 5-year warranty if you buy the retail box version, so think twice about buying the OEM version just to save five bucks. The warranty is void if you exceed the design write endurance, which is part of why we normally use these for L2ARC but not ZIL.






            share|improve this answer


























            • How is the i/o latency for synchronized writes for a SSD (as ZIL) in comparison with the ram of a BBU hw controller?

              – 3molo
              Apr 17 '12 at 18:16











            • Thanks for the specifics on MLC/SLC - ignore my question above asking you the same thing :) Do the newer Intel MLC drives track wear level in their firmware, or does this require specific OS support? Need to read up on how well Solaris 11 plays with them. I also have two zpools, one 7200rpm SATA and one 10k 2.5" SAS, so I'll need to figure out which would benefit from caching most first.

              – growse
              Apr 17 '12 at 18:27













            • Of course the SATA zpool will benefit more from caching, but you also have the option of dividing your L2ARC and ZIL devices between the two arrays. If you buy a 20GB SLC SSD for your ZIL, then you format into two slices and assign them as 10GB ZIL devices for each zpool. Remember that RAID5 and RAIDZ1 are not a particularly good idea with large SATA drives; for vdevs made up of SATA drives 500GB and larger, I would suggest using mirrors or RAIDZ2.

              – Skyhawk
              Apr 17 '12 at 19:18











            • @3molo Highly subjective. Which SSD? Which RAID controller? How much RAM in the ZFS host? The short answer is that ZIL on a separate physical device solves the synchronous write problem almost entirely, and that the I/O latency for synchronous writes ought to be very small, on the same order of magnitude as the write latency for sequential writes to the ZIL device itself.

              – Skyhawk
              Apr 17 '12 at 19:49
















            5














            Any drive should "work" but you will need to carefully weigh the pros and cons of using unsupported components in a production system. Companies like Dell and HP can get away with demanding 300-400% profit margins on server drives because they have you over a barrel if you need warranty/contract support and they find unsupported hardware in your array. Are you prepared to be the final point of escalation when something goes wrong?



            If you are already using ZFS, take a long look at the possibility of deploying SSDs as L2ARC and ZIL instead of as a separate zpool. Properly configured, this type of caching can deliver SSD-like performance on a spindle-based array, at a fraction of the cost of exclusively solid state storage.



            Properly configured, a ZFS SAN built on an array of 2TB 7200rpm SAS drives with even the old Intel X25E drives for ZIL and X25M drives for L2ARC will run circles around name-brand proprietary SAN appliances.



            Be sure that your ZIL device is SLC flash. It doesn't have to be big; a 20GB SLC drive like the Intel 313 series, which happens to be designed for use as cache, would work great. L2ARC can be MLC.



            Any time you use MLC flash in an enterprise application, consider selecting a drive that will allow you to track wear percentage via SMART, such as the Intel 320 series. Note that these drives also have a 5-year warranty if you buy the retail box version, so think twice about buying the OEM version just to save five bucks. The warranty is void if you exceed the design write endurance, which is part of why we normally use these for L2ARC but not ZIL.






            share|improve this answer


























            • How is the i/o latency for synchronized writes for a SSD (as ZIL) in comparison with the ram of a BBU hw controller?

              – 3molo
              Apr 17 '12 at 18:16











            • Thanks for the specifics on MLC/SLC - ignore my question above asking you the same thing :) Do the newer Intel MLC drives track wear level in their firmware, or does this require specific OS support? Need to read up on how well Solaris 11 plays with them. I also have two zpools, one 7200rpm SATA and one 10k 2.5" SAS, so I'll need to figure out which would benefit from caching most first.

              – growse
              Apr 17 '12 at 18:27













            • Of course the SATA zpool will benefit more from caching, but you also have the option of dividing your L2ARC and ZIL devices between the two arrays. If you buy a 20GB SLC SSD for your ZIL, then you format into two slices and assign them as 10GB ZIL devices for each zpool. Remember that RAID5 and RAIDZ1 are not a particularly good idea with large SATA drives; for vdevs made up of SATA drives 500GB and larger, I would suggest using mirrors or RAIDZ2.

              – Skyhawk
              Apr 17 '12 at 19:18











            • @3molo Highly subjective. Which SSD? Which RAID controller? How much RAM in the ZFS host? The short answer is that ZIL on a separate physical device solves the synchronous write problem almost entirely, and that the I/O latency for synchronous writes ought to be very small, on the same order of magnitude as the write latency for sequential writes to the ZIL device itself.

              – Skyhawk
              Apr 17 '12 at 19:49














            5












            5








            5







            Any drive should "work" but you will need to carefully weigh the pros and cons of using unsupported components in a production system. Companies like Dell and HP can get away with demanding 300-400% profit margins on server drives because they have you over a barrel if you need warranty/contract support and they find unsupported hardware in your array. Are you prepared to be the final point of escalation when something goes wrong?



            If you are already using ZFS, take a long look at the possibility of deploying SSDs as L2ARC and ZIL instead of as a separate zpool. Properly configured, this type of caching can deliver SSD-like performance on a spindle-based array, at a fraction of the cost of exclusively solid state storage.



            Properly configured, a ZFS SAN built on an array of 2TB 7200rpm SAS drives with even the old Intel X25E drives for ZIL and X25M drives for L2ARC will run circles around name-brand proprietary SAN appliances.



            Be sure that your ZIL device is SLC flash. It doesn't have to be big; a 20GB SLC drive like the Intel 313 series, which happens to be designed for use as cache, would work great. L2ARC can be MLC.



            Any time you use MLC flash in an enterprise application, consider selecting a drive that will allow you to track wear percentage via SMART, such as the Intel 320 series. Note that these drives also have a 5-year warranty if you buy the retail box version, so think twice about buying the OEM version just to save five bucks. The warranty is void if you exceed the design write endurance, which is part of why we normally use these for L2ARC but not ZIL.






            share|improve this answer















            Any drive should "work" but you will need to carefully weigh the pros and cons of using unsupported components in a production system. Companies like Dell and HP can get away with demanding 300-400% profit margins on server drives because they have you over a barrel if you need warranty/contract support and they find unsupported hardware in your array. Are you prepared to be the final point of escalation when something goes wrong?



            If you are already using ZFS, take a long look at the possibility of deploying SSDs as L2ARC and ZIL instead of as a separate zpool. Properly configured, this type of caching can deliver SSD-like performance on a spindle-based array, at a fraction of the cost of exclusively solid state storage.



            Properly configured, a ZFS SAN built on an array of 2TB 7200rpm SAS drives with even the old Intel X25E drives for ZIL and X25M drives for L2ARC will run circles around name-brand proprietary SAN appliances.



            Be sure that your ZIL device is SLC flash. It doesn't have to be big; a 20GB SLC drive like the Intel 313 series, which happens to be designed for use as cache, would work great. L2ARC can be MLC.



            Any time you use MLC flash in an enterprise application, consider selecting a drive that will allow you to track wear percentage via SMART, such as the Intel 320 series. Note that these drives also have a 5-year warranty if you buy the retail box version, so think twice about buying the OEM version just to save five bucks. The warranty is void if you exceed the design write endurance, which is part of why we normally use these for L2ARC but not ZIL.







            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited Nov 16 '12 at 8:36

























            answered Apr 17 '12 at 17:11









            SkyhawkSkyhawk

            13.5k34591




            13.5k34591













            • How is the i/o latency for synchronized writes for a SSD (as ZIL) in comparison with the ram of a BBU hw controller?

              – 3molo
              Apr 17 '12 at 18:16











            • Thanks for the specifics on MLC/SLC - ignore my question above asking you the same thing :) Do the newer Intel MLC drives track wear level in their firmware, or does this require specific OS support? Need to read up on how well Solaris 11 plays with them. I also have two zpools, one 7200rpm SATA and one 10k 2.5" SAS, so I'll need to figure out which would benefit from caching most first.

              – growse
              Apr 17 '12 at 18:27













            • Of course the SATA zpool will benefit more from caching, but you also have the option of dividing your L2ARC and ZIL devices between the two arrays. If you buy a 20GB SLC SSD for your ZIL, then you format into two slices and assign them as 10GB ZIL devices for each zpool. Remember that RAID5 and RAIDZ1 are not a particularly good idea with large SATA drives; for vdevs made up of SATA drives 500GB and larger, I would suggest using mirrors or RAIDZ2.

              – Skyhawk
              Apr 17 '12 at 19:18











            • @3molo Highly subjective. Which SSD? Which RAID controller? How much RAM in the ZFS host? The short answer is that ZIL on a separate physical device solves the synchronous write problem almost entirely, and that the I/O latency for synchronous writes ought to be very small, on the same order of magnitude as the write latency for sequential writes to the ZIL device itself.

              – Skyhawk
              Apr 17 '12 at 19:49



















            • How is the i/o latency for synchronized writes for a SSD (as ZIL) in comparison with the ram of a BBU hw controller?

              – 3molo
              Apr 17 '12 at 18:16











            • Thanks for the specifics on MLC/SLC - ignore my question above asking you the same thing :) Do the newer Intel MLC drives track wear level in their firmware, or does this require specific OS support? Need to read up on how well Solaris 11 plays with them. I also have two zpools, one 7200rpm SATA and one 10k 2.5" SAS, so I'll need to figure out which would benefit from caching most first.

              – growse
              Apr 17 '12 at 18:27













            • Of course the SATA zpool will benefit more from caching, but you also have the option of dividing your L2ARC and ZIL devices between the two arrays. If you buy a 20GB SLC SSD for your ZIL, then you format into two slices and assign them as 10GB ZIL devices for each zpool. Remember that RAID5 and RAIDZ1 are not a particularly good idea with large SATA drives; for vdevs made up of SATA drives 500GB and larger, I would suggest using mirrors or RAIDZ2.

              – Skyhawk
              Apr 17 '12 at 19:18











            • @3molo Highly subjective. Which SSD? Which RAID controller? How much RAM in the ZFS host? The short answer is that ZIL on a separate physical device solves the synchronous write problem almost entirely, and that the I/O latency for synchronous writes ought to be very small, on the same order of magnitude as the write latency for sequential writes to the ZIL device itself.

              – Skyhawk
              Apr 17 '12 at 19:49

















            How is the i/o latency for synchronized writes for a SSD (as ZIL) in comparison with the ram of a BBU hw controller?

            – 3molo
            Apr 17 '12 at 18:16





            How is the i/o latency for synchronized writes for a SSD (as ZIL) in comparison with the ram of a BBU hw controller?

            – 3molo
            Apr 17 '12 at 18:16













            Thanks for the specifics on MLC/SLC - ignore my question above asking you the same thing :) Do the newer Intel MLC drives track wear level in their firmware, or does this require specific OS support? Need to read up on how well Solaris 11 plays with them. I also have two zpools, one 7200rpm SATA and one 10k 2.5" SAS, so I'll need to figure out which would benefit from caching most first.

            – growse
            Apr 17 '12 at 18:27







            Thanks for the specifics on MLC/SLC - ignore my question above asking you the same thing :) Do the newer Intel MLC drives track wear level in their firmware, or does this require specific OS support? Need to read up on how well Solaris 11 plays with them. I also have two zpools, one 7200rpm SATA and one 10k 2.5" SAS, so I'll need to figure out which would benefit from caching most first.

            – growse
            Apr 17 '12 at 18:27















            Of course the SATA zpool will benefit more from caching, but you also have the option of dividing your L2ARC and ZIL devices between the two arrays. If you buy a 20GB SLC SSD for your ZIL, then you format into two slices and assign them as 10GB ZIL devices for each zpool. Remember that RAID5 and RAIDZ1 are not a particularly good idea with large SATA drives; for vdevs made up of SATA drives 500GB and larger, I would suggest using mirrors or RAIDZ2.

            – Skyhawk
            Apr 17 '12 at 19:18





            Of course the SATA zpool will benefit more from caching, but you also have the option of dividing your L2ARC and ZIL devices between the two arrays. If you buy a 20GB SLC SSD for your ZIL, then you format into two slices and assign them as 10GB ZIL devices for each zpool. Remember that RAID5 and RAIDZ1 are not a particularly good idea with large SATA drives; for vdevs made up of SATA drives 500GB and larger, I would suggest using mirrors or RAIDZ2.

            – Skyhawk
            Apr 17 '12 at 19:18













            @3molo Highly subjective. Which SSD? Which RAID controller? How much RAM in the ZFS host? The short answer is that ZIL on a separate physical device solves the synchronous write problem almost entirely, and that the I/O latency for synchronous writes ought to be very small, on the same order of magnitude as the write latency for sequential writes to the ZIL device itself.

            – Skyhawk
            Apr 17 '12 at 19:49





            First, the enclosure firmware may well notice non-HP-branded disks, but in practice that won't affect you much. I doubt HP hardware will outright reject your drives (I've never seen HP do that), so I'd give it a try.



            But when it comes to updates (mainly new enclosure firmware), HP will fix issues with their own branded hardware, not with third-party drives.



            Despite the price, HP-labelled hardware is considerably more robust (I've seen several non-enterprise SSDs die after being put under enterprise load; decide whether you want to pay for the extra risk, or at least ALWAYS keep backups), so it may be worth paying the premium.



            You may also want to consider FusionIO cards: SATA bandwidth may constrain you (not only on the disk-to-controller path, but also on the controller-to-bus-to-CPU path), while PCIe cards can be faster.

              – Alexander
              answered Apr 17 '12 at 14:43






            I'll take a look at FusionIO, thanks. My original idea was to use SSDs as a not-much-more-expensive-but-faster version of 10k 2.5" SAS drives. With HP pricing, I think that spindles come in at a much better price/performance point for my needs.

              – growse
              Apr 17 '12 at 15:12











            I've seen a company lose all of its most recent week's files to infrequent backups and cheap SSDs. You won't go down that road, I believe :)

              – Alexander
              Apr 17 '12 at 15:19











            By the way, you won't need a whole SSD zpool to get the performance.

              – Alexander
              Apr 17 '12 at 15:21











            You can simply add an inexpensive SSD to your ZFS pool as a cache device - you'll see a nice performance boost without putting your data at risk (a minimal sketch follows below).

              – Alexander
              Apr 17 '12 at 15:38
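
            A minimal sketch of that suggestion, with a hypothetical pool name and device path. An L2ARC cache device only ever holds copies of data already on the pool, so losing it costs nothing, and the experiment is trivially reversible:

                # Attach an inexpensive SSD as an L2ARC read cache.
                zpool add sata-pool cache c5t1d0

                # Back the change out at any time; no data lives only on the cache.
                zpool remove sata-pool c5t1d0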











            I'm going to get some spindles and one of the cheap SSDs and see if they (a) work and (b) are viable as ZFS cache devices.

              – growse
              Apr 17 '12 at 15:43

            If it's not on the list of supported drives (configuration information, step 4), don't install it. It may or may not work, but it would be a fairly expensive experiment if it failed in a way that broke something.



            They have five SSDs listed for this box, two SLC and three MLC. SLC lasts longer, but tends to be more expensive.

              – Basil
              answered Apr 17 '12 at 14:48

            I take your point, but I'd have a hard time believing that I can break a SATA/SAS host using a regular off-the-shelf SATA disk. That would indicate a broken host to me :(

              – growse
              Apr 17 '12 at 15:11

            I think @Basil means to say that, if you buy thousands of dollars in SSDs and they subsequently turn out to be unreliable or they don't play well with the RAID controller, you're back to square one with a hit to your reputation and no way to un-spend the money. It is critically important to involve business decision makers in choices that involve saving money at the possible expense of operational reliability. If your boss is a cheapskate and he tells you not to buy what you need to make a system reliable, that's one thing. If you voluntarily design around cheap stuff that fails, you're fired.

              – Skyhawk
              Apr 17 '12 at 18:13

            Agreed. It's about managing the risk/performance/budget triumvirate. I came into this question thinking that the cost/performance for SSDs was a lot better than it actually appears to be (cheap SSDs are worse than I thought, good SSDs are more expensive than I thought). Management wouldn't agree that the performance benefit of using lots of expensive SSDs as a zpool is worth the cost. However, adding caching is an easier sell.

              – growse
              Apr 17 '12 at 18:21

            And that's why we test. There are certain solutions that work well. Others that simply don't. A pool of cheap SSDs is okay. Cheap SSDs in L2ARC or ZIL are bad. I tend to use PCIe ZIL and MLC SAS SSD for L2ARC (see the sketch after this thread). This is after breaking lots of lower-cost SATA units...

              – ewwhite
              Apr 17 '12 at 22:28

            If your box is under support (which you paid for), then there are no situations where it's worth installing anything that's not supported.

              – Basil
              Apr 18 '12 at 13:59
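
            For reference, the layout ewwhite describes maps onto two commands; the pool name and device paths are hypothetical (a PCIe flash card for the intent log, an MLC SAS SSD for the read cache):

                # PCIe flash as dedicated ZIL, SAS SSD as L2ARC.
                zpool add tank log c6t0d0
                zpool add tank cache c7t0d0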
















            I sell these - many of the P812 and D2700 controllers and shelves. I have put all brands of SSDs in them, HGST-branded SAS and Samsung SATA. They all work fine. SAS is SAS and SATA is SATA. It's the label you pay for, and the qualification done before an HP label is placed on the drive. HP/DEC/Compaq tried scare tactics a long time ago with non-HP/DEC/Compaq SCSI drives, and much of the time those drives were only relabelled Fujitsu units, not even carrying different firmware. HP makes nothing any longer: Intel and LSI make their controller products, and Broadcom makes almost all the HBAs. You will be fine. In fact, in HP Integrity servers these Samsung Pro SATA drives are fast, fast, fast.

              – David T (new contributor)
              answered 10 mins ago