
Thread: Exclusive Look at Fusion-io ioDrive - PCIe Solid State




  1. #1
    Join Date
    Nov 2006
    Location
    TweakTown Forum
    Posts
    5,580

    Default Exclusive Look at Fusion-io ioDrive - PCIe Solid State

    Please feel free to comment about our story entitled "Exclusive Look at Fusion-io ioDrive - PCIe Solid State"

  2. #2
    Join Date
    Sep 2002
    Location
    Secret Bunker
    Posts
    140

    Default Re: Exclusive Look at Fusion-io ioDrive - PCIe Solid State

I already know it is coming: "How could you say a $3K, 80GB drive is a value?"

Please remember that this is a server product, and from time to time I tend to dabble in this area. So, to answer the question before it is asked: eight Seagate 15K.6 drives plus an Areca controller, at online pricing, cost more than two ioDrives. The benchmarks show the performance comparison.

If I had $10K to build a new computer and the drives were already bootable, I would spend $3K of it on an ioDrive and tone down some of the other stuff to make budget. 1.5 TB drives are cheap (review coming this week) and one would serve as a storage drive.

  3. #3
    Join Date
    Dec 2008
    Posts
    1

    Default Re: Exclusive Look at Fusion-io ioDrive - PCIe Solid State

Please upload a video of it loading a level of Crysis and/or blending. Then we'll be happy.

    More seriously, I cannot wait for the days of 10GB/s, powered HDD cables.

    Thanks for the writeup.

  4. #4
    Join Date
    Dec 2008
    Posts
    1

    Default Re: Exclusive Look at Fusion-io ioDrive - PCIe Solid State

    I would also love to see some benchmarks on OS loading speeds, be that XP or Vista.

    Thanks

  5. #5
    Join Date
    Dec 2008
    Posts
    5

    Default Re: Exclusive Look at Fusion-io ioDrive - PCIe Solid State

    Could you please specify the driver (1.2.0 or 1.2.1?) and firmware (117911.fff ?) Thank you

  6. #6
    Join Date
    Sep 2002
    Location
    Secret Bunker
    Posts
    140

    Default Re: Exclusive Look at Fusion-io ioDrive - PCIe Solid State

    Good questions.

I can't get a real OS load speed since the ioDrive is not yet bootable. I did hear just a little while ago that DVNation will be providing another card for long-term testing and to help develop / test new features. I know booting from the card is high on Fusion-io's list; at one point I heard Q4, but that has slipped into Q1 or Q2. I don't really mind waiting, since the tech is so much more than the baby steps we get from traditional products.

As for the driver and firmware, I will need to get that for you tonight. I also test NAS products and moved all of the data off of my personal NAS so I could test it. To keep down the number of extra drives I needed, I compressed everything. Now I am in the process of moving 3.5 TB back.

  7. #7

    Default Re: Exclusive Look at Fusion-io ioDrive - PCIe Solid State

    The Areca 1680ix series is able to connect to both SATA and SAS drives. Why was the ARC-1231 used?

As it is, the 1680 has a higher throughput. Also, which 1680 card was used? Was the on-card RAM upgraded? Was the battery backup used (it allows options for how the write cache is handled, I believe)?

Very interesting review. Can't say that I'll afford a Fusion-io drive, but when I RAID 0 a couple of Intel X25-M drives with a 1680ix-12, I'll be sure to refer back to your review to see where I stack up. :)

    Keep up the good work!

  8. #8
    Join Date
    Sep 2002
    Location
    Secret Bunker
    Posts
    140

    Default Re: Exclusive Look at Fusion-io ioDrive - PCIe Solid State

Areca Technology Corporation

In the second group down: the 1680I. It is now listed on Areca's site as the "1680-i", though I could have sworn it was the "1680i" when I double-checked that section.

Anyhow, the card used is an 8-port SAS controller with fixed cache and without the battery backup installed. You can actually go in and turn on write caching even without the battery backup, and it was enabled.

The same night after I finished this review, I pitched Intel about putting 4 X25-M drives on a RAID controller to see how they compared. With the drives costing a little over $500 each, it would be a dollar-for-dollar performance comparison: $2K in drives, $1K for the controller. This doesn't really take size or power into consideration, so there are still a few drawbacks not figured in. On the other side of the fence, it is proven technology, and that goes a long way with system admins.

    If you get your array setup before I do please come back and share some numbers. In a near future article I will show how the tests are performed so everyone is able to run their own tests under the exact same conditions.

  9. #9
    Join Date
    Sep 2002
    Location
    Secret Bunker
    Posts
    140

    Default Re: Exclusive Look at Fusion-io ioDrive - PCIe Solid State

    OK, I now have the driver information.

    DriverVer=07/01/2008,6.0.6001.18000

    This was taken from the INF file.

  10. #10
    Join Date
    Dec 2008
    Posts
    1

    Default Re: Exclusive Look at Fusion-io ioDrive - PCIe Solid State

    I have tested these Fusion cards extensively. They slow down the more you write to them. Giving them idle time allows them to regain some - but not all - of their performance. Part of the reason for the slowdown is that to write a block of flash, you must erase then write. The erase is slow. So in the background, a "groomer" process is erasing empty blocks that are allocated to the "reserved" space.

    When formatting the card (which restores the write performance completely, temporarily, as it erases every block), you can choose a size. If you choose a smaller size, you increase the amount of reserved space.

    Formatting with extra reserved space vastly mitigates the write slowdown.

    You basically have to choose a tradeoff between write performance and size.
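The erase-before-write tradeoff described above can be sketched with a toy simulation. This is purely illustrative (the block counts and groomer rate are made-up numbers, not Fusion-io specs): each tick attempts one block write, which consumes a pre-erased block, while a background "groomer" reclaims dirty blocks at a fixed rate.

```python
def simulate_writes(reserved_blocks, total_writes, erases_per_tick):
    """Toy model of erase-before-write: each tick attempts one block
    write, which consumes a pre-erased block; a background 'groomer'
    erases dirty blocks at `erases_per_tick` per tick. Returns how many
    writes completed, a rough proxy for sustained throughput."""
    erased = float(reserved_blocks)  # pool of ready-to-write blocks
    dirty = 0.0                      # written blocks awaiting erase
    done = 0
    for _ in range(total_writes):
        reclaimed = min(dirty, erases_per_tick)  # groomer runs in background
        dirty -= reclaimed
        erased += reclaimed
        if erased >= 1.0:            # a write needs a pre-erased block
            erased -= 1.0
            dirty += 1.0
            done += 1
    return done

# A larger reserve sustains peak speed longer before the groomer
# becomes the bottleneck (illustrative numbers only):
big_reserve = simulate_writes(reserved_blocks=100, total_writes=1000,
                              erases_per_tick=0.5)
small_reserve = simulate_writes(reserved_blocks=10, total_writes=1000,
                                erases_per_tick=0.5)
```

Once the erased pool empties, throughput collapses to the groomer's rate regardless of reserve size; the reserve only buys a longer burst at full speed, which matches the "size vs. write performance" tradeoff above.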

What if you are giving the card time to recover write performance? Understanding this requires time-consuming benchmarking, because you have to allow significant amounts of idle time between tests. Fusion does not give much guidance about the rate at which the cleanup occurs; however, they will claim that the slowdown is only relevant to benchmarks.

However, I have a real-world workload that gradually slowed down every day for two weeks on a 160GB card, despite 12 hours a day of idle time.

    You should probably run more tests, focusing on write performance and keeping an eye out for a slowdown.
