
Exclusive Look at Fusion-io ioDrive - PCIe Solid State



Comment Bot
12-10-2008, 02:56 PM
Please feel free to comment about our story entitled "Exclusive Look at Fusion-io ioDrive - PCIe Solid State (http://www.tweaktown.com/reviews/1683/exclusive_look_at_fusionio_iodrive_pcie_solid_state/index.html)"

chrisram
12-10-2008, 03:24 PM
I already know the question that is coming: "How can you call a $3K, 80GB drive a value?"

Please remember that this is a server product, and from time to time I tend to dabble in this area. So, to answer the question before it is asked: eight Seagate 15K.6 drives plus an Areca controller cost more, at online pricing, than two ioDrives. The benchmarks show the performance comparison.

If I had $10K to build a new computer and the drives were already bootable, I would spend $3K of it on an ioDrive and tone down some of the other parts to make budget. 1.5TB drives are cheap (review coming this week), and one of those would serve as a storage drive.

Amun
12-11-2008, 06:58 PM
Please upload a video of it loading a level of Crysis and/or rendering in Blender. Then we'll be happy.

More seriously, I cannot wait for the days of 10GB/s, powered HDD cables.

Thanks for the writeup. :thumbsup:

x31forest
12-11-2008, 11:44 PM
I would also love to see some benchmarks on OS loading speeds, be that XP or Vista.

Thanks

efranchi
12-12-2008, 01:34 AM
Could you please specify the driver version (1.2.0 or 1.2.1?) and the firmware (117911.fff?). Thank you.

chrisram
12-12-2008, 03:54 AM
Good questions.

I can't get a real OS load speed yet since the ioDrive is not bootable. I heard just a little while ago that DVNation will be providing another card for long-term testing and to help develop and test new features. I know booting from the card is high on Fusion-io's list; at one point I heard Q4, but that has slipped into Q1 or Q2. I don't really mind waiting, since the tech is so much more than the baby steps we get from traditional products.

As for the driver and firmware, I will need to get that for you tonight. I also test NAS products and moved all of the data off of my personal NAS so I could test it. To keep down the number of extra drives I needed, I compressed everything. Now I am in the process of moving 3.5TB back.

GameTraveler
12-12-2008, 06:33 AM
The Areca 1680ix series is able to connect to both SATA and SAS drives. Why was the ARC-1231 used?

As it is, the 1680 has a higher throughput. Also, which 1680 card was used? Was the on-card ram upgraded? Was the battery backup used (allows options for how the write cache is handled, I believe)?

Very interesting review. I can't say I'll be able to afford a Fusion-io drive, but when I RAID 0 a couple of Intel X25-M drives with a 1680ix-12, I'll be sure to refer back to your review to see where I stack up. :)

Keep up the good work!

chrisram
12-12-2008, 12:18 PM
Areca Technology Corporation (http://www.areca.com.tw/products/pcietosas1680series.htm)


The card is in the second group down: the 1680i. It is now listed on Areca's site as the "1680-i", though I could swear it was written "1680i" when I double-checked that section.

Anyhow, the card used is an 8-port SAS controller with fixed cache and without the battery backup installed. You can actually go in and turn on write caching even without the battery backup, and it was enabled.

The same night I finished this review, I pitched Intel about putting four X25-M drives on a RAID controller to see how they compared. With the drives costing a little over $500 each, it would be a dollar-for-dollar performance comparison: $2K in drives, $1K for the controller. This doesn't really take size or power into consideration, so there are still a few drawbacks not figured in. On the other side of the fence, it is proven technology, and that goes a long way with system admins.

If you get your array set up before I do, please come back and share some numbers. In a near-future article I will show how the tests are performed so everyone is able to run their own tests under the exact same conditions.

chrisram
12-15-2008, 07:25 AM
OK, I now have the driver information.

DriverVer=07/01/2008,6.0.6001.18000

This was taken from the INF file.

carlonyto
12-16-2008, 11:48 PM
I have tested these Fusion cards extensively. They slow down the more you write to them. Giving them idle time allows them to regain some - but not all - of their performance. Part of the reason for the slowdown is that to write a block of flash, you must erase then write. The erase is slow. So in the background, a "groomer" process is erasing empty blocks that are allocated to the "reserved" space.

When formatting the card (which restores the write performance completely, temporarily, as it erases every block), you can choose a size. If you choose a smaller size, you increase the amount of reserved space.

Formatting with extra reserved space vastly mitigates the write slowdown.

You basically have to choose a tradeoff between write performance and size.

What if you are giving the card time to recover write performance? Understanding this requires time-consuming benchmarking, because you have to allow significant amounts of idle time between tests. Fusion-io does not give much guidance about the rate at which the cleanup occurs; they will, however, claim that the slowdown is only relevant to benchmarks.

However, I have a real-world workload that gradually slowed down every day for two weeks on a 160GB card, despite 12 hours a day of idle time.

You should probably run more tests, focusing on write performance and keeping an eye out for a slowdown.
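
For anyone who wants to reproduce this, here is a minimal sketch in Python of the kind of repeated sustained-write test I mean. The path, pass size, and idle interval are placeholders to tune for your card; point it at a scratch file, never at data you care about.

    import os, time

    PATH = "/mnt/iodrive/scratch.bin"   # placeholder: a scratch file on the card
    BLOCK = 1024 * 1024                 # write in 1MiB chunks
    PASS_SIZE = 4 * 1024**3             # 4GiB of writes per pass
    IDLE_SECONDS = 600                  # idle time between passes for the groomer

    data = os.urandom(BLOCK)
    for i in range(10):
        start = time.time()
        with open(PATH, "wb", buffering=0) as f:
            for _ in range(PASS_SIZE // BLOCK):
                f.write(data)
            os.fsync(f.fileno())        # force the writes out to the card
        mb_s = (PASS_SIZE / (1024 * 1024)) / (time.time() - start)
        print("pass %d: %.1f MB/s" % (i, mb_s))
        time.sleep(IDLE_SECONDS)        # let the card recover (or not) between passes

If write throughput drops pass over pass and the idle time does not bring it back, you are seeing the same slowdown I describe above.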

efranchi
12-17-2008, 08:31 PM
Thank you, chrisram, for the information.
Just a question about the IOPS: are you sure the block size is 512MB and not 512B? File servers (file/print servers, Exchange email, Notes, decision support systems) normally use block sizes in the 4KB-64KB range.
Web servers (web services, blogs, RSS feeds, Caddy, search engines, storage services) use block sizes from 512B to 512KB.

Anyway, about the Fusion-io:
I saw the same "write amplification" problem with driver version 1.2.0 (the first official driver release); here (http://forum.ssdworld.ch/viewtopic.php?f=1&t=59) you can see all the details of the ioDrive's slowdown benchmarks with the 1.2.0 drivers.
Thank you, carlonyto, for the great explanation. I had thought the problem was fixed with the new 1.2.1 driver release. I haven't tested it with the new drivers yet, but from what carlonyto says, I gather the problem still exists.

As you can read in the linked topic, this slowdown problem is also typical of the Intel MLC drives, where the slowdown is permanent (you have to run a Secure Erase to restore an Intel MLC drive to its original performance).
Today I finished testing the Intel X25-E SLC, and the Intel SLC drive has the same slowdown problem, but it regains all of its performance quickly and easily. (If you want more details about the Intel SLC slowdown, look in the same linked forum; you'll find it soon in the test evaluation section.)

I noted two relevant points:

1) A single Intel SLC drive has about the same IOPS as the Fusion-io, depending on the access specification. A single Intel SLC drive reaches 34,086.76 IOPS at a 512B block size with 80% read / 20% write and 16 outstanding I/Os. At the same access specification, the ioDrive with the 1.2.0 driver managed "just" 24,515.11 IOPS (and for comparison, a Western Digital Caviar HDD manages 127.63 IOPS).

2) The Intel SLC drive regains its performance very quickly (just run HDTach in full write mode).
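
As a quick sanity check on the 512MB-versus-512B question above: IOPS times block size gives throughput, and the quoted figures are only plausible at small block sizes. Plain arithmetic in Python, using only the numbers from this thread:

    def throughput_mb_s(iops, block_bytes):
        return iops * block_bytes / (1024.0 * 1024.0)

    print(throughput_mb_s(34086.76, 512))            # ~16.6 MB/s at 512B: plausible
    print(throughput_mb_s(34086.76, 512 * 1024**2))  # ~17.5 million MB/s at 512MB: impossible

Which is why I suspect 512B is what was meant in those access specifications.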

As for the Areca, I think it is a great card. For less than $800 you can easily work with a 2GB cache (even 4GB) at 1600/1500 MB/s read/write, no matter which drives you put in the RAID, even two HDDs (when you enable the cache, you had better have a battery backup or a UPS). Once the cache is full you slow down to the drives' own performance and access times, but it is very difficult to fill a 4GB cache! So for less than $1,000 you have something like a HyperDrive4.
But if you build an array with eight Intel SLC drives, you are better off turning the cache off (just two Intel SLC drives in RAID 0 reach 500MB/s reads!).

I think it is fine for a drive to have no cache only if the drives work faster than the cache would (like 8x Intel SLC). So why does the ioDrive not have a cache? Its NAND works at "only" 700MB/s, and for less than $800 you can buy an Areca that lets you work with a 4GB cache at 1600MB/s.

JAJansenJr
02-24-2009, 06:15 PM
I would like to know if there is a way to add a PCI Express slot to an older PC that has only PCI slots. I sure would like to use a Fusion-io board, because every PC I have ever worked with has run poorly (too slowly) because of the drag of a slow hard drive.

I suppose I am going to have to buy a newer PC in order to benefit from a Fusion-io board. What specs does a new PC need for this?

The whole paradigm for accessing a hard drive needs to be readdressed. I once wrote a database application in VB6, and it ran too slowly to be used. I rewrote it using DAO (Data Access Objects, available in VB6) to build a RAM table of record IDs and physical locations. Searches for a record were then conducted in RAM, and DAO was used to "pluck" the physical record as needed. The result was effectively instantaneous execution, because searching on the hard drive was eliminated.
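
Here is a minimal sketch of that same idea in Python rather than VB6/DAO; the fixed-width record layout and names are made up for illustration:

    import struct

    RECORD_FMT = "<I60s"                       # hypothetical: 4-byte ID + 60-byte payload
    RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 64 bytes per record

    def build_index(path):
        # One sequential pass over the file: map record ID -> byte offset.
        index = {}
        with open(path, "rb") as f:
            offset = 0
            while True:
                chunk = f.read(RECORD_SIZE)
                if len(chunk) < RECORD_SIZE:
                    break
                record_id, _payload = struct.unpack(RECORD_FMT, chunk)
                index[record_id] = offset      # the search key lives in RAM
                offset += RECORD_SIZE
        return index

    def fetch(path, index, record_id):
        # Search in RAM, then seek straight to the one record on disk.
        with open(path, "rb") as f:
            f.seek(index[record_id])           # O(1) lookup, no disk scanning
            return struct.unpack(RECORD_FMT, f.read(RECORD_SIZE))

The disk is touched exactly once per lookup, which is the whole trick.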

Better hard drives, such as the fusion-io, are needed but so are better access techniques.

Will the fusion-io products become available for end users at affordable prices?

efranchi
02-24-2009, 08:16 PM
Yes, the Fusion-io is a good solution, and it works in a standard PCI slot, though not at top performance. You will also need an additional 2GB of RAM for every 80GB of Fusion-io.

JAJansenJr
02-24-2009, 08:52 PM
It may seem backwards to want to use a Fusion-io in a standard PCI slot, but my current system is working pretty well, except that the hard drive is too slow.

I was unaware, though, that you could plug a PCI Express device into a PCI slot. Electrically, PCI is a parallel interface and PCI Express is high-speed serial. But maybe the devices are "smart" and recognise the slot they are plugged into. If the Fusion-io is backward compatible with PCI, that would be great for me.

I need to look into this further. I have a request for info posted on the fusion-io web site.

chrisram
02-27-2009, 02:01 PM
As far as I know, you can't plug a PCIe card into a PCI slot. The voltages are wrong, and nothing lines up to even attempt it.