
Thread: SSD vs SAS for performance




  1. #1
    Join Date
    Nov 2009
    Location
    St. Clair Shores, Mi.
    Posts
    89

    Default SSD vs SAS for performance

I have been doing a lot of research on my next storage upgrade: SSD vs 15K SAS drives. My controller is an LSI Logic MegaRAID 8708EM2 SAS/SATA RAID controller with 128MB of battery-backed DDR667 cache. I am planning a Windows 7 x64 Ultimate upgrade.

    I have never heard of Runcore in 27 years of workstation and server building.

SSDs don't work on many controllers. On my cached controller, I have to disable the write cache. I have talked with LSI Logic, Kingston, Seagate, and others. I am surprised Seagate has not released an SSD; they usually don't release a drive unless it's perfected. I was looking at the Kingston V+ drives, which are similar to this company's SATA-based drives rated at 240MB/s reads and 170MB/s writes.

    My current Seagate Savvio 2.5" 146GB 10K.2 SAS drives in RAID 1 on the 8708EM2 outperform SSD drives.

    Benchmark Results
    Drive Index : 104.84MB/s
    Results Interpretation : Higher index values are better.
    Random Access Time : 1ms
    Results Interpretation : Lower index values are better.

    Windows Experience Index
    Current Drive : 5.9
    Results Interpretation : Higher index values are better.

    Performance Test Status
    Run ID : LSI MegaRAID 8708EM2 146GB (RAID, NCQ)
    Platform Compliance : x64
    System Timer : 3.58MHz
    Operating System Disk Cache Used : Yes
    Use Overlapped I/O : Yes
    I/O Queue Depth : 4 request(s)
    Test File Size : 16GB
    File Fragments : 4
    Block Size : 1MB

    Detailed Benchmark Results
    Buffered Read : 1.01GB/s
    Sequential Read : 119.73MB/s
    Random Read : 128.25MB/s
    Buffered Write : 679.29MB/s
    Sequential Write : 31.91MB/s
    Random Write : 31.21MB/s
    Random Access Time : 1ms

    Vista x64 Ultimate installs from a Sony USB 2 DVD drive in 16 minutes. My A/V and CAD applications are incredibly fast on RAID 1.

I can upgrade to the new Seagate 15K.7 300GB SAS drives for $381 each, or the 146GB 15K.6 SAS drives for $180 each. On my 8708EM2, that would roughly double my performance, most likely maxing out the controller/PCIe x8 slot. I am looking at removing the 8-drive enclosures to quiet the chassis down. The HAF 932 can hold the 3 RAID 1 pairs plus the spare I am upgrading to.

What exactly would the benefits of your SSDs be? On newer motherboards I am seeing SAS controllers coming standard. Will you release a SAS SSD? What is your success rate on hardware installations? What is the firmware/password issue I see causing user problems over and over? Do you have software to refresh your SSDs? What is your warranty period?

  2. #2
    Join Date
    Apr 2009
    Location
    J'Habite En Angleterre
    Posts
    2,431

    Default Re: SSD vs SAS for performance

Seagate, WD, etc. haven't released SSDs because they're too heavily invested in magnetic storage.

RunCore has no say over whether they'll release SAS SSDs. Like every SSD manufacturer except Intel, Samsung, and maybe a few other non-mainstream players, they build their SSDs from off-the-shelf parts. As with all the other manufacturers, RunCore relies on the controller makers to innovate.

This is why comparing SSDs by brand is a pitfall. You need to compare them by the controller chip they use.

Take Kingston, for example. Their original V Series is JMicron-based, their V+ is Samsung-based, and the new V+ 40GB Boot Drive is an Intel X25-M with half the NAND removed.

RunCore's Pro IV series is based on an Indilinx Barefoot controller. These controllers are identical to the ones other brands use; in most cases the brands are using the reference boards Indilinx designed, too. The only one to design their own PCB is SuperTalent. So if you're going for an Indilinx-based drive, the things to consider are price, customer service, and warranty. As the hardware is the same, those are the key factors, as I'm sure you're aware from the intelligent questions you're asking.

The advantages over SAS would be in access time/latency and random IOPS. I'm sure SAS 15K drives can more than keep up with SSD sequential speeds of ~250MB/s read and ~200MB/s write. The key advantage is that SSDs have virtually zero latency and a much higher random IOPS rate. For example, my SSD can randomly write (4K) at 55MB/s, compared to my regular SATA WD AAKS, which struggles at 2.4MB/s. I'm not sure of the exact figures for a SAS drive: certainly faster than my puny WD, but I bet not as quick as my SSD.
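To put those figures in IOPS terms, here's a quick back-of-envelope conversion (a minimal Python sketch; the MB/s numbers are just the ones quoted above, and real results vary by drive and queue depth):

[CODE]
# Convert 4K random-write throughput (MB/s) into I/O operations per second.
KIB_PER_MIB = 1024

def iops(mb_per_s: float, block_kib: float = 4) -> float:
    """IOPS for a given throughput and block size."""
    return mb_per_s * KIB_PER_MIB / block_kib

for name, mbps in [("SSD 4K random write", 55.0),
                   ("WD AAKS 4K random write", 2.4)]:
    print(f"{name}: {mbps} MB/s ~= {iops(mbps):,.0f} IOPS")

# SSD: ~14,000 IOPS; HDD: ~600 IOPS. The gap is seek latency, not bandwidth.
[/CODE]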

Also consider how much use the array will get. Flash has a limited life. MLC flash stores multiple bits per cell, whereas SLC stores just a single bit. Due to how it's written to, overheads, etc., MLC has a life of 10,000-30,000 cycles before the flash can no longer be erased/written and becomes read-only. SLC is more like 100,000 cycles. You will also notice that SLC drives can write faster and their sequential figures are virtually symmetrical.
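As a rough sketch of what those cycle counts mean in practice (all the inputs here are hypothetical round numbers; real controllers complicate this with spare area and varying write amplification):

[CODE]
# Back-of-envelope SSD lifetime from P/E cycles -- hypothetical inputs.
capacity_gb = 128             # drive capacity
pe_cycles = 10_000            # MLC erase/program cycles (SLC: ~100,000)
host_writes_gb_per_day = 50   # how hard the drive is hammered
write_amplification = 2.0     # physical writes per logical write (varies)

endurance_gb = capacity_gb * pe_cycles / write_amplification
lifetime_years = endurance_gb / host_writes_gb_per_day / 365
print(f"~{lifetime_years:.0f} years at {host_writes_gb_per_day} GB/day")
# ~35 years with even wear at 50 GB/day; heavy-IO workloads shrink this fast.
[/CODE]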

This article explains the issues of wear leveling in detail: The Brave New World of SSDs - LostCircuits

SSDs will try to write the same number of times to every block rather than write constantly to the same areas and burn them out too quickly. This gives way to multiple issues, such as the fact that LBA locations on an SSD can dynamically change. E.g. sector 1 on an HDD is right next to sector 2; on an SSD, sector 1 could be right in the middle of the flash bank and sector 2 at the end. Location matters little due to the virtually zero latency. What does matter is that it makes it difficult for the OS to tell the SSD what data has actually been deleted by the user and can be overwritten.
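A toy model of that remapping (not any real controller's algorithm, just the idea that logical sectors float around physically):

[CODE]
# Toy flash translation layer: logical sector -> physical flash page.
# Every overwrite goes to a fresh page; the old page becomes stale junk
# that only the controller knows about -- which is exactly why the OS
# needs a way (TRIM) to say what has really been deleted.
free_pages = list(range(8))        # physical pages available
mapping = {}                       # logical sector -> physical page
junk = set()                       # stale pages awaiting erase

def write(sector: int) -> None:
    if sector in mapping:
        junk.add(mapping[sector])  # old copy is now garbage
    mapping[sector] = free_pages.pop(0)

write(1); write(2); write(1)       # rewrite logical sector 1
print(mapping)                     # {1: 2, 2: 1} -- sector 1 moved physically
print(junk)                        # {0} -- garbage the OS can't see
[/CODE]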

Flash has to be erased and rewritten in full 128K blocks. If there is junk in the block, it takes an extra erase/write cycle to get rid of it, slowing things down. This "write amplification" is what causes the so-called performance degradation.
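To put a number on it, write amplification is just physical bytes written divided by the logical bytes the host asked for (illustrative figures matching the 128K block size above):

[CODE]
# Write amplification = physical flash writes / logical host writes.
# Example: the host updates 4 KiB inside a 128 KiB block holding junk,
# forcing the controller to rewrite the whole block.
host_write_kib = 4
block_kib = 128

wa_clean = 1.0                          # empty block: write just the 4 KiB
wa_dirty = block_kib / host_write_kib   # dirty block: rewrite all 128 KiB
print(f"WA clean: {wa_clean}, dirty: {wa_dirty}")   # 1.0 vs 32.0
# 32x the flash traffic for the same host write -- hence the slowdown.
[/CODE]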
    Coolermaster CM 690 II advance Case
    Corsair HX750 (CWT, 91%(80+ Gold rated @230V) single 62A 12V rail
    P55A-UD4 v2.0 @ F14
    Core i5 760 @ 20 x 201, 4.02GHz
TRUE Black with a single Noctua NF-P12 pumping out 55 CFM @ 19dB
    2 x 2GB Mushkin Ridgeback (996902), @ 7-10-8-27, 2010-DDR, 1.66v
    2 x Gigabyte GTX 460 1024MB in SLI (Pre OC'd to 715MHz core and 1800MHz VRAM) @ 850 Core / 4100 Mem.
    Intel X25-M Boot Drive (OS and Programs) 200MB/s Read & 90MB/s Write
    Corsair X32 200MB/s Read & 100MB/s Write
    WD Caviar Blue 640GB C (Steam, Games, Storage, Temp Files & Folders, etc)
    Samsung F3 500GB Backup/Images
    Noctua 1300RPM 19dB case fan (rear extraction)
    3 x 140 MM Coolermaster LED fans (one front intake, one top extraction, one side intake)
    Dell Ultra Sharp 2209WAf E-IPS @ 1680x1050

  3. #3
    Join Date
    Nov 2009
    Location
    St. Clair Shores, Mi.
    Posts
    89

    Default Re: SSD vs SAS for performance

Seagate has not released an SSD because they are very conservative in their designs. I know a few engineers at Seagate. SSDs have too many compatibility and write cache issues that Seagate will want to work out before committing to cutting-edge technology. The SAS 15K.7 drives are much faster than SSDs and more reliable.

Kingston is a memory manufacturer. They can't even figure out how to lessen the degradation issues on SSDs.

I am an electrical engineer. I look at drives from an engineering standpoint, not from the hype. The latency gains are great for a corporation with 500 users. My SAS drives benchmark at 1ms access time on my 8708EM2. You will not notice 3ms of latency.

My work is heavy on storage I/O. I was burning out IDE and SATA drives at the rate of one per month, and I have proper power and cooling for the drives. They are not designed for 12 hours of continuous use the way SAS drives are. SSDs would not last because they are like CompactFlash: about 10,000 writes per cell. I can do a few billion storage reads/writes per day.

For a home user, what is the difference between less than 1ms and 1ms? You are talking about seconds of performance increase for most application loads.

    SAS 15K.7 6 Gbit/s
    300 GB @ $381 each
    1.49 - 2.37 Gbit/s (186 - 309 MB/s)

These drives cost slightly more than SSDs and are faster.

    SAS 15K.6
    146GB @ $180
    110 - 164 MB/s

Half the price of the fastest SSD and only 25% slower. And on a cached SAS controller, they sustain high transfer speeds and do not degrade.

The problems with SSDs: compatibility with controllers, performance degradation over a short period of usage, having to disable many OS features to protect the drive's performance, and limited life under heavy storage I/O.

Do you want a drive that will last 5 years without problems, or an SSD that starts degrading within months? Even if you don't defrag a SAS drive, you don't lose that much performance. Read past all the hype on SSDs, just like with video cards.

  4. #4
    Join Date
    Apr 2009
    Location
    J'Habite En Angleterre
    Posts
    2,431

    Default Re: SSD vs SAS for performance

An SSD won't start degrading in months at the hardware level. Performance degradation is due to write amplification and the fact that it takes longer to deal with a block that hasn't been flagged as empty even though it now contains useless data.

You asked for the advantages of SSDs and I gave you that information. There's no hype anywhere in my post, just solid facts. Whether latency makes a difference or not is beside the point; I merely pointed out that it's lower on SSDs, as it is.

No disrespect, but I can't help but wonder why you ask about the advantages of an SSD and then proceed to tell me how poor they are compared to SAS drives. Don't forget that you already have a SAS controller; if someone else doesn't, the cost has to factor one in. I'm sure you don't get much change out of 100-150 for a decent SAS controller with cache. Stack enough spinners and you'll get monster sequential speeds. But can any SAS drive do a constant 260MB/s per drive across the whole disk? Three drives completely bog down my ICH10R at its ~650MB/s limit.

Like I said, if lifetime is a concern, buy a drive with SLC flash. If performance degradation is a concern and you're running RAID, buy a drive with garbage collection and block consolidation. Better still, get a set of X25-E SLC drives; their wear leveling is tuned in such a way that performance degradation stays within 10% of a "fresh" drive.

I posted to answer your questions by outlining key points. It isn't meant to be utterly comprehensive. It's not "impartial" because the subject is solely SSDs, not a comparison of SSD vs HDD. It sounds like you're already running the ideal solution for you. RunCore aren't going to know much about the next SSD evolution. The next generation of drives is much more likely to use SATA 6G than SAS; even corporate drives will probably use this interface. The bottom line is that controller manufacturers supply the mainstream user, corporations, and the expert user, and will go for the lowest common denominator. Supporting SAS would require dedicated engineering, and I doubt they would make a controller that supports both, as it'd add to the cost of production.

As for Seagate not making SSDs, sure, the engineering aspect may be part of the reason, but the fact that they're a major magnetic storage player is a factor. It takes big bucks, clean rooms, and ultra-precise engineering to make magnetic storage. A lot of money and time is invested: research, the development of perpendicular recording in 2004-2005, etc. They aren't going to go into the SSD business unless they know they can dominate without devaluing their current investment. They care about bringing good quality products to market, I believe that, but they are a business first and foremost.

An SSD is the best bang-for-buck performance upgrade any regular desktop user can make, IMO. If you're using one for any other purpose, such as intensive I/O, then consider your options and think carefully before you buy. Boot time going from 65 seconds to 35 seconds, my applications loading almost instantly, plus the complete lack of HDD crunching at startup has left me extremely happy with my investment. SSDs have their issues, of course they do; it's an emerging tech. It's a little unfair to be too harsh on them compared to HDDs that have been around for decades. One of the first HDDs cost the equivalent of $190,000 per gigabyte on release. I think SSDs are not bad value for what you get, seeing as their time in the mainstream is so short.

This post and my previous one are not adverts for SSDs. There are other storage media which may be superior for your particular purpose.

  5. #5
    Join Date
    Nov 2009
    Location
    St. Clair Shores, Mi.
    Posts
    89

    Default Re: SSD vs SAS for performance

I look at the complaints about SSDs on many forums. I have seen many new boards with SAS controllers. I bet by next year you will see SAS RAID on board many non-server boards.

Probably 2-3 generations from now, SSDs will have all the issues worked out. It was like SATA when it first started: not until the 5th generation of SATA did the cache work properly and the performance issues get resolved. SSDs are getting closer to being a good replacement.

I remember similar arguments when the 10K SATA Raptors were replacing SCSI. I have seen many users have serious issues with the new 6Gbps SATA drives failing. I think they are pushing SATA technology beyond the ability to make a quality cheap drive. I have seen many people switch to SAS as the prices drop. LaCie FireWire 800 drives are even more reliable than the new SATA drives being released. Some of the new 7200RPM SATA drives are having reliability issues and dying in months.

The problem is: how cheap can you make a drive and still maintain reliability? Most users can't back up their computers any more; they can't afford extra external drives, or it takes too long to accomplish over USB 2. I hope USB 3 helps with the slow writes and overhead of USB 2.

Why am I asking about SSDs? I am getting ready to upgrade my workstation. SSDs are lower voltage, produce little heat, and are quiet. Price is not a huge concern for me; the issue is reliability and performance. I do A/V and CAD work that uses a lot of storage I/O per day. I can crucify a hard drive over 12-18 hours of constant reads/writes. I have read a lot about SSDs and have seen high-end ones in production SAN arrays. I am curious to see how they will resolve the write performance issues and how they handle Windows Vista and 7; both OSes have performance boosts that get disabled with SSDs. From an EE standpoint, I am always interested in new technology and how design issues are overcome.

You mention wear leveling and 10% degradation versus new. On SAS and SATA, you can defrag the drives with O&O Defrag, and its algorithm can rearrange the drive to place files in faster read/write locations. What about the 10,000 read/write cycles per cell? I have seen CompactFlash cards in my cameras fail because of this. I would expect them to develop an algorithm that spreads data across the drive equally to limit this issue: keep moving writes further along the memory to avoid constantly reusing the same area, unlike on a magnetic hard drive. On an SSD, no cell should be slower than the next; the first and last cells should have the same performance. It is not like a hard drive, where the inner tracks have a smaller diameter than the outer tracks.

  6. #6
    Join Date
    Apr 2009
    Location
    J'Habite En Angleterre
    Posts
    2,431

    Default Re: SSD vs SAS for performance

I agree completely with all your points re: SSD drawbacks. Don't get me wrong and think I'm an SSD fanboy. Everyone is different and has different usage patterns, meaning that some highly specialized users would be much worse off using SSDs than even a regular 7200RPM 80-120MB/s R/W consumer drive.

Current wear leveling algorithms, at the most basic level, do exactly as you describe in your last paragraph: they make sure no one flash block gets undue wear. The algorithms vary in how aggressively they do this.
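At its most basic, the policy can be pictured like this (a toy sketch, not any vendor's actual algorithm): always hand out the free block with the fewest erases, so wear stays even.

[CODE]
# Toy wear leveling: write to the least-worn free block every time.
erase_counts = {block: 0 for block in range(4)}   # per-block erase tally
free_blocks = set(erase_counts)

def allocate() -> int:
    """Pick the free block with the fewest erases (least-worn)."""
    block = min(free_blocks, key=erase_counts.__getitem__)
    free_blocks.remove(block)
    return block

def erase(block: int) -> None:
    erase_counts[block] += 1
    free_blocks.add(block)

for _ in range(8):        # hammer the "drive" with writes
    erase(allocate())
print(erase_counts)       # {0: 2, 1: 2, 2: 2, 3: 2} -- wear stays even
[/CODE]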

Take my Intel drive as an example. It is very aggressive in its use of free space: it will completely use up all available "clean" blocks before even considering one that is partially filled with data or totally/partially filled with junk. As you know, when a file is deleted in the OS/by the user on a magnetic drive, all that's removed is the reference to the file. The actual data is still there until you write over the top of the location, unless you use a shredder app to overwrite it with random data (secure-erase style, not the ATA command Secure Erase). This is why file recovery apps work.

Flash is the same, but the drawback is that before a cell can be written to, it must at least be marked as erased/free. Write amplification is the cause of performance degradation. Instead of a write just being a case of "Write Operation", on a block with trash in it, it becomes "Copy block to cache > Erase block > Sort out what data in cache is valid > Combine with current data to be written > Write Operation". As you can see, this is much more intense. TRIM lets the OS tell the controller what files have been deleted and vice versa, reducing most writes back to "Write Operation" even on trashy blocks.
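Sketched as code, the two paths look like this (a toy model on a list of pages, not real firmware; the 33-vs-1 operation counts just follow from the block geometry):

[CODE]
# The two write paths described above, on a toy 32-page block.
# A "block" is a list of pages; None means erased/free.

def write_page(block: list, idx: int, data: str, trimmed: bool) -> int:
    """Write one page; return how many page operations it cost."""
    if trimmed or block[idx] is None:
        block[idx] = data                   # just "Write Operation"
        return 1
    cache = block[:]                        # copy block to cache
    for i in range(len(block)):             # erase block
        block[i] = None
    cache[idx] = data                       # merge in the new data
    for i, page in enumerate(cache):        # write everything back
        block[i] = page
    return 1 + len(block)                   # read-modify-write is costly

blk = ["old"] * 32
print(write_page(blk, 0, "new", trimmed=False))  # 33 ops: dirty-block path
print(write_page(blk, 1, "new", trimmed=True))   # 1 op: TRIM short path
[/CODE]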

Garbage collection and block consolidation do partly what you described when you mentioned O&O. You have to leave the PC to idle, either logged off, in S1 sleep (not S3), or sitting at the BIOS screen for a period of time. The controller will then have time to go through the flash blocks, TRIMming/wiping those with garbage in them and consolidating the data that is valid, similar to how the Consolidate Free Space function works in PerfectDisk (which I use occasionally). The wear leveling that spreads info across the drive is the reason an OS and SSD can't put their collective heads together and decide what is needed and what is not: disk locations as Windows/Linux knows them are constantly moving and being re-mapped by the controller.

Anandtech's experiments seem to point to TRIM maintaining an average of 94% of "new" speed, whereas BC & GC can be up to 97%. BC & GC is the only option when using RAID, or when Vista/XP are used, as no RAID or third-party drivers pass TRIM through.

There are other ways of getting your initial performance back, too. First running PerfectDisk's Consolidate Free Space, followed by the tool AS-Cleaner (written by an OCZ forum member), will restore all unused flash blocks to new after first maximising the number of completely free blocks. To do this, though, you need to set the "FF" option in AS-Cleaner: to blank an SSD flash block, it needs to be set to all 1s, not all 0s. See the "punch card" reference in the link I posted for a really good non-scientific explanation of this. I think the punch card analogy is a very neat way of explaining it.
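The all-1s point is easy to demonstrate: NAND programming can only clear bits (1 to 0), and only an erase sets them back to 1, so "blank" has to mean 0xFF. A bitwise illustration (not device-level code):

[CODE]
# NAND programming can only clear bits (1 -> 0); an erase resets the
# whole block to 1s. Hence a blank flash byte is 0xFF, and AS-Cleaner's
# "FF" option fills free space with 1s to leave blocks write-ready.
ERASED = 0xFF

def program(cell: int, value: int) -> int:
    """Programming ANDs bits in: it can never turn a 0 back into a 1."""
    return cell & value

cell = program(ERASED, 0b10110101)   # fine: started from all 1s
print(bin(cell))                     # 0b10110101
cell = program(cell, 0b11111111)     # programming can't restore 1s
print(bin(cell))                     # still 0b10110101 -- erase required
[/CODE]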

The only other option for restoration is destructive. One must back up/image the disk or array, set legacy IDE mode, and run Diskpart's "clean" command, followed by either HDDErase in DOS or Secure Erase in a 32-bit Windows environment (this includes XP's Recovery Console or the command prompt obtained by booting the Vista/7 32-bit DVD and choosing Repair). Either of these programs will send ATA Secure Erase to all flash blocks, making them blank.
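For what it's worth, the Diskpart step can be scripted; below is a hypothetical Python wrapper around Diskpart's script mode (diskpart /s), shown only to make the step concrete. Normally you'd just type "select disk N" and "clean" interactively, and it is destructive, so the disk number is on you:

[CODE]
# Hypothetical wrapper: run Diskpart's "clean" on one disk via a script
# file (diskpart /s <file>). DESTRUCTIVE: wipes the partition table.
# Run as Administrator; verify the number first with "list disk".
import os
import subprocess
import tempfile

DISK_NUMBER = 1  # assumption -- replace with the SSD's actual number

script = f"select disk {DISK_NUMBER}\nclean\n"
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    path = f.name
try:
    subprocess.run(["diskpart", "/s", path], check=True)
finally:
    os.unlink(path)
[/CODE]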

The performance tweaks applied by Windows 7 (Vista isn't SSD-aware; you need to do it manually) are initiated when you run the WEI (Windows Experience Index). They include disabling boot and application prefetching/SuperFetch. If it detects an SSD as the boot drive, it also stops you using ReadyBoost completely. The prefetcher, boot defrag, etc. are great on a spinner; they make a good deal of difference for most people. The percentage benefit is much lower on an SSD if, say, some app data is pre-cached rather than it all having to be loaded into RAM. Most features that guides say should be disabled are a matter of choice. They say disable indexing; I leave it on. The writes after a complete index are minuscule for my usage. If you create loads of files, you may want to turn it off, as again it provides less benefit on an SSD.

I know how easily an OS racks up writes. After about 8 hours on, with multiple people surfing the net, Chrome has registered 78,000,000 I/O write bytes in Task Manager. This has yet to be an issue. I have moved the cache to a RAM drive (256MB), not so much to reduce writes as for performance.

Below are before and after shots of my Intel SSD. The first is when loads of blocks are clean, just after the HDDErase steps I mentioned. For the second, I had IOMeter create 10GB test files until the drive was almost full, then filled the rest, followed by deleting the test files and benching. The last pic shows Intel's performance degradation on random 4K writes, comparing Gen 1 (50nm flash) to Gen 2 (a different algorithm and a smaller 32nm manufacturing process for the flash):

    [benchmark screenshots not preserved]


  7. #7
    Join Date
    Nov 2009
    Location
    St. Clair Shores, Mi.
    Posts
    89

    Default Re: SSD vs SAS for performance

Anandtech and Tom's Hardware are poor sites to get benchmarks from. Their results can't be replicated in a lab, and they don't reveal their testing methods so others can get similar results. Both sites can be paid for better reviews.

IOMeter is fine for replicating a SQL server. It is useless for home computers and high-end workstations; they don't have similar R/W patterns for IOMeter to copy.

  8. #8
    Join Date
    Apr 2009
    Location
    J'Habite En Angleterre
    Posts
    2,431

    Default Re: SSD vs SAS for performance

I've got no problem with Anandtech's or THW's test methods. I don't personally want to replicate them; if I did, I would look at the results people post in forums using things like AS SSD and CrystalDiskMark.

That Anandtech chart accurately represents the trend of degradation in 4K writes seen on both the G1 and G2, from my own experience.

You can set IOMeter up to replicate virtually any scenario. Go to OCZ's SSD forum for some pre-made profiles such as Boot-Up, Application Server, etc.

The best way to bench a workstation is to do whatever you actually do on that workstation. Using PCMark Vantage is also a bit more real-world.

  9. #9
    Join Date
    Nov 2009
    Location
    St. Clair Shores, Mi.
    Posts
    89

    Default Re: SSD vs SAS for performance

Most review sites don't have anything good to say about how Anand and THW do reviews. Why can't anyone repeat their results on most reviews? Their numbers are also 10-20% better than everyone else's.

IOMeter can't accurately model a home computer or workstation because most users don't have constant disk activity. The logs are too random. You can't simulate a workstation.

    My bootup is done in seconds.

The problem is that benchmarks don't account for real-world conditions, with anti-virus, services, and applications running in the background. Most don't handle my 8 cores properly; they don't even handle quad cores properly.

    Most drive benchmarks don't account for a RAID cache properly.

  10. #10
    Join Date
    Apr 2009
    Location
    J'Habite En Angleterre
    Posts
    2,431

    Default Re: SSD vs SAS for performance

That's the whole essence of benchmarks: they're synthetic indicators, not real-world usage. All synthetic benchmarks behave like that, i.e. they may not reflect real-world use. That doesn't make them useless.

I don't particularly care what other sites say about Anand etc. I don't parrot what other sites say; I read, then make up my own mind.

On a side note, I can replicate Anand's Vantage results on my SSD with almost perfect accuracy. If they supplied the config files they used for IOMeter, those would be reproducible too.

Everyone's bootup is done in "seconds". That's one of the most ambiguous statements I've ever heard. How many seconds is the key thing.

    Like I said, if you are so convinced that SAS is better than SSD's then stick with them.

Rotational latency IS important for most things a normal user's OS will do, i.e. mainly random reads and writes. It's almost a certainty that SSDs will make for the faster OS and application experience: having 3-4 times (minimum) the random transfer speed guarantees this. Adding RAID cards, cache, etc. changes things, but you should be able to factor that in yourself. Nobody is going to be able to give you an answer you're satisfied with. Only you know what's best for you.
    Last edited by Psycho101; 11-17-2009 at 07:10 AM.
