My article from yesterday, To SSD or Not to SSD, prompted some valid questions about the practicality of SSDs in certain scenarios. Here are some additional thoughts on SSDs and whether they should, or even can, replace hard drives in all scenarios.
The most common questions people have about SSDs relate to reliability. Any new technology is subject to initial growing pains and problems, and SSDs were no exception. When SSDs first hit the mainstream, first-generation controllers were pretty bad, with frequent issues ranging from system slowdowns to kernel panics and the Blue Screen of Death.
While it is still possible to run into problems, the current generation of SSD controllers is light years ahead of where it started. The industry’s shining star is SandForce. In an interesting turn of events, LSI Corporation acquired SandForce this past week on January 4. SandForce has taken a slightly different approach to SSDs than the earlier designs from Intel and others.
The SandForce design over-provisions the flash memory to account for garbage collection, failed flash memory cells, and write wear leveling. This is typically at about seven percent over-provisioning. This means that 128GB of flash memory yields 120GB of usable capacity or 256GB yields 240GB of actual capacity. For a while, OWC sold its RE edition SSDs with thirty percent over-provisioning designed specifically for RAID 0 applications.
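As a back-of-the-envelope illustration of the over-provisioning arithmetic above (a sketch, not any vendor's actual formula): note that 128GB of raw flash yielding 120GB usable works out to 6.25 percent reserved, close to the roughly seven percent figure.

```python
# Illustrative helper for the over-provisioning math described above.
# Capacities are from the article; the function is a sketch, not a vendor formula.

def usable_capacity(raw_gb, over_provision_pct):
    """Return user-visible capacity after reserving a share of raw flash."""
    return raw_gb * (1 - over_provision_pct / 100)

# Typical SandForce consumer drives reserve roughly 7% (6.25% here, exactly).
print(usable_capacity(128, 6.25))  # -> 120.0 GB usable
print(usable_capacity(256, 6.25))  # -> 240.0 GB usable

# OWC's RE edition reserved thirty percent for RAID 0 duty.
print(usable_capacity(256, 30))
```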
This design, although different from the other industry players, has allowed SandForce to reach performance levels previously unrealized for SSDs built on the more prevalent and economical MLC flash memory. More expensive SLC flash memory, found in drives such as Intel’s previous X25-E line, delivers fantastic performance but at very high prices and substantially lower capacities than MLC-based SSDs. Some of the industry’s other enterprise-level SSDs also use SLC.
Even Intel has now switched its enterprise-level SSD lines to high-quality MLC rather than SLC. SLC is still used in some much smaller-capacity SSDs, such as the 20GB Intel 311 designed for Intel’s Smart Response Technology (essentially SSD caching to speed up hard drive operations while keeping primary storage on hard drives), as found in the Z68 chipset for the Sandy Bridge platform.
What does all of this mean for you? Clearly, MLC is here to stay, and SandForce has established itself as the market leader on performance and reliability. Are SSDs still prone to failure? Less so than hard drives, but they are not immune to failure. Would I trust an SSD over a hard drive? Yes, most definitely. But remember that any storage method requires good backups. Never use any storage technology without a backup system in place.
As I pointed out in yesterday’s post, it often does not make economic sense to put all of your storage on SSDs unless your storage needs are very small. The best of both worlds is to put your system and applications on the SSD and keep either your entire home directory, or just your larger data, on a hard drive. My personal preference is to leave my home directory on the SSD but move my iTunes media and Aperture library to the hard drive. Most of the time I put my downloads folder on the hard drive as well.
Mitch Haile has done this in a very high performance way. He uses an 8-disk 8TB SAS array attached to his Mac Pro using an Areca SAS RAID controller (he uses the Areca 1680x which has been superseded by the Areca 1880 series). You can get all the details about what he uses from his office page. Mitch recently moved to an SSD for boot and applications and loves it. This is an expensive but exceptional example of using both an SSD and hard drives. In his case, you get 5TB of usable space (RAID 6 plus hot spare) on hard drives and a 240GB SSD.
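For the curious, the array sizing works out as follows. This is a quick sketch using the disk count and sizes from Mitch's setup; `raid6_usable_tb` is just an illustrative helper, not anything from Areca's tools.

```python
def raid6_usable_tb(total_disks, disk_tb, hot_spares=0):
    # RAID 6 spends two disks' worth of capacity on parity;
    # a hot spare holds no data until it replaces a failed disk.
    data_disks = total_disks - hot_spares - 2
    return data_disks * disk_tb

# Eight 1TB disks, one reserved as a hot spare:
print(raid6_usable_tb(8, 1, hot_spares=1))  # -> 5 (TB), matching the array above
```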
A smaller-scale example is what I did two years ago when I was doing some work from an Apple Mac mini. The Mac mini is a great compact system but only supports 2.5-inch hard drives (or SSDs), so I was limited in the performance I could get from a 2.5-inch hard drive. I elected to put an OWC Mercury Extreme Pro 40GB SSD in the system and use a 1TB external drive, connected over FireWire 800, for bulk storage.
This setup actually worked very well because the majority of my work on that system was connecting to remote systems using a terminal session and doing various software development work. I ran some VMs from the external drive and performance was reasonable for most tasks. This type of scenario is really a low-end work setup and I would not typically recommend that design because of the limitations of FireWire. Thunderbolt gives new and better options for external storage.
During this same time I also used a 2.93 GHz Core 2 Duo 17-inch MacBook Pro with a 256GB SSD. That system was very fast and I sold it just a few months ago in preparation for new Ivy Bridge systems this year. This MacBook Pro often had external storage through eSATA.
I would recommend purchasing at least a 120GB SSD for your system and applications. Anything smaller and it might be a tight fit. You can work with a 40GB or 60GB SSD, but be prepared to spend more time carefully arranging your system and moving data around to deal with the limitations.
Once you have a 120GB or larger SSD, consider your additional space requirements. If you are using a laptop, replacing your optical drive with a 750GB 7200RPM 2.5-inch hard drive or a 1TB 5400RPM 2.5-inch hard drive is the best choice. The 750GB option offers better performance from the hard drive, while the 1TB option is typically a little easier on battery life but a little bit slower. Be aware that differences in drive design can cause widely varied performance even between drives that look almost identical on paper.
If you are using a desktop system, I would still suggest at least a 120GB SSD paired with at least one 1TB, 2TB or 3TB hard drive. It really all depends on what your storage needs are. My iTunes library at one point was over 2TB which made it very difficult to work with. I have since slimmed it down to less than 900GB which makes it much more manageable. Once you add in a substantial number of high resolution photos, some additional video or other multimedia files, a 2TB drive can get small quickly.
SSDs continue to be expensive per gigabyte compared to hard drives but the prices keep dropping. Since I published yesterday’s post, SSD prices have dropped further which has realigned the best value recommendations.
I suggested a choice from the excellent Corsair Force GT series of SSDs, which use high-quality flash memory along with the SandForce 2200 series 6Gb/s SSD controller. Yesterday I stated that the 120GB and 240GB SSDs were the best value. Today that has changed slightly.
The prices on the 60GB and 90GB SSDs have dropped to $110.49 and $149.99 respectively which puts their cost per gigabyte at $1.84 and $1.66. The 120GB is still the same at $179.99 which keeps it at $1.50 per gigabyte which is currently the best cost per gigabyte. The 240GB remains unchanged at $374.99 which puts it at $1.56 per gigabyte. The final SSD to drop in price was the 180GB which dropped to $269.99. This puts it at the same $1.50 per gigabyte as the 120GB SSD.
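Here is the cost-per-gigabyte comparison above as a quick script. The prices are the ones quoted in the article (early January 2012) and will be stale quickly; note that rounding to the nearest cent may differ by a penny from the truncated figures in the text.

```python
# Prices quoted above for the Corsair Force GT line, keyed by capacity in GB.
prices_usd = {60: 110.49, 90: 149.99, 120: 179.99, 180: 269.99, 240: 374.99}

for capacity_gb in sorted(prices_usd):
    per_gb = prices_usd[capacity_gb] / capacity_gb
    print(f"{capacity_gb:>3}GB: ${per_gb:.2f}/GB")

# The 120GB and 180GB both land at $1.50/GB; the 120GB wins by a
# fraction of a cent when compared exactly.
best = min(prices_usd, key=lambda c: prices_usd[c] / c)
print(f"Best value: {best}GB")  # -> Best value: 120GB
```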
Based on these price drops, I now recommend either the 120GB or 180GB SSD for best value, with the 240GB a close third. The 90GB is also a good option if you want to save a little money overall and can live with a little less space on your SSD. The 60GB is the cheapest in absolute cost, and the 480GB is left for those who must have the largest SSD no matter what.
Whatever you decide, moving to an SSD will make a huge difference in performance and will make your hours spent on the computer a much better experience. I should warn you about one thing. Once you start using an SSD, you will never go back to a hard drive alone.