A brief solid state storage system history and update

Persistent memory is nothing new to the data storage industry. Mainframes used small amounts of persistent memory for decades, but it was extremely expensive and capacities were quite small, measured in bytes. Once engineers figured out how to build persistent memory at larger capacities in the early 2000s, solid state drives (SSDs) began appearing on the market around 2009. These drives boosted performance for most applications but did not harness the real speed and durability of memory, because SSDs were packaged in a hard disk drive (HDD) form factor and made to communicate like HDDs, which are much slower devices. So even though the average SSD performed orders of magnitude faster, that performance was held back by the SCSI protocol stack in the operating system and by a translation layer inside the SSD that converts SCSI protocol commands into memory operations.

In some early systems, the firmware in storage controllers used to manage and communicate with HDDs would deliberately throttle the rate at which data was stored so as not to overwhelm the HDDs. Later storage systems bypassed this firmware to improve performance and raise the rate at which data is stored to SSDs. While this was happening to traditional arrays, a whole new generation of all-flash arrays was being architected and built from the ground up to communicate only with NAND flash media. Performance is better with these modern designs, but they still communicate through a SCSI protocol and translation layer today.

The next generation of solid state storage technologies coming on the market (NVM, NVMe for PCIe-based NVM SSDs, and persistent memory NVDIMMs) can in some use cases be an order of magnitude faster than SCSI-based NAND flash SSDs. More study and measurement are needed on how well PCIe NVM-based SSDs perform with today's application workloads. Early studies have shown promise for certain applications and less for others, but we will talk about that in a later blog.

Over 80% of the world's data is stored on secondary storage. PCIe-based NVM technologies are arriving on the market to become the new primary storage, while NAND flash is becoming cheaper and denser (32TB, 64TB, and beyond). This creates an opportunity to use NAND flash SSDs as secondary storage for backup/recovery, nearline, and active archive, and in some cases to complement tape libraries or serve as an extension option for active archive. Before we know it, every tier of storage, from primary (hot) to archive (cold), will use some form of solid state technology. This will change how the datacenter infrastructure looks and operates to access, store, and protect data.
