Exploiting parallelism in I/O scheduling for access conflict minimization in flash-based solid state drives

Solid state drives (SSDs) have been widely deployed in personal computers, data centers, and cloud storage. To improve performance, SSDs are usually built with multiple channels, each connecting to several NAND flash chips. Despite the rich parallelism offered by multiple channels and multiple chips per channel, recent studies show that the utilization of flash chips (i.e., the number of flash chips being accessed simultaneously) is very low. Our study shows that this low chip utilization is caused by access conflicts among I/O requests. In this work, we propose Parallel Issue Queuing (PIQ), a novel I/O scheduler at the host system that minimizes access conflicts between I/O requests. PIQ schedules conflict-free I/O requests into the same batch and conflicting I/O requests into different batches, so that the multiple requests within one batch can be serviced simultaneously by exploiting the SSD's rich parallelism. Because PIQ is implemented at the host side, it can take advantage of the abundant resources of the host system, such as main memory and CPU, which makes its overhead negligible. Extensive experimental results show that PIQ delivers significant performance improvements for applications with heavy access conflicts.
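To make the batching idea concrete, the following is a minimal sketch of conflict-free batch formation in the spirit of PIQ. It assumes that two requests conflict when they target the same flash chip, identified here by a (channel, chip) pair; the Request fields, the conflict test, and the greedy packing policy are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: group I/O requests into batches so that no two requests in a
# batch target the same (channel, chip). Requests within one batch can
# then be issued to the SSD concurrently without chip-level conflicts.
# Assumption: conflict = same target chip; real conflict detection in
# PIQ is not specified at this level of detail.

from collections import namedtuple

Request = namedtuple("Request", ["lba", "size", "channel", "chip"])

def build_batches(requests):
    batches = []   # each batch is a list of mutually conflict-free requests
    occupied = []  # occupied[i]: set of (channel, chip) targets used by batches[i]
    for req in requests:
        target = (req.channel, req.chip)
        for batch, used in zip(batches, occupied):
            if target not in used:       # no conflict with this batch
                batch.append(req)
                used.add(target)
                break
        else:                            # conflicts with every open batch
            batches.append([req])        # start a new batch
            occupied.append({target})
    return batches

if __name__ == "__main__":
    reqs = [Request(0, 8, 0, 0), Request(100, 8, 0, 0), Request(200, 8, 1, 2)]
    for i, batch in enumerate(build_batches(reqs)):
        print(f"batch {i}: {[r.lba for r in batch]}")
    # The two requests on chip (0, 0) land in different batches, while the
    # request on chip (1, 2) shares the first batch and proceeds in parallel.
```

The key design point the sketch captures is that conflict detection happens at the host before requests are dispatched, so the SSD receives batches whose members can all be serviced by different chips at once.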
