GUO Chang, LI Ying, LIU Hongzhi, WU Zhonghai. An Application-Oriented Cache Allocation and Prefetching Method for Long-Running Applications in Distributed Storage Systems[J]. Chinese Journal of Electronics, 2019, 28(4): 773-780. doi: 10.1049/cje.2019.05.004

An Application-Oriented Cache Allocation and Prefetching Method for Long-Running Applications in Distributed Storage Systems

doi: 10.1049/cje.2019.05.004
Funds:  This work is supported by the National Key R&D Program of China (No.2017YFB1002002).
  • Received Date: 2018-05-11
  • Revised Date: 2018-08-14
  • Publish Date: 2019-07-10
  • Characteristics of long-running applications in cloud and big data environments vary widely and significantly influence the performance of cache systems. The gap between existing cache systems and growing performance requirements motivates the Application-oriented cache allocation and prefetching method (ACAP), which improves data access performance. An application-oriented cache allocation approach based on hit-count growth rates raises the overall hit rate. Two application-oriented sequential prefetching approaches improve the hit rate and prefetching accuracy by learning the average read sizes of long-running applications. A parallelized correlation-directed prefetching approach, built on the correlation of data accesses, further increases the hit rate. These approaches are integrated to maximize the hit rate and prefetching accuracy. Experimental results on 12 public real-system traces show that ACAP achieves 14.03% (up to 33.82%) higher prefetching accuracy and a 2.01% (up to 7.54%) higher hit rate than the best combination of baselines.
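The allocation idea summarized above — dividing a shared cache among applications in proportion to their recent hit-count growth rates — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm; the function name, the proportional-split policy, and the fallback for zero growth are all assumptions.

```python
# Hypothetical sketch: split cache blocks among applications in
# proportion to each application's recent hit-count growth rate.
def allocate_cache(total_blocks, growth_rates):
    """Return a dict mapping each application to its cache-block share,
    proportional to its hit-count growth rate (illustrative policy)."""
    total = sum(growth_rates.values())
    if total == 0:
        # No observed hit growth anywhere: fall back to an even split.
        share = total_blocks // len(growth_rates)
        return {app: share for app in growth_rates}
    alloc = {app: int(total_blocks * rate / total)
             for app, rate in growth_rates.items()}
    # Hand any rounding remainder to the fastest-growing application.
    leftover = total_blocks - sum(alloc.values())
    fastest = max(growth_rates, key=growth_rates.get)
    alloc[fastest] += leftover
    return alloc

print(allocate_cache(100, {"app_a": 30.0, "app_b": 10.0, "app_c": 0.0}))
# → {'app_a': 75, 'app_b': 25, 'app_c': 0}
```

Applications whose hit counts are growing fastest (i.e., that currently benefit most from caching) receive proportionally more blocks, which matches the abstract's goal of raising the overall hit rate.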