Revisiting the Cache Effect on Multicore Multithreaded Network Processors

The caching mechanism has achieved great success in general-purpose processors; however, its deployment in network processors (NPs) raises questions about its effectiveness in this new context. In this study, we thoroughly evaluate the performance of caches in NPs with architectural features such as multiple cores, multithreading, and integrated packet interfaces. Our major findings are: (1) In general, a sufficiently large cache effectively reduces the number of memory requests and improves the utilization of the NP's computation power. (2) The lower efficiency of private caches, caused by duplicated information, degrades NP performance under certain circumstances. (3) The appropriate cache block size is constrained by the low spatial locality of network applications. (4) For workloads involving a large amount of data movement, increasing the cache size brings no further benefit once the interconnection bus becomes the bottleneck. In short, the caching mechanism in NPs can be helpful when used appropriately.
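
The following is a minimal, illustrative sketch (not the simulator used in this study): a direct-mapped cache model driven by a synthetic packet-processing trace. All parameters, the trace generator, and the address layout are hypothetical, chosen only to show how the low spatial locality of table-lookup-style accesses can limit the benefit of larger cache blocks, as in finding (3).

    # Hypothetical sketch: direct-mapped cache over a synthetic packet trace.
    # Parameters and the trace generator are illustrative assumptions only.
    import random

    def simulate(trace, cache_size_bytes, block_size_bytes):
        """Return the miss rate of a direct-mapped cache for an address trace."""
        num_blocks = cache_size_bytes // block_size_bytes
        tags = [None] * num_blocks          # one tag per cache block
        misses = 0
        for addr in trace:
            block_addr = addr // block_size_bytes
            index = block_addr % num_blocks
            if tags[index] != block_addr:   # miss: fetch the block from memory
                tags[index] = block_addr
                misses += 1
        return misses / len(trace)

    def packet_trace(num_packets, accesses_per_packet=8, table_entries=4096):
        """Hypothetical trace: each packet touches a few scattered lookup-table
        entries, so consecutive accesses rarely fall in the same cache block."""
        random.seed(0)
        trace = []
        for _ in range(num_packets):
            for _ in range(accesses_per_packet):
                entry = random.randrange(table_entries)
                trace.append(0x100000 + entry * 64)   # 64-byte table entries
        return trace

    trace = packet_trace(num_packets=5000)
    for block in (32, 64, 128, 256):
        rate = simulate(trace, cache_size_bytes=32 * 1024, block_size_bytes=block)
        print(f"block={block:4d}B  miss rate={rate:.3f}")

Under these assumed parameters, the reported miss rate stays nearly flat as the block size grows, because randomly scattered lookups gain little from fetching adjacent data; a real NP workload study would of course use measured traces and a cycle-accurate model.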
