Search engine implications for network processor efficiency

Network processors are programmable devices with specialized architectural features optimized for making packet-forwarding decisions. These decisions are typically based on data retrieved from various table structures. Accessing these structures requires one of several search methods, each of which consists of multiple individual memory accesses and therefore incurs substantial latency. One architectural feature of network processors is multithreading of each processing element, used to hide the effects of these long-latency searches. Hardware search engines can greatly reduce the latency of such searches and are shown to have a significant impact on the number of threads required in each processing element.
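
To make the latency-hiding argument concrete, the following Python sketch models a simple relationship between search latency and the number of threads a processing element needs to remain busy. The cycle counts and the per-packet compute budget are illustrative assumptions for this sketch, not figures from the paper.

    # Minimal latency-hiding model (illustrative assumption, not from the paper):
    # a thread stalls for `search_latency_cycles` on each table lookup and does
    # `compute_cycles` of useful work in between.  To keep the processing element
    # busy, roughly ceil(1 + search_latency / compute) threads are needed.

    import math

    def threads_to_hide_latency(search_latency_cycles: int, compute_cycles: int) -> int:
        """Estimate how many threads are needed so that one thread's lookup
        stall is covered by useful work from the other threads."""
        return math.ceil(1 + search_latency_cycles / compute_cycles)

    # Software tree search: assume 8 sequential memory accesses at ~60 cycles each.
    print(threads_to_hide_latency(search_latency_cycles=8 * 60, compute_cycles=100))  # -> 6

    # Hardware search engine: assume a single ~120-cycle request/response round trip.
    print(threads_to_hide_latency(search_latency_cycles=120, compute_cycles=100))     # -> 3

Under these assumed numbers, replacing a multi-access software search with a single-request hardware search engine roughly halves the number of threads needed per processing element to cover lookup stalls, which is the kind of effect the abstract describes.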
