SIP Parsing Offload: Design and Performance

TCP offload has attracted a great deal of industrial interest. However, little research has examined the benefits of offload for today's popular application-layer protocols, in particular the Session Initiation Protocol (SIP). In this paper, we profile the processing of SIP stacks and find that for typical SIP scenarios, and regardless of the SIP stack implementation, SIP parsing consumes a significant share (20%-40%) of CPU time. Based on this observation, we propose a SIP offload scheme, termed the SIP offload engine (SOE), that offloads the SIP parser from the SIP stack. We implement a prototype of SOE, and the benchmarking results indicate that the throughput gain varies greatly depending on the server's pipelining architecture. We also observe that SIP retransmissions can induce "receive livelock" in overloaded high-performance SIP servers.
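
To make the parsing cost concrete, the sketch below shows the kind of per-message text scanning a SIP stack performs on a request (start line plus header fields); this byte-level work is what a parsing offload engine would move off the host CPU. The sample INVITE, function names, and buffer sizes are illustrative assumptions only and do not represent the SOE interface described in the paper.

```c
/*
 * Minimal sketch of SIP request parsing (start line + headers).
 * Illustrative only; message contents and limits are assumptions.
 */
#include <stdio.h>
#include <string.h>

/* Parse "METHOD URI SIP/2.0" from the start line of a request. */
static int parse_request_line(const char *line, char *method, char *uri)
{
    return sscanf(line, "%31s %127s", method, uri) == 2 ? 0 : -1;
}

int main(void)
{
    /* A typical INVITE with CRLF line endings, as in RFC 3261. */
    const char *msg =
        "INVITE sip:bob@example.com SIP/2.0\r\n"
        "Via: SIP/2.0/UDP host.example.com;branch=z9hG4bK776asdhds\r\n"
        "To: Bob <sip:bob@example.com>\r\n"
        "From: Alice <sip:alice@example.com>;tag=1928301774\r\n"
        "CSeq: 314159 INVITE\r\n"
        "\r\n";

    char buf[1024];
    strncpy(buf, msg, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    char *save = NULL;
    char *line = strtok_r(buf, "\r\n", &save);

    char method[32], uri[128];
    if (!line || parse_request_line(line, method, uri) != 0) {
        fprintf(stderr, "bad request line\n");
        return 1;
    }
    printf("method=%s uri=%s\n", method, uri);

    /* Split each header line into name and value; this scanning is
       repeated for every incoming message. */
    while ((line = strtok_r(NULL, "\r\n", &save)) != NULL) {
        char *colon = strchr(line, ':');
        if (!colon)
            continue;
        *colon = '\0';
        const char *value = colon + 1;
        while (*value == ' ')
            value++;
        printf("header: %s = %s\n", line, value);
    }
    return 0;
}
```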
