Handling high throughput and low latency network data packets in a traffic management device
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
H04L-012/64
H04L-012/46
H04L-012/879
H04L-012/935
H04L-012/863
H04L-012/933
H04L-012/861
H04L-012/54
H04L-012/701
H04L-012/947
Application Number
US-0613783
(2009-11-06)
Registration Number
US-9313047
(2016-04-12)
Inventors / Address
Michels, Tim S.
Schmitt, Dave
Szabo, Paul I.
Applicant / Address
F5 Networks, Inc.
Agent / Address
LeClairRyan, a Professional Corporation
Citation Information
Cited by: 1
Patents cited: 211
Abstract
Handling network data packets classified as being high throughput and low latency with a network traffic management device is disclosed. Packets are received from a network and classified as high throughput or low latency based on packet characteristics or other factors. Low latency classified packets are generally processed immediately, such as upon receipt, while the low latency packet processing is strategically interrupted to enable processing coalesced high throughput classified packets in an optimized manner. The determination to cease processing low latency packets in favor of high throughput packets may be based on a number of factors, including whether a threshold number of high throughput classified packets are received or based on periodically polling a high throughput packet memory storage location.
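The classify-then-coalesce scheme the abstract describes can be sketched in Python. This is an illustrative sketch only: the classification rule, the threshold value, and all names are hypothetical, since the patent does not prescribe any particular implementation.

```python
from collections import deque

HIGH_TP_THRESHOLD = 8  # the "predetermined number" of coalesced packets (illustrative)

low_latency_q = deque()
high_throughput_q = deque()

def classify(packet: bytes) -> str:
    # Hypothetical rule: small control-style packets get low-latency
    # treatment; larger payloads are coalesced for throughput.
    return "low_latency" if len(packet) <= 128 else "high_throughput"

def enqueue(packet: bytes) -> None:
    # Store each packet in the queue matching its classification.
    if classify(packet) == "low_latency":
        low_latency_q.append(packet)
    else:
        high_throughput_q.append(packet)

def service(process):
    """Process low-latency packets as they arrive, but interrupt that
    work to drain a coalesced batch once the high-throughput queue
    depth reaches the threshold (the 'determination' in the abstract)."""
    processed = []
    while low_latency_q or high_throughput_q:
        # Checking the queue depth here stands in for the memory-polling
        # / threshold check the abstract describes.
        if len(high_throughput_q) >= HIGH_TP_THRESHOLD or not low_latency_q:
            batch_size = min(HIGH_TP_THRESHOLD, len(high_throughput_q))
            batch = [high_throughput_q.popleft() for _ in range(batch_size)]
            processed.extend(process(p) for p in batch)
        if low_latency_q:
            # Resume low-latency processing after the batch.
            processed.append(process(low_latency_q.popleft()))
    return processed
```

With three small and eight large packets enqueued, the eight large packets reach the threshold and are processed as one batch before low-latency service resumes.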
Representative Claims
1. A method for processing network data packets destined for applications with a plurality of throughput and latency requirements, the method comprising: receiving by an application delivery controller apparatus data packets from a network; classifying by the application delivery controller apparatus the data packets as high throughput classified and low latency classified based on one or more characteristics of each of the data packets, wherein the low latency classified packets are processed by a first processor and the high throughput classified packets are processed by a second processor; storing by the application delivery controller apparatus the data packets in a respective one of a low latency packet queue or a high throughput packet queue based on the classification; processing by the application delivery controller apparatus low latency classified packets from the low latency packet queue; determining by the application delivery controller apparatus when a predetermined number of the data packets are stored in the high throughput packet queue; and when it is determined that the predetermined number of the data packets are stored in the high throughput packet queue: interrupting by the application delivery controller apparatus the processing of the low latency classified packets and processing one or more high throughput classified packets from the high throughput packet queue; and resuming by the application delivery controller apparatus the processing of the low latency classified packets upon processing a number of the high throughput classified packets.

2. The method of claim 1, wherein the determining further comprises at least one of: polling a memory to determine whether the predetermined number of the data packets are stored in the high throughput packet queue; or determining when any other condition exists such that high throughput classified packets should be processed instead of low latency classified packets.

3. The method of claim 1, further comprising interrupting by the application delivery controller apparatus a processor upon at least one of the classifying the data packets or the storing the data packets in the low latency packet queue.

4. The method of claim 3, wherein there are fewer data packets stored in the low latency packet queue than the high throughput packet queue when the interrupting of the processing of the low latency classified packets occurs.

5. The method of claim 1, wherein the receiving, classifying, storing, processing, determining, interrupting, and resuming steps are performed by a blade coupled to a chassis apparatus of the application delivery controller apparatus.

6. The method of claim 1, wherein the one or more characteristics are selected by a network traffic management application executed by a processor.

7. A non-transitory computer-readable medium having instructions stored thereon which, when executed by a processor of an application delivery controller device, cause the application delivery controller device to perform steps comprising: receive data packets from a network; classify the data packets as high throughput classified and low latency classified based on one or more characteristics of each of the data packets, wherein the low latency classified packets are processed by a first processor and the high throughput classified packets are processed by a second processor; store the data packets in a respective one of a low latency packet queue or a high throughput packet queue based on the classification; process low latency classified packets from the low latency packet queue; determine when a predetermined number of the data packets are stored in the high throughput packet queue; and when it is determined that the predetermined number of the data packets are stored in the high throughput packet queue: interrupt the processing of the low latency classified packets and process one or more high throughput classified packets from the high throughput packet queue; and resume the processing of the low latency classified packets upon processing a number of the high throughput classified packets.

8. The computer-readable medium of claim 7, wherein the determining further comprises at least one of: polling a memory to determine whether the predetermined number of the data packets are stored in the high throughput packet queue; or determining when any other condition exists such that high throughput classified packets should be processed instead of low latency classified packets.

9. The computer-readable medium of claim 7, wherein the steps further comprise interrupting a processor upon at least one of the classifying the data packets or the storing the data packets in the low latency packet queue.

10. The computer-readable medium of claim 9, wherein there are fewer data packets stored in the low latency packet queue than the high throughput packet queue when the interrupting of the processing of the low latency classified packets occurs.

11. The computer-readable medium of claim 7, wherein the receiving, classifying, storing, processing, determining, interrupting, and resuming steps are performed by a blade coupled to a chassis apparatus of an application delivery controller device.

12. The computer-readable medium of claim 7, wherein the one or more characteristics are selected by a network traffic management application executed by a processor.

13. An application delivery controller apparatus comprising: one or more processors configured to be capable of executing one or more traffic management applications; a memory; a network interface controller coupled to the one or more processors and the memory and configured to be capable of receiving data packets from a network that relate to the one or more network traffic management applications; and at least one of the one or more processors or the network interface controller configured to execute programmed instructions stored in the memory to: classify the data packets as high throughput classified and low latency classified based on one or more characteristics of each of the data packets, wherein the low latency classified packets are processed by a first processor and the high throughput classified packets are processed by a second processor; store the data packets in a respective one of a low latency packet queue or a high throughput packet queue in the memory based on the classification; process low latency classified packets from the low latency packet queue; determine when a predetermined number of the data packets are stored in the high throughput packet queue; and when it is determined that the predetermined number of the data packets are stored in the high throughput packet queue: interrupt the processing of the low latency classified packets and process one or more high throughput classified packets from the high throughput packet queue; and resume processing the low latency classified packets upon processing a number of the high throughput classified packets.

14. The apparatus of claim 13, wherein the determining further comprises at least one of: polling the memory to determine whether the predetermined number of the data packets are stored in the high throughput packet queue; or determining when any other condition exists such that high throughput classified packets should be processed instead of low latency classified packets.

15. The apparatus of claim 13, wherein at least one of the one or more processors or the network interface controller is further configured to execute programmed instructions stored in the memory to interrupt at least one of the one or more processors upon at least one of the classifying the data packets or the storing the data packets in the low latency packet queue.

16. The apparatus of claim 15, wherein there are fewer data packets stored in the low latency packet queue than the high throughput packet queue when the interrupting of the processing of the low latency classified packets occurs.

17. The apparatus of claim 13, wherein the network interface controller comprises at least one of a programmable logic device (PLD), a field programmable logic device (FPLD), a field programmable gate array (FPGA), stored executable instructions, or any other configurable logic.

18. The apparatus of claim 13, wherein classifying the data packets further comprises: writing a first pointer to a memory location of the data packets stored in the low latency packet queue into a first buffer index area in the memory; and writing a second pointer to another memory location of the data packets stored in the high throughput packet queue into a second buffer index area in the memory.

19. The apparatus of claim 13, wherein the application delivery controller apparatus comprises at least one of a blade coupled to an application delivery controller chassis device or a standalone application delivery controller device.

20. The apparatus of claim 13, wherein the low latency classified packets are processed by a first processor and the high throughput classified packets are processed by a second processor.

21. The apparatus of claim 13, wherein the one or more characteristics are selected by the one or more network traffic management applications.
Patents cited by this patent (211)
White Richard E. (2591 College Hill Cir. Schaumburg IL 60193) Buchholz Dale R. (1441 E. Anderson Palatine IL 60067) Freeburg Thomas A. (416 N. Belmont Ave. Arlington Heights IL 60004) Chang Hungkun J, Addressing technique for storing and referencing packet data.
Sohn Sung Won,KRX ; Doh Yoon Mi,KRX ; Kim Jong Oh,KRX, Asynchronous transfer mode (ATM) layer function processing apparatus with an enlarged structure.
Sathaye Shirish S. (North Chelmsford MA) Hannigan Brendan (West Newton MA) Hawe William R. (Pepperell MA), Automatic assignment of addresses in a computer communications network.
Yang Henry S. (Andover MA) Sathaye Shirish S. (North Chelmsford MA) Ben-Nun Michael (Jerusalem ILX) De-Leon Moshe (Jerusalem ILX) Ben-Michael Simoni (Givaat Zeev ILX), Buffer descriptor prefetch in network and I/O design.
Alverson, Gail A.; Callahan, II, Charles David; Kahan, Simon H.; Koblenz, Brian D.; Porterfield, Allan; Smith, Burton J., Detecting access to a memory location in a multithreaded environment.
Fitzgerald Albion J. (Ridgewood NJ) Fitzgerald Joseph J. (New Paltz NY), Distributed computer network including hierarchical resource information structure and related method of distributing re.
Dobbins Kurt ; Grant Thomas A. ; Ruffen David J. ; Kane Laura ; Len Theodore ; Andlauer Philip ; Bahi David H. ; Yohe Kevin ; Fee Brendan ; Oliver Chris ; Cullerot David L. ; Skubisz Michael, Distributed connection-oriented services for switched communications networks.
Shi Shaw-Ben ; Ault Michael Bradford ; Plassmann Ernst Robert ; Rich Bruce Arland ; Rosiles Mickella Ann ; Shrader Theodore Jack London, Distributed file system web server user authentication with cookies.
Couland Ghislaine,FRX ; Hunt Guerney Douglass Holloway ; Levy-Abegnoli Eric Michel,FRX ; Jean-Marie Mauduit Daniel Georges,FRX, Distributed scalable device for selecting a server from a server cluster and a switched path to the selected server.
Ledebohm,Herbert O.; Einkauf,Mark A.; Diard,Franck R.; Doughty,Jeffrey C., Dynamically creating or removing a physical-to-virtual address mapping in a memory of a peripheral device.
Moskalev, Anatoly; Venkataraghaven, Parakalan, Emulation of independent active DMA channels with a single DMA capable bus master hardware and firmware.
Albert, Mark; Howes, Richard A.; Jordan, James A.; Kersey, Edward A.; LeBlanc, William M.; Menditto, Louis F.; O'Rourke, Chris; Tiwari, Pranav Kumar; Tsang, Tzu-Ming, Handling packet fragments in a distributed network service environment.
Isfeld Mark S. ; Mallory Tracy D. ; Mitchell Bruce W. ; Seaman Michael J. ; Arunkumar Nagaraj ; Srisuresh Pyda, High throughput message passing process using latency and reliability classes.
Tokuyo, Masanaga; Nakagawa, Itaru; Chikuma, Satoru; Fujino, Nobutsugu; Taniguchi, Tetsuya; Hisanaga, Takanori; Chikada, Michiyasu; Kuwata, Daisuke, IP router device having a TCP termination function and a medium thereof.
Husak, David J.; Melton, Matthew S.; Barton, David F.; Nuechterlein, David; Shah, Syed I.; Fluker, Jon L., Manipulating data streams in data stream processors.
Fuhs,Ronald E.; Paynton,Calvin C.; Rogers,Steven L.; Sellin,Nathaniel P.; Willenborg,Scott M., Method and apparatus for coalescing acknowledge packets within a server.
Daniel Arthur A. (Rochester MN) Moore Robert E. (Durham NC) Anderson Catherine J. (Raleigh NC) Gelm Thomas J. (Raleigh NC) Kiter Raymond F. (Poughkeepsie NY) Meeham John P. (Raleigh NC) Stevenson Joh, Method and apparatus for communication network alert message construction.
Attanasio Clement R. (Peekskill NY) Smith Stephen E. (Mahopac NY), Method and apparatus for making a cluster of computers appear as a single host on a network.
Walter A. Hubis ; William G. Deitz, Method and system for controlling access share storage devices in a network environment by configuring host-to-volume mapping data structures in the controller memory for granting and denying access .
Colby Steven ; Krawczyk John J. ; Nair Raj Krishnan ; Royce Katherine ; Siegel Kenneth P. ; Stevens Richard C. ; Wasson Scott, Method and system for directing a flow between a client and a server.
Linville John Walter ; Makrucki Brad Alan ; Suffern Edward Stanley ; Warren Jeffrey Robert, Method and system for monitoring and controlling data flow in a network congestion state by changing each calculated pause time by a random amount.
Leighton Frank T. (459 Chestnut Hill Ave. Newtonville MA) Micali Silvio (459 Chestnut Hill Ave. Brookline MA 02146), Method for enabling users of a cryptosystem to generate and use a private pair key for enciphering communications betwee.
Zhang,Hui; de la Iglesia,Erik; Gomez,Miguel; Liu,Liang; Lowe,Rick K.; Wallace,Mark Aaron; Wang,Wei, Method of and system for allocating resources to resource requests.
Choquier Philippe,FRX ; Peyroux Jean-Francios ; Griffin William J., Method of redirecting a client service session to a second application server without interrupting the session by forwa.
Oskouy Rasoul M. ; Lyon Tom ; Kashyap Prakash, Multi-virtual DMA channels, multi-bandwidth groups, host based cellification and reassembly, and asynchronous transfer mode network interface.
Albert, Mark; Howes, Richard A.; Jordan, James A.; Kersey, Edward A.; LeBlanc, William M.; McGuire, Jacob Mark; Menditto, Louis F.; O'Rourke, Chris; Tiwari, Pranav Kumar; Tsang, Tzu-Ming, Network address translation using a forwarding agent.
Allen, Jr., James Johnson; Bass, Brian Mitchell; Calvignac, Jean Louis; Gaur, Santosh Prasad; Heddes, Marco C.; Siegel, Michael Steven; Verplanken, Fabrice Jean, Network processor interface for building scalable switching systems.
Cummings Kevin D. (Phoenix AZ) Johnson William A. (Paradise Valley AZ) Laird Daniel L. (Madison WI), Pattern writing method during X-ray mask fabrication.
Allen, Jr., James Johnson; Bass, Brian Mitchell; Davis, Gordon Taylor; Jeffries, Clark Debs; Nair, Jitesh Ramachandran; Sabhikhi, Ravinder Kumar; Siegel, Michael Steven; Yedavalli, Rama Mohan, Retro flow control for arriving traffic in computer networks.
Grove, Adam J.; Kharitonov, Michael; Tumarkin, Alexei, SYSTEM AND METHOD FOR HIGH-PERFORMANCE DELIVERY OF WEB CONTENT USING HIGH-PERFORMANCE COMMUNICATIONS PROTOCOL BETWEEN THE FIRST AND SECOND SPECIALIZED INTERMEDIATE NODES TO OPTIMIZE A MEASURE OF COMM.
Helmer, Jr., Leonard W.; Heywood, Patricia E.; DiNicola, Paul; Martin, Steven J.; Salyer, Gregory; Soto, Carol L., Speculative method and system for rapid data communications.
Arora Sanjeev (Berkeley CA) Knight ; Jr. Thomas F. (Belmont MA) Leighton Frank T. (Newton Center MA) Maggs Bruce M. (Princeton NJ) Upfal Eliezer (Palo Alto CA), Switching networks with expansive and/or dispersive logical clusters for message routing.
Liu, Fu-Hua; Cheng, Shih-An; Chang, Chen-Huei; Lee, Chih-Ping, System and method for determining a connectionless communication path for communicating audio data through an address and port translation device.
Leonard, Judson S.; Gingold, David; Stewart, Lawrence C., System and method for remote direct memory access without page locking by the operating system.
Bommareddy, Satish; Kale, Makarand; Chaganty, Srinivas, System and method for routing message traffic using a cluster of routers sharing a single logical IP address distinct from unique IP addresses of the routers.
Chang Albert (Austin TX) Neuman Grover H. (Austin TX) Shaheen-Gouda Amal A. (Austin TX) Smith Todd A. (Austin TX), System and method for using cached data at a local node after re-opening a file at a remote node in a distributed networ.
Pitts William M. (780 Mora Dr. Los Altos CA 94024), System for accessing distributed data cache channel at each network node to pass requests and data.
Lundberg Eric P. ; Placek Joseph M., System for arbitrating packetized data from the network to the peripheral resources and prioritizing the dispatching of.
Short, Joel E.; Delley, Frederic; Logan, Mark F.; Pagan, Florence C. I., Systems and methods for redirecting users having transparent computer access to a network using a gateway device having redirection capability.
Jha, Ashutosh K.; Danilak, Radoslav; Gyugyi, Paul J.; Maufer, Thomas A.; Nanda, Sameer; Rajagopalan, Anand; Sidenblad, Paul J., Transmitting commands and information between a TCP/IP stack and an offload unit.
Brown Charles Allan ; Burns John Martin ; Nagaraj Holavanahally Seshachar ; O'Neill James Joseph ; Ullah Muhammad Inayet ; Volpe Leo ; Wendt Herman Russell, Vacuum baking process.
Brendel Juergen ; Kring Charles J. ; Liu Zaide ; Marino Christopher C., World-wide-web server with delayed resource-binding for resource-based load balancing on a distributed resource multi-n.