Embodiments are directed towards improving the performance of network traffic management devices by optimizing the management of hot connection flows. A packet traffic management device (“PTMD”) may employ a data flow segment (“DFS”) and a control segment (“CS”). The CS may perform high-level control functions and per-flow policy enforcement for connection flows maintained at the DFS, while the DFS may perform statistics gathering, per-packet policy enforcement (e.g., packet address translations), or the like, on connection flows maintained at the DFS. The DFS may include high-speed flow caches and other high-speed components that may be comprised of high-performance computer memory. Efficient use of the high-speed flow cache capacity may be improved by maximizing the number of hot connection flows and minimizing the number of malicious and/or inoperative connection flows (e.g., non-genuine flows) that may have flow control data stored in the high-speed flow cache.
Representative Claims
1. A method for managing connection flows over a network with a traffic management device (TMD) that includes one or more processors, wherein execution of logic by the one or more processors performs actions, comprising: employing one or more control segment (CS) components to perform actions, including: generating one or more connection flow metrics based on one or more received network packets for one or more managed connection flows; and employing the one or more connection flow metrics to determine one or more hot connection flows in the managed connection flows based on a predicted capacity of the one or more CS components, wherein a percentile of the connection flows are identified as the one or more hot connection flows; and employing one or more data flow segment (DFS) components to perform actions, including maintaining packet level flow handling for one or more of the connection flows.

2. The method of claim 1, further comprising: when a flow cache is at less than full capacity and the one or more DFS components receive new flow control data for a flow connection from the one or more CS components, inserting the new flow control data in the flow cache; and when the flow cache is at full capacity and the new flow control data is received by the one or more DFS components, employing the one or more DFS components to insert the new flow control data into the flow cache and remove flow control data for another flow connection from the flow cache, wherein an evict message is sent to the one or more CS components.

3. The method of claim 1, further comprising: when the one or more CS components receive an evict message for flow control data from the one or more DFS components, performing actions, including: when a flow connection is closed and corresponds to the flow control data, discarding the flow control data that corresponds to the closed flow connection at the one or more CS components; and when the flow connection is open and corresponds to the flow control data, transferring the managing of the open flow connection to the one or more CS components from the one or more DFS components.

4. The method of claim 1, wherein generating the at least one connection flow metric further comprises examining contents of one or more received network packets to identify at least one of a data pattern or metadata that indicates when one or more of the connection flows is a hot connection flow.

5. The method of claim 1, further comprising: when the managed connection flows exceed a capacity of the one or more DFS components, determining each hot connection flow handled by the one or more DFS components; and sorting a ranked ordering of the managed connection flows based on a total amount of data exchanged over a time interval.

6. The method of claim 1, wherein employing the one or more connection flow metrics further comprises: determining a median bit-rate of data communicated for one or more connection flows handled by the one or more CS components; and employing the median bit-rate of data communicated for the one or more connection flows and at least a bit-rate capacity of the one or more CS components to estimate a maximum number of connection flows for the one or more CS components.

7. The method of claim 1, wherein determining each hot connection flow to be handled by the one or more DFS components is based on a total amount of data communicated over a time interval, wherein the N connection flows having the top total amount of data communicated over the time interval are identified as hot connection flows.

8. A traffic management computer (TMC) that includes a plurality of components to manage connection flows over a network, wherein the TMC employs one or more processors to execute logic that performs actions, comprising: employing one or more control segment (CS) components to perform actions, including: generating one or more connection flow metrics based on one or more received network packets for one or more managed connection flows; and employing the one or more connection flow metrics to determine one or more hot connection flows in the managed connection flows based on a predicted capacity of the one or more CS components, wherein a percentile of the connection flows are identified as the one or more hot connection flows; and employing one or more data flow segment (DFS) components to perform actions, including maintaining packet level flow handling for one or more of the connection flows.

9. The TMC of claim 8, wherein the TMC performs further actions comprising: when a flow cache is at less than full capacity and the one or more DFS components receive new flow control data for a flow connection from the one or more CS components, inserting the new flow control data in the flow cache; and when the flow cache is at full capacity and the new flow control data is received by the one or more DFS components, employing the one or more DFS components to insert the new flow control data into the flow cache and remove flow control data for another flow connection from the flow cache, wherein an evict message is sent to the one or more CS components.

10. The TMC of claim 8, wherein the TMC performs further actions comprising: when the one or more CS components receive an evict message for flow control data from the one or more DFS components, performing actions, including: when a flow connection is closed and corresponds to the flow control data, discarding the flow control data that corresponds to the closed flow connection at the one or more CS components; and when the flow connection is open and corresponds to the flow control data, transferring the managing of the open flow connection to the one or more CS components from the one or more DFS components.

11. The TMC of claim 8, wherein generating the at least one connection flow metric further comprises examining contents of one or more received network packets to identify at least one of a data pattern or metadata that indicates when one or more of the connection flows is a hot connection flow.

12. The TMC of claim 8, wherein the TMC performs further actions comprising: when the managed connection flows exceed a capacity of the one or more DFS components, determining each hot connection flow handled by the one or more DFS components; and sorting a ranked ordering of the managed connection flows based on a total amount of data exchanged over a time interval.

13. A system that includes a plurality of components to manage connection flows over a network, wherein the system employs one or more processors to execute logic that performs actions, comprising: one or more control segment (CS) components that perform actions, including: generating one or more connection flow metrics based on one or more received network packets for one or more managed connection flows; and employing the one or more connection flow metrics to determine one or more hot connection flows in the managed connection flows based on a predicted capacity of the one or more CS components, wherein a percentile of the connection flows are identified as the one or more hot connection flows; and one or more data flow segment (DFS) components to perform actions, including maintaining packet level flow handling for one or more of the connection flows.

14. The system of claim 13, wherein the system performs further actions comprising: when a flow cache is at less than full capacity and the one or more DFS components receive new flow control data for a flow connection from the one or more CS components, inserting the new flow control data in the flow cache; and when the flow cache is at full capacity and the new flow control data is received by the one or more DFS components, employing the one or more DFS components to insert the new flow control data into the flow cache and remove flow control data for another flow connection from the flow cache, wherein an evict message is sent to the one or more CS components.

15. The system of claim 13, wherein the system performs further actions comprising: when the one or more CS components receive an evict message for flow control data from the one or more DFS components, performing actions, including: when a flow connection is closed and corresponds to the flow control data, discarding the flow control data that corresponds to the closed flow connection at the one or more CS components; and when the flow connection is open and corresponds to the flow control data, transferring the managing of the open flow connection to the one or more CS components from the one or more DFS components.

16. The system of claim 13, wherein generating the at least one connection flow metric further comprises examining contents of one or more received network packets to identify at least one of a data pattern or metadata that indicates when one or more of the connection flows is a hot connection flow.

17. The system of claim 13, wherein the system performs further actions comprising: when the managed connection flows exceed a capacity of the one or more DFS components, determining each hot connection flow handled by the one or more DFS components; and sorting a ranked ordering of the managed connection flows based on a total amount of data exchanged over a time interval.

18. The system of claim 13, wherein employing the one or more connection flow metrics further comprises: determining a median bit-rate of data communicated for one or more connection flows handled by the one or more CS components; and employing the median bit-rate of data communicated for the one or more connection flows and at least a bit-rate capacity of the one or more CS components to estimate a maximum number of connection flows for the one or more CS components.

19. The system of claim 13, wherein determining each hot connection flow to be handled by the one or more DFS components is based on a total amount of data communicated over a time interval, wherein the N connection flows having the top total amount of data communicated over the time interval are identified as hot connection flows.

20. The system of claim 13, wherein the one or more CS components perform further actions, comprising: dividing each connection flow into an upload portion and a download portion; generating separate connection flow metrics for each upload portion and each download portion of each of the managed connection flows; employing each connection flow metric to determine each hot download portion and each hot upload portion of the managed connection flows; determining each hot upload portion and each download portion of the managed connection flows to be handled by the one or more DFS components; and employing the one or more DFS components to handle each determined hot upload portion and each download portion of the managed connection flows.
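The flow-cache insertion and eviction behavior recited in claims 2 and 3 (and their TMC and system counterparts) could be sketched as below. The class and function names, the callback mechanism, and the eviction policy are illustrative assumptions; the claims do not specify which entry the DFS removes when the cache is full.

```python
class FlowCache:
    """Minimal sketch of the DFS flow cache in claims 2/9/14: insert new
    flow control data; when full, remove another flow's entry and send an
    evict message to the CS. Names and policy are assumptions."""

    def __init__(self, capacity, send_evict):
        self.capacity = capacity
        self.entries = {}             # flow_id -> flow control data
        self.send_evict = send_evict  # callback standing in for the CS channel

    def insert(self, flow_id, flow_control_data):
        if len(self.entries) >= self.capacity and flow_id not in self.entries:
            # Cache full: evict some other flow and notify the CS.
            # The victim choice here (oldest insertion) is arbitrary.
            victim = next(iter(self.entries))
            victim_data = self.entries.pop(victim)
            self.send_evict(victim, victim_data)
        self.entries[flow_id] = flow_control_data


def cs_handle_evict(open_flows, cs_managed, flow_id, flow_control_data):
    """CS side of claim 3/10/15: on an evict message, discard data for a
    closed flow; for a still-open flow, take over its management."""
    if flow_id in open_flows:
        # Open flow: management transfers from the DFS back to the CS.
        cs_managed[flow_id] = flow_control_data
    # Closed flow: the control data is simply discarded.
```

A usage sequence: filling a two-entry cache and inserting a third flow triggers one evict message; the CS then either drops the evicted entry or resumes managing it, depending on whether the connection is still open.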