An apparatus and method are provided to offload TCP/IP-related processing, where a server is connected to a plurality of clients, and the plurality of clients is accessed via a TCP/IP network. TCP/IP connections between the plurality of clients and the server are accelerated. The apparatus includes an accelerated connection processor and a target channel adapter. The accelerated connection processor bridges TCP/IP transactions between the plurality of clients and the server, where the accelerated connection processor accelerates the TCP/IP connections by prescribing remote direct memory access operations to retrieve/provide transaction data from/to the server. The target channel adapter is coupled to the accelerated connection processor. The target channel adapter executes the remote direct memory access operations to retrieve/provide the transaction data. The TCP/IP transactions are accelerated by offloading TCP/IP processing otherwise performed by the server to retrieve/provide transaction data.
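As a rough illustration of the offload the abstract describes, the sketch below models a server that registers memory regions and a target channel adapter that "RDMA-reads" them directly, so no per-request TCP/IP processing runs on the server CPU. This is a hypothetical minimal sketch for orientation only; all class and method names (`ServerMemory`, `TargetChannelAdapter`, `rdma_read`) are invented here and are not from the patent.

```python
# Hypothetical sketch of the offload path: the server registers a memory
# region once; thereafter the adapter accesses it directly on behalf of
# clients, bypassing the server's TCP/IP stack.

class ServerMemory:
    """Server-side memory regions registered for remote direct access."""
    def __init__(self) -> None:
        self.regions: dict[int, bytes] = {}

    def register(self, key: int, data: bytes) -> None:
        self.regions[key] = data


class TargetChannelAdapter:
    """Executes remote direct memory access operations for the
    accelerated connection processor."""
    def __init__(self, server_mem: ServerMemory) -> None:
        self.server_mem = server_mem

    def rdma_read(self, key: int) -> bytes:
        # Direct access to registered server memory; the server's TCP/IP
        # stack is never invoked for this transfer.
        return self.server_mem.regions[key]


mem = ServerMemory()
mem.register(7, b"response payload")
adapter = TargetChannelAdapter(mem)
data = adapter.rdma_read(7)
```

In a real InfiniBand-style design the read would be posted to a work queue pair and completed by hardware; the dictionary lookup here only stands in for that data path.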
Representative Claim
What is claimed is:

1. An apparatus, for accelerating TCP/IP connections between a plurality of clients and a server, the plurality of clients being accessed via a TCP/IP network, the apparatus comprising: an accelerated connection processor, configured to bridge TCP/IP transactions between the plurality of clients and the server, wherein said accelerated connection processor accelerates the TCP/IP connections by bypassing a TCP/IP stack employed in the server by issuing remote direct memory access operations to retrieve/provide transaction data from/to the server, and wherein said accelerated connection processor comprises: a connection correlator, configured to map TCP/IP connection parameters with a target work queue number for each of a plurality of accelerated TCP/IP connections, wherein said target work queue number corresponds to a work queue pair; and a target channel adapter, coupled to said accelerated connection processor, configured to retrieve/provide said transaction data responsive to said remote direct memory access operations issued to said work queue pair, wherein said accelerated connection processor handles TCP/IP processing of said transaction data; whereby the TCP/IP connections are accelerated by offloading TCP/IP processing performed by the server to retrieve/provide said transaction data.

2. The apparatus as recited in claim 1, wherein said accelerated connection processor comprises: a plurality of native network ports, each of said native network ports communicating with the plurality of clients in a native network protocol corresponding to the plurality of clients.

3. The apparatus as recited in claim 2, wherein said native network protocol comprises one of the following protocols: Ethernet, Wireless Ethernet, Fiber Distributed Data Interconnect (FDDI), Attached Resource Computer Network (ARCNET), Synchronous Optical Network (SONET), Asynchronous Transfer Mode (ATM), and Token Ring.

4. The apparatus as recited in claim 2, wherein said accelerated connection processor supports TCP/IP transactions with the plurality of clients by receiving/transmitting native transactions in accordance with said native network protocol.

5. The apparatus as recited in claim 4, wherein said each of a plurality of accelerated TCP/IP connections comprises: a plurality of said remote direct memory access operations to retrieve/provide particular transaction data from/to the server; and corresponding native transactions between said accelerated connection processor and a particular client to provide/retrieve said particular transaction data to/from said particular client.

6. The apparatus as recited in claim 1, wherein said TCP/IP connection parameters comprise: source TCP port number, destination TCP port number, source IP address, and destination IP address.

7. An apparatus within a client-server environment for managing an accelerated TCP/IP connection between a server and a client, the client being connected to a TCP/IP network, the apparatus comprising: a host driver, for providing a work queue pair through which transaction data corresponding to the accelerated TCP/IP connection is transmitted/received; and a TCP-aware target adapter, coupled to said host driver, for executing a remote direct memory access operation to receive/transmit said transaction data, wherein said TCP-aware target adapter receives/transmits said transaction data responsive to said remote direct memory access operation issued to said work queue pair, and wherein said TCP-aware target adapter handles TCP/IP processing of said transaction data, said TCP-aware target adapter comprising: a plurality of native network ports, each of said native network ports communicating with TCP/IP clients via a corresponding native network protocol; an accelerated connection processor, for supporting TCP/IP transactions with the client by receiving/transmitting native transactions in accordance with said native network protocol; and a connection correlator, for mapping TCP/IP connection parameters for the accelerated TCP/IP connection with a work queue number corresponding to said work queue pair; whereby the accelerated TCP/IP connection offloads TCP/IP processing performed by the server by bypassing a TCP/IP stack employed in the server to retrieve/transmit said transaction data.

8. The apparatus as recited in claim 7, wherein said corresponding native network protocol comprises one of the following protocols: Ethernet, Wireless Ethernet, Fiber Distributed Data Interconnect (FDDI), Attached Resource Computer Network (ARCNET), Synchronous Optical Network (SONET), Asynchronous Transfer Mode (ATM), and Token Ring.

9. The apparatus as recited in claim 7, wherein said host driver comprises: connection correlation logic, for associating said TCP/IP connection parameters for the accelerated TCP/IP connection with said work queue number.

10. The apparatus as recited in claim 7, wherein said TCP/IP connection parameters comprise: source TCP port number, destination TCP port number, source IP address, and destination IP address.

11. A method for accelerating TCP/IP connections in a client-server environment having clients that are connected to a TCP/IP network, the method comprising: mapping TCP/IP connection parameters for accelerated connections to a corresponding work queue number that is associated with a work queue pair; offloading TCP/IP processing performed by a server by bypassing a TCP/IP stack employed in the server by executing remote direct memory access operations to retrieve/transmit data associated with the accelerated connections from/to memory within the server; issuing remote direct memory access operations to said work queue pair; and providing the data to/from a TCP-aware target adapter responsive to said issuing; wherein the TCP-aware target adapter handles TCP/IP processing of said data.

12. The method as recited in claim 11, wherein said mapping comprises: intercepting the TCP/IP connection parameters from requests to send/receive data from/to the server; and correlating source TCP port number, destination TCP port number, source IP address, and destination IP address with the corresponding work queue number.

13. The method as recited in claim 12, wherein said executing comprises: providing memory locations within the server for transmission/reception of the data; transmitting the remote direct memory access operations to the server; and from the server, providing remote direct memory access responses.

14. The method as recited in claim 11, further comprising: generating TCP/IP transactions in a native network protocol to provide the data to the clients.

15. A method for offloading server TCP/IP processing in a client-server environment, comprising: bypassing a TCP/IP stack employed in a server by utilizing remote direct memory access operations to directly access data from/to server memory, wherein the data is provided to/from a TCP-aware target adapter, the TCP-aware target adapter providing native network ports that connect to clients, wherein said utilizing comprises: mapping TCP/IP connection parameters for a particular TCP/IP connection with a work queue number that corresponds to a work queue pair, wherein the TCP/IP connection parameters comprise source TCP port number, destination TCP port number, source IP address, and destination IP address; and issuing remote direct memory access operations to said work queue pair; providing the data to/from the TCP-aware target adapter responsive to said issuing, wherein the TCP-aware target adapter handles TCP/IP processing of said data; and via the TCP-aware target adapter, generating native network transactions to transfer the data to/from clients.

16. The method as recited in claim 15, wherein said generating comprises: formulating TCP headers, IP headers, and native network headers for messages to/from the clients based upon the TCP/IP connection parameters provided by said associating.

17. A TCP-aware target adapter, for accelerating TCP/IP connections between a plurality of clients and a server, the plurality of clients being accessed via a TCP/IP network, the TCP-aware target adapter comprising: an accelerated connection processor, configured to bridge TCP/IP transactions between the plurality of clients and the server, wherein said accelerated connection processor accelerates the TCP/IP connections by bypassing a TCP/IP stack employed in the server by issuing remote direct memory access operations to retrieve/provide transaction data from/to the server, wherein said accelerated connection processor comprises: a connection correlator, configured to map TCP/IP connection parameters which uniquely identify the TCP/IP connections with corresponding work queue numbers, wherein said work queue numbers correspond to work queue pairs; and a target channel adapter, coupled to said accelerated connection processor, configured to retrieve/provide said transaction data responsive to said remote direct memory access operations issued to said work queue pairs, wherein said accelerated connection processor handles TCP/IP processing of said transaction data, and configured to route said transaction data to/from the plurality of clients; whereby the TCP/IP connections are accelerated by offloading TCP/IP processing performed by the server to retrieve/provide said transaction data.

18. The TCP-aware target adapter as recited in claim 17, wherein said accelerated connection processor supports said TCP/IP transactions with the plurality of clients by formatting and processing native transactions in accordance with a native network protocol corresponding to the plurality of clients.

19. The TCP-aware target adapter as recited in claim 17, wherein said TCP/IP connection parameters comprise: source TCP port number, destination TCP port number, source IP address, and destination IP address.

20. A connection acceleration apparatus, for routing TCP/IP transactions between a plurality of clients and a server, the plurality of clients being accessed via a TCP/IP network, the connection acceleration apparatus comprising: a connection correlator, within an accelerated connection processor, for mapping TCP/IP connection parameters to work queue numbers, said work queue numbers corresponding to work queue pairs, wherein said accelerated connection processor handles TCP/IP processing of transaction data; and a target channel adapter, coupled to said connection correlator, configured to retrieve/provide said transaction data responsive to remote direct memory access operations issued to said work queue pairs, and configured to receive/transmit the TCP/IP transactions from/to the plurality of clients; whereby the TCP/IP transactions are accelerated by offloading TCP/IP processing performed by the server by bypassing a TCP/IP stack employed in the server to retrieve/provide said transaction data.

21. The connection acceleration apparatus as recited in claim 20, wherein said TCP/IP connection parameters comprise: source TCP port number, destination TCP port number, source IP address, and destination IP address.
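Several claims (1, 6, 12, 19, 21) turn on a connection correlator that maps the TCP/IP 4-tuple (source port, destination port, source IP, destination IP) to a work queue number corresponding to a work queue pair. The sketch below illustrates that mapping in the simplest possible form; the class and method names are hypothetical and the real correlator would be hardware or firmware, not a Python dictionary.

```python
from typing import Dict, NamedTuple, Optional


class ConnParams(NamedTuple):
    """The TCP/IP connection parameters the claims recite: source/destination
    TCP port numbers and source/destination IP addresses."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int


class ConnectionCorrelator:
    """Hypothetical sketch of a connection correlator: maps a 4-tuple that
    uniquely identifies a TCP/IP connection to a work queue number."""

    def __init__(self) -> None:
        self._map: Dict[ConnParams, int] = {}
        self._next_wqn = 0

    def accelerate(self, params: ConnParams) -> int:
        """Assign a work queue number to a newly accelerated connection."""
        wqn = self._next_wqn
        self._map[params] = wqn
        self._next_wqn += 1
        return wqn

    def lookup(self, params: ConnParams) -> Optional[int]:
        """Find the work queue number for a segment's 4-tuple, if the
        connection is accelerated; None means fall back to the normal path."""
        return self._map.get(params)


correlator = ConnectionCorrelator()
conn = ConnParams("10.0.0.5", 49152, "192.0.2.1", 80)
wqn = correlator.accelerate(conn)
```

A lookup miss (an unmapped 4-tuple) would correspond to a non-accelerated connection handled by the conventional stack, which is consistent with the claims covering only the accelerated connections.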
Patents cited by this patent (12)
Byers, Charles Calvin; Hinterlong, Stephen Joseph; Novotny, Robert Allen, Backplane configuration without common switch fabric.
Laurence B. Boucher ; Stephen E. J. Blightman ; Peter K. Craft ; David A. Higgen ; Clive M. Philbrick ; Daryl D. Starr, Intelligent network interface device and system for accelerated communication.
Craft, Peter K.; Philbrick, Clive M.; Boucher, Laurence B.; Higgen, David A., Protocol processing stack for use with intelligent network interface device.
Peter K. Craft; Clive M. Philbrick; Laurence B. Boucher; David A. Higgen, Protocol processing stack for use with intelligent network interface device.
Hu, Lee Chuan; Ros, Jordi; Shen, Calvin; Thorpe, Roger; Tsai, Wei Kang, System for bypassing a server to achieve higher throughput between data network and data storage system.
Hausauer, Brian S.; Gross, Tristan T.; Keels, Kenneth G.; Wandler, Shaun V., Apparatus and method for packet transmission over a high speed network supporting remote direct memory access operations.
Ashmore, Paul Andrew; Davies, Ian Robert; Maine, Gene; Vedder, Rex Weldon, Certified memory-to-memory data transfer between active-active raid controllers.
Pandya, Ashish A., Dynamic random access memory (DRAM) that comprises a programmable intelligent search memory (PRISM) and a cryptography processing engine.
Blackmore, Robert S.; Chang, Fu Chung; Chaudhary, Piyush; Gildea, Kevin J.; Goscinski, Jason E.; Govindaraju, Rama K.; Grice, Donald G.; Helmer, Jr., Leonard W.; Heywood, Patricia E.; Hochschild, Peter H.; Houston, John S.; Kim, Chulho; Martin, Steven J., Half RDMA and half FIFO operations.
Sharp, Robert O.; Keels, Kenneth G.; Hausauer, Brian S.; Lacombe, John S., Method and apparatus for using a single multi-function adapter with different operating systems.
Sharp, Robert O.; Keels, Kenneth G.; Hausauer, Brian S.; Lacombe, John S., Method and apparatus for using a single multi-function adapter with different operating systems.
Sharp, Robert O.; Keels, Kenneth G.; Hausauer, Brian S.; Lacombe, John S., Method and apparatus for using a single multi-function adapter with different operating systems.
Sharp, Robert O.; Keels, Kenneth G.; Hausauer, Brian S.; Lacombe, John S., Method and apparatus for using a single multi-function adapter with different operating systems.
Davies, Ian Robert, Safe message transfers on PCI-Express link from RAID controller to receiver-programmable window of partner RAID controller CPU memory.
Johnson, Michael Ward; Currid, Andrew; Kanuri, Mrudula; Minami, John Shigeto, Sequence tagging system and method for transport offload engine data lists.