Protocol independent programmable switch (PIPS) software defined data center networks
IPC classification
Country / type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06F-017/30
H04L-012/933
G06F-003/06
G06F-017/27
H04L-012/715
H04L-012/741
H04L-029/06
H04L-029/08
H04L-012/935
Application number
US-0067139
(2016-03-10)
Grant number
US-9825884
(2017-11-21)
Inventors / address
Hutchison, Guy Townsend
Gandhi, Sachin
Daniel, Tsahi
Schmidt, Gerald
Fishman, Albert
White, Martin Leslie
Shah, Zubin
Applicant / address
Cavium, Inc.
Attorney / address
Haverstock & Owens LLP
Citation information
Cited by: 0
Cited patents: 53
Abstract
A software-defined network (SDN) system, device and method comprise one or more input ports, a programmable parser, a plurality of programmable lookup and decision engines (LDEs), programmable lookup memories, programmable counters, a programmable rewrite block and one or more output ports. The programmability of the parser, LDEs, lookup memories, counters and rewrite block enables a user to customize each microchip within the system to particular packet environments, data analysis needs, packet processing functions, and other functions as desired. Further, the same microchip is able to be reprogrammed dynamically for other purposes and/or optimizations.
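The dataflow described in the abstract (parser, then a pipeline of lookup and decision engines observed by a counter block, then a rewrite block) can be illustrated with a minimal software model. This is only an illustrative sketch: the patent describes a hardware microchip, and every class, function and field name below is hypothetical.

```python
# Minimal software model of the pipeline in the abstract:
# parser -> lookup/decision engines (LDEs) -> rewrite block,
# with a counter block tallying LDE operations. All names are illustrative.

class Pipeline:
    def __init__(self, parser, ldes, rewriter):
        self.parser = parser        # software-defined parse function
        self.ldes = ldes            # ordered list of LDE stage functions
        self.rewriter = rewriter    # rebuilds headers for output
        self.counters = {}          # counter block: LDE index -> op count

    def process(self, packet):
        ctx = self.parser(packet)               # extract packet context data
        for i, lde in enumerate(self.ldes):
            ctx = lde(ctx)                      # modify context per stage
            self.counters[i] = self.counters.get(i, 0) + 1
        return self.rewriter(ctx)               # rebuild the packet

# Usage: a trivial one-stage pipeline that uppercases a 4-byte "header".
pipe = Pipeline(
    parser=lambda pkt: {"hdr": pkt[:4], "payload": pkt[4:]},
    ldes=[lambda c: {**c, "hdr": c["hdr"].upper()}],
    rewriter=lambda c: c["hdr"] + c["payload"],
)
out = pipe.process("abcdpayload")
```

The point of the sketch is that each stage is a replaceable function, mirroring the claim that parser, LDEs, counters and rewrite block are each independently software-defined.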
Representative claims
1. A switch microchip for a software-defined network, the microchip comprising: a programmable parser that parses desired packet context data from headers of a plurality of incoming packets, wherein the headers are recognized by the parser based on a software-defined parse graph of the parser; one or more lookup memories having a plurality of tables, wherein the lookup memories are configured as a logical overlay such that the scaling and width of the lookup memories are software-defined by a user; a pipeline of a plurality of programmable lookup and decision engines that receive and modify the packet context data based on data stored in the lookup memories and software-defined logic programmed into the engines by the user; a programmable rewrite block that, based on the packet context data received from one of the engines, rebuilds and prepares the packet headers as processed within the switch for output; and a programmable counter block used for counting operations of the lookup and decision engines, wherein the operations that are counted by the counter block are software-defined by the user.

2. The microchip of claim 1, wherein, starting from the same initial node of the parse graph, each path through the parse graph represents a combination of layer types of one of the headers that is able to be recognized by the parser.

3. The microchip of claim 2, wherein portions of the paths overlap.

4. The microchip of claim 1, wherein the rewrite block expands each layer of each of the headers parsed by the parser to form an expanded layer type of a generic size based on a protocol associated with the layer.

5. The microchip of claim 4, wherein the rewrite block generates a bit vector that indicates which portions of the expanded layer type contain valid data and which portions contain data added during the expanding by the rewrite block.

6. The microchip of claim 1, wherein the tables of the lookup memories are each able to be independently set in hash, direct access or longest prefix match operational modes.

7. The microchip of claim 6, wherein the tables of the lookup memories are able to be dynamically reformatted and reconfigured by the user such that a number of tiles of the lookup memories partitioned and allocated for lookup paths coupled to the lookup memories is based on memory capacity needed by each of the lookup paths.

8. The microchip of claim 1, wherein each of the lookup and decision engines comprises: a Key Generator configured to generate a set of lookup keys for each input token; and an Output Generator configured to generate an output token by modifying the input token based on content of lookup results associated with the set of lookup keys.

9. The microchip of claim 8, wherein each of the lookup and decision engines comprises: an Input Buffer for temporarily storing input tokens before the input tokens are processed by the lookup and decision engine; a Profile Table for identifying positions of fields in each of the input tokens; a Lookup Result Merger for joining the input token with the lookup result and for sending the joined input token with the lookup result to the Output Generator; a Loopback Checker for determining whether the output token should be sent back to the current lookup and decision engine or to another lookup and decision engine; and a Loopback Buffer for storing loopback tokens.

10. The microchip of claim 9, wherein Control Paths of both the Key Generator and the Output Generator are programmable such that users are able to configure the lookup and decision engine to support different network features and protocols.

11. The microchip of claim 1, wherein the counter block comprises: N wrap-around counters, wherein each of the N wrap-around counters is associated with a counter identification; and an overflow FIFO used and shared by the N wrap-around counters, wherein the overflow FIFO stores the associated counter identifications of all counters that are overflowing.

12. A method of operating a switch microchip for a software-defined network, the method comprising: parsing desired packet context data from headers of a plurality of incoming packets with a programmable parser, wherein the headers are recognized by the parser based on a software-defined parse graph of the parser; receiving and modifying the packet context data with a pipeline of a plurality of programmable lookup and decision engines based on data stored in lookup memories having a plurality of tables and software-defined logic programmed into the engines by a user; transmitting one or more data lookup requests to, and receiving processing data based on the requests from, the lookup memories with the lookup and decision engines, wherein the lookup memories are configured as a logical overlay such that the scaling and width of the lookup memories are software-defined by the user; performing counting operations based on actions of the lookup and decision engines with a programmable counter block, wherein the counter operations that are counted by the counter block are software-defined by the user; and rebuilding the packet headers as processed within the switch with a programmable rewrite block for output, wherein the rebuilding is based on the packet context data received from one of the lookup and decision engines.

13. The method of claim 12, wherein, starting from the same initial node of the parse graph, each path through the parse graph represents a combination of layer types of one of the headers that is able to be recognized by the parser.

14. The method of claim 13, wherein portions of the paths overlap.

15. The method of claim 12, wherein the rewrite block expands each layer of each of the headers parsed by the parser to form an expanded layer type of a generic size based on a protocol associated with the layer.

16. The method of claim 15, wherein the rewrite block generates a bit vector that indicates which portions of the expanded layer type contain valid data and which portions contain data added during the expanding by the rewrite block.

17. The method of claim 12, wherein the tables of the lookup memories are each able to be independently set in hash, direct access or longest prefix match operational modes.

18. The method of claim 17, wherein the tables of the lookup memories are able to be dynamically reformatted and reconfigured by the user such that a number of tiles of the lookup memories partitioned and allocated for lookup paths coupled to the lookup memories is based on memory capacity needed by each of the lookup paths.

19. The method of claim 12, wherein each of the lookup and decision engines comprises: a Key Generator configured to generate a set of lookup keys for each input token; and an Output Generator configured to generate an output token by modifying the input token based on content of lookup results associated with the set of lookup keys.

20. The method of claim 19, wherein each of the lookup and decision engines comprises: an Input Buffer for temporarily storing input tokens before the input tokens are processed by the lookup and decision engine; a Profile Table for identifying positions of fields in each of the input tokens; a Lookup Result Merger for joining the input token with the lookup result and for sending the joined input token with the lookup result to the Output Generator; a Loopback Checker for determining whether the output token should be sent back to the current lookup and decision engine or to another lookup and decision engine; and a Loopback Buffer for storing loopback tokens.

21. The method of claim 20, wherein Control Paths of both the Key Generator and the Output Generator are programmable such that users are able to configure the lookup and decision engine to support different network features and protocols.

22. The method of claim 12, wherein the counter block comprises: N wrap-around counters, wherein each of the N wrap-around counters is associated with a counter identification; and an overflow FIFO used and shared by the N wrap-around counters, wherein the overflow FIFO stores the associated counter identifications of all counters that are overflowing.

23. A top of rack switch microchip comprising: a programmable parser that parses desired packet context data from headers of a plurality of incoming packets, wherein the headers are recognized by the parser based on a software-defined parse graph of the parser and wherein, starting from the same initial node of the parse graph, each path through the parse graph represents a combination of layer types of one of the headers that is able to be recognized by the parser; one or more lookup memories having a plurality of tables, a Key Generator configured to generate a set of lookup keys for each input token and an Output Generator configured to generate an output token by modifying the input token based on content of lookup results associated with the set of lookup keys, wherein the lookup memories are configured as a logical overlay such that the scaling and width of the lookup memories are software-defined by a user, and further wherein each of the lookup memories is configured to selectively operate in hash, direct access or longest prefix match operational modes; a pipeline of a plurality of programmable lookup and decision engines that receive and modify the packet context data based on data stored in the lookup memories and software-defined logic programmed into the engines by the user; a programmable rewrite block that, based on the packet context data received from one of the engines, rebuilds and prepares the packet headers as processed within the switch for output, wherein the rewrite block expands each layer of each of the headers parsed by the parser to form an expanded layer type of a generic size based on a protocol associated with the layer; and a programmable counter block used for counting operations of the lookup and decision engines, wherein the counter block comprises N wrap-around counters, wherein each of the N wrap-around counters is associated with a counter identification, and an overflow FIFO used and shared by the N wrap-around counters, wherein the overflow FIFO stores the associated counter identifications of all counters that are overflowing, and further wherein the operations that are performed by the counter block are software-defined by the user.
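Claims 2, 3, 13 and 14 describe a parse graph in which every path from a shared initial node is one recognizable combination of header layer types, with paths allowed to share prefixes. The sketch below models that idea with an ordinary adjacency dictionary; the graph contents (Ethernet over IPv4/IPv6 over TCP/UDP) are an illustrative assumption, not taken from the patent.

```python
# Illustrative software-defined parse graph (claims 2-3): each root-to-leaf
# path is one combination of header layer types the parser can recognize,
# and paths overlap where they share a prefix. Graph contents are examples.

PARSE_GRAPH = {
    "eth":  ["ipv4", "ipv6"],
    "ipv4": ["tcp", "udp"],
    "ipv6": ["tcp", "udp"],
    "tcp":  [],
    "udp":  [],
}

def header_combinations(graph, node="eth", prefix=()):
    """Enumerate every path from the initial node to a leaf."""
    path = prefix + (node,)
    if not graph[node]:                 # leaf: one complete combination
        return [path]
    combos = []
    for nxt in graph[node]:             # overlapping prefixes (claim 3)
        combos.extend(header_combinations(graph, nxt, path))
    return combos

combos = header_combinations(PARSE_GRAPH)
# ("eth", "ipv4", "tcp") and ("eth", "ipv4", "udp") share the "eth"->"ipv4" prefix
```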
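Claims 6 and 17 state that each lookup table can independently operate in hash, direct access, or longest prefix match (LPM) mode. The following sketch shows what each mode computes, under loose assumptions: the hash mode is modeled as an exact-match dictionary, direct access as array indexing, and LPM as a naive linear scan over bit-string prefixes (real hardware would use TCAM or trie tiles, which the record does not specify).

```python
# The three table operational modes of claims 6 and 17, modeled in software.
# Function names and data layouts are illustrative assumptions.

def hash_lookup(table, key):
    return table.get(key)          # hash mode: exact-match on a key

def direct_lookup(table, index):
    return table[index]            # direct access mode: index into an array

def lpm_lookup(routes, addr_bits):
    """Longest prefix match: return the value of the longest matching prefix."""
    best, best_len = None, -1
    for prefix, value in routes:   # naive scan; hardware would not do this
        if addr_bits.startswith(prefix) and len(prefix) > best_len:
            best, best_len = value, len(prefix)
    return best

routes = [("10", "via A"), ("1011", "via B"), ("", "default")]
nh = lpm_lookup(routes, "10110001")   # "1011" is the longest matching prefix
```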
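Claims 4, 5, 15 and 16 have the rewrite block expand each parsed layer to a generic per-protocol size and emit a bit vector distinguishing valid bytes from padding added during expansion. A minimal sketch of that bookkeeping, with assumed per-protocol sizes and byte (rather than bit) granularity:

```python
# Sketch of rewrite-block layer expansion (claims 4-5, 15-16): pad each
# layer to a generic size for its protocol and record which positions
# hold valid data versus expansion padding. Sizes are assumptions.

GENERIC_SIZE = {"eth": 18, "ipv4": 24}    # hypothetical generic layer sizes

def expand_layer(protocol, layer_bytes):
    size = GENERIC_SIZE[protocol]
    pad = size - len(layer_bytes)
    padded = layer_bytes + b"\x00" * pad          # expanded layer type
    valid = [1] * len(layer_bytes) + [0] * pad    # bit vector: 1 = valid data
    return padded, valid

# A 14-byte Ethernet header expanded to the generic 18-byte layer type.
padded, valid = expand_layer("eth", b"\xaa" * 14)
```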
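Claims 11 and 22 describe the counter block: N narrow wrap-around counters sharing one overflow FIFO that records the identifications of counters that overflow, so fuller counts can be reconstructed later. A small model of that mechanism follows; the counter width and method names are assumptions.

```python
from collections import deque

# Model of the counter block in claims 11 and 22: N wrap-around counters
# share an overflow FIFO holding the IDs of counters that have wrapped.
# Width and API are illustrative.

class CounterBlock:
    def __init__(self, n, width=8):
        self.counters = [0] * n
        self.limit = 1 << width        # counter wraps at 2**width
        self.overflow_fifo = deque()   # shared FIFO of overflowing counter IDs

    def increment(self, counter_id):
        self.counters[counter_id] += 1
        if self.counters[counter_id] == self.limit:
            self.counters[counter_id] = 0            # wrap around
            self.overflow_fifo.append(counter_id)    # record the overflow

cb = CounterBlock(n=4, width=2)        # tiny 2-bit counters for the demo
for _ in range(5):
    cb.increment(1)                    # wraps once at 4, then counts to 1
```

Sharing one FIFO across all N counters is what lets each hardware counter stay narrow: software drains the FIFO and adds back the wrapped-off high-order counts.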
Cited patents (selection):
Hurley, Kean P.; Saklecha, Bhavi; Ip, Alfonso Y., Apparatus and system for coupling and decoupling initiator devices to a network using an arbitrated loop without disrupting the network.
Tran, Thang M.; Muthusamy, Karthikeyan; Narayan, Rammohan; McBride, Andrew, Cache holding register for delayed update of a cache line into an instruction cache.
Rankin, Linda J.; Bonasera, Joseph; Borkar, Nitin Y.; Ernst, Linda C.; Kapur, Suvansh K.; Manseau, Daniel A.; Verhoorn, Frank, Method and apparatus for providing remote memory access in a distributed memory multiprocessor system.
Thomas, Philip A.; Thomas, Sarin; Frailong, Jean-Marc; Sindhu, Pradeep, Methods and apparatus for randomly distributing traffic in a multi-path switch fabric.
Jiang, Tianyu; Jernigan, IV, Richard P.; Hamilton, Eric, System and method for providing space availability notification in a distributed striped volume set.