IPC Classification Information

Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) | (not listed)
Application Number | US-0156007 (2008-05-29)
Registration Number | US-8181003 (2012-05-15)
Inventors / Address |
- Wang, Xiaolin
- Wu, Qian
- Marshall, Benjamin
- Wang, Fugui
- Pitarys, Gregory
- Ning, Ke
Applicant / Address | (not listed)
Agent / Address | (not listed)
Citation Information | Times cited: 1 / Patents cited: 39
Abstract
Improved instruction set and core design, control and communication for programmable microprocessors is disclosed, involving the strategy for replacing centralized program sequencing in present-day and prior art processors with a novel distributed program sequencing wherein each functional unit has its own instruction fetch and decode block, and each functional unit has its own local memory for program storage; and wherein computational hardware execution units and memory units are flexibly pipelined as programmable embedded processors with reconfigurable pipeline stages of different order in response to varying application instruction sequences that establish different configurations and switching interconnections of the hardware units.
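The abstract's central idea, distributed program sequencing, can be illustrated with a short sketch. This is not the patented hardware; it is a minimal software analogy in which each functional unit owns a private program counter and a dedicated local program memory, and all units advance in lockstep once per clock cycle. The unit names and instruction mnemonics below are invented for illustration.

```python
# Minimal sketch (assumed model, not the patented design): sequencing is
# distributed -- every functional unit fetches from its OWN local program
# memory with its OWN program counter, instead of one central sequencer.

class FunctionalUnit:
    """A unit with a private program counter and local program memory."""

    def __init__(self, name, program):
        self.name = name
        self.pc = 0             # private program counter
        self.program = program  # dedicated local program memory
        self.trace = []         # record of executed instructions

    def step(self):
        """One clock cycle: fetch, decode, and 'execute' one instruction."""
        if self.pc < len(self.program):
            instr = self.program[self.pc]  # local fetch
            self.trace.append(instr)       # stand-in for execution
            self.pc += 1                   # local sequencing

def run(units, cycles):
    # All units advance in lockstep, one instruction per clock cycle,
    # each following its own locally stored instruction sequence.
    for _ in range(cycles):
        for u in units:
            u.step()

units = [
    FunctionalUnit("compute", ["mul", "add", "store"]),
    FunctionalUnit("memory",  ["read", "read", "write"]),
]
run(units, 3)
print([u.trace for u in units])
# prints [['mul', 'add', 'store'], ['read', 'read', 'write']]
```

The point of the analogy is only the control structure: no unit consults a central sequencer; each one's next instruction is determined by its own program counter and local memory.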
Representative Claims
1. A method of clock cycle synchronized flexible programmable execution of a data processing program, the method comprising: providing a processor containing a plurality of functional units, each functional unit being a computation unit, memory unit, full access switch unit for interconnecting execution units with memory units, or a control unit, each functional unit having its own program counter, its own instruction fetch and decode unit, and its own dedicated local program memory for storing instructions that control the functional unit during program execution; dividing the data processing program into one or more data processing modules; connecting different functional units to form predetermined control paths and data pipelines in a hierarchical manner; using a common instruction set to program at least some of the functional units, wherein the instruction set directly codes instruction sequencing and directly or indirectly codes hardware controls; setting up distributed program sequencing in the dedicated local program memories of each of the programmed functional units, the dedicated local program memory in each programmed functional unit being loaded before execution of each processing module with all instructions required during execution of that processing module; generating control vectors that control the control paths and data pipelines of said functional units every clock cycle; configuring multiple memory units to operate in different memory access modes and connecting them to the computation units through said switch units to maximize programmability; and executing the processing modules.

2. The method of claim 1 wherein each of the data processing modules is handled by a data pipeline constructed with one or more of said functional units in the processor, and with each data pipeline constructed with single or multiple clock cycle synchronized instruction sequences, one for each of the functional units along the data pipeline.

3. The method of claim 2 wherein different data processing modules can be mapped as follows: (a) onto different hardware blocks with connections between the blocks used for progression from one module to a next module, (b) different data processing modules mapped onto a same hardware block with execution of modules multiplexed onto different time slots in order of module progression, and (c) a combination of both (a) and (b).

4. The method of claim 3 wherein control and synchronization of progression from one data processing module to the next in time slots is achieved through an instruction sequence in a parent control unit for all the functional units used to construct the data pipeline.

5. The method of claim 1 wherein hardware of the processor is dynamically organized into one or more data pipelines.

6. The method of claim 5 wherein each data pipeline employs a hierarchical structure with data flow first established between different parent units and then established between subsidiary units of a parent unit such that the entire data pipeline is constructed one level after another.

7. The method of claim 1 wherein the functional units in the processor are dynamically organized into a control hierarchy in which a parent control unit is provided that only exchanges control and synchronization messages and signals with corresponding subsidiary units.

8. The method of claim 7 wherein a ring-type bus is employed between the processor parent control unit and the functional units it controls, including execution units, switch units and memory units.

9. The method of claim 1 wherein data processing can be performed by parallel execution of instruction sequences in heterogeneous hardware blocks.

10. The method of claim 1 wherein the functional units are flexibly arranged to enable different types of data flow and arithmetic and logic operation sequences in a data pipeline to eliminate buffering in memory and to reduce traffic to and from data memory.

11. The method of claim 1 wherein the common instruction set is used to program diverse functional hardware blocks requiring different numbers of bits for their respective control coding.

12. The method of claim 11 wherein the instruction set employs an instruction format that allows direct coding of instruction sequencing, direct coding of a subset of the hardware controls, and indirect coding of hardware controls through either an address pointer to a control vector memory or a register read and write command.

13. The method of claim 12 wherein the hardware controls coded in an instruction specify an organization of and interconnection between sub blocks within a block to form a specific hardware construct.

14. The method of claim 1 wherein a memory unit includes two dual-ported memory banks, each of which can alternate between being part of a data pipeline construct and interfacing with external memory for bulk loading and unloading of data.

15. The method of claim 14 wherein each of two memory banks is programmed by instruction to connect to either of two read side interfaces with different functionality and either of two write side interfaces with different functionality.

16. The method of claim 15 wherein each of the four interfaces comprises an arithmetic unit that supports a set of functionalities specific to each interface and other elements to enable different modes of memory address calculation or simple data processing.

17. The method of claim 15 wherein each memory unit is used to construct a hierarchical data pipeline that comprises: programming a DSE for data and status exchange amongst the two read side interfaces and two write side interfaces supporting the two memory banks for necessary coordination and timing alignment amongst the four interfaces; and programming each of the four interfaces to establish a data pipeline within an interface wherein the arithmetic unit can be used for simple data processing before data are written to a memory bank or after data are read from a memory bank.

18. The method of claim 1 wherein a finite state machine is implemented using one memory bank in said memory unit to hold state table entries, and an instruction sequence for the memory unit to operate on input data bit streams in another memory bank to traverse from one state entry to the next.

19. A clock-cycle synchronized flexible programmable data processor comprising: a plurality of different functional units, each functional unit being a computation unit, a memory unit, a full access switch unit for interconnecting computation units with memory units, or a control unit, each functional unit having its own program counter, its own instruction fetch and decode unit, and its own dedicated local program memory for storing all instructions required for controlling the functional unit during execution of a processing module, the functional units being interconnectable to form predetermined control paths and data pipelines in a hierarchical manner; a common instruction set for programming the functional units, wherein the instruction set directly codes instruction sequencing and directly or indirectly codes hardware controls; means for generating control vectors that control the control paths and data pipelines of said functional units every clock cycle; and means for configuring multiple memory units to operate in different memory access modes and means for connecting them to computation units through said switch unit to maximize programmability.

20. The processor of claim 19 wherein a data processing program is generated in which the data processing program is divided into different data processing modules, each handled by a data pipeline constructed with one or more said functional units in the processor, and with each data pipeline constructed with single or multiple clock cycle synchronized instruction sequences, one for each of the functional units along the data pipeline.

21. The processor of claim 20 wherein different data processing modules can be mapped onto different hardware blocks with connections between the blocks used for progression from one module to a next module.

22. The processor of claim 20 wherein different data processing modules are mapped onto a same hardware block with execution of modules multiplexed onto different time slots in order of module progression.

23. The processor of claim 21 or 22 wherein control and synchronization of progression from one data processing module to the next in time slots is achieved through an instruction sequence in the parent control unit for all the functional units used to construct the data pipeline.

24. The processor of claim 19 wherein hardware of the processor is dynamically organized into one or more data pipelines.

25. The processor of claim 24 wherein each data pipeline employs a hierarchical structure with data flow first established between different parent units and then established between subsidiary units of a parent unit such that the data pipeline is constructed one level after another.

26. The processor of claim 19 wherein the functional units in the processor are dynamically organized into a control hierarchy in which each parent control unit only exchanges control and synchronization messages and signals with its subsidiary units.

27. The processor of claim 19 wherein data processing is performed by parallel execution of instruction sequences in heterogeneous hardware blocks.

28. The processor of claim 19 wherein the functional units are flexibly arranged to enable different types of data flow and arithmetic and logic operation sequences in a data pipeline to eliminate buffering in memory and to reduce traffic to and from data memory.

29. The processor of claim 19 wherein the common instruction set is used to program diverse functional hardware blocks requiring different numbers of bits for their respective control coding.

30. The processor of claim 29 wherein the common instruction set employs an instruction format that allows direct coding of instruction sequencing, direct coding of a subset of the hardware controls, and indirect coding of hardware controls through one of an address pointer to a control vector memory or a register read and write command.

31. The processor of claim 30 wherein the hardware control coded in an instruction specifies the organization of and interconnection between sub blocks within a block to form a specific hardware construct.

32. The processor of claim 19 wherein a memory unit includes two dual-ported memory banks which can alternate between being part of a data pipeline construct and interfacing with external memory for bulk loading and unloading of data.

33. The processor of claim 32 wherein each of two memory banks can be programmed by instruction to connect to either of two read side interfaces with different functionality and either of two write side interfaces with different functionality.

34. The processor of claim 33 wherein each of the four interfaces comprises an arithmetic unit that supports a set of functionalities specific to each interface and other elements to enable different modes of memory address calculation or simple data processing.

35. The processor of claim 19 wherein each memory unit is used to construct a hierarchical data pipeline that comprises: means for programming a DSE for data and status exchange amongst two read side interfaces and two write side interfaces supporting two memory banks for necessary coordination and timing alignment amongst their interfaces; and means for programming each of the interfaces to establish a data pipeline within an interface where the arithmetic unit can be used for simple data processing before data are written to a memory bank or after data are read from a memory bank.

36. The processor of claim 19 wherein a finite state machine is implemented using one memory bank in a memory unit to hold state table entries, and an instruction sequence for the memory unit to operate on input data bit streams in another memory bank to traverse from one state entry to the next.

37. The processor of claim 19 wherein a ring-type bus is employed between a processor parent control unit and the functional units it controls, including execution units, switch units and memory units.

38. The processor of claim 19, wherein a plurality of the memory units are organizationally programmed either to operate as independent data memory storage units for corresponding different and independent functional blocks, or to operate in synchronization to provide a unified memory storage with appropriate two-dimensional addressing and rotate addressing modes with interconnected functional blocks.

39. The processor of claim 38 wherein the memory units and the functional blocks are matrix-switch inter-connectable.
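Claims 18 and 36 describe a finite state machine built from two memory banks: one bank holds the state-transition table and the other holds the input bit stream, with an instruction sequence walking from one state entry to the next. The sketch below shows that idea in spirit only; the table encoding, the example pattern, and the use of plain Python containers as "memory banks" are all invented for illustration, not taken from the patent.

```python
# Illustrative sketch of the claim 18 / claim 36 idea (assumed encoding,
# not the patented hardware): one "memory bank" holds a state-transition
# table, another holds the input bit stream, and a short loop performs
# one table lookup per input bit to traverse from state to state.

# Bank A: state table, indexed by (current_state, input_bit) -> next_state.
# Example FSM: reaches and stays in state 2 after seeing two 1s in a row.
state_table = {
    (0, 0): 0, (0, 1): 1,
    (1, 0): 0, (1, 1): 2,
    (2, 0): 2, (2, 1): 2,
}

# Bank B: the input data bit stream.
bits = [0, 1, 1, 0, 1]

state = 0
for b in bits:
    state = state_table[(state, b)]  # one state-entry lookup per bit
print(state)
# prints 2
```

In the claimed design the lookup loop would be an instruction sequence executed by the memory unit itself, with both the table and the bit stream resident in its two banks; here both are simply Python objects traversed by the interpreter.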