| Country / Type | United States (US) Patent, Granted |
|---|---|
| IPC (7th edition) | — |
| Application No. | US-0523764 (2003-07-24) |
| Registration No. | US-8156284 (2012-04-10) |
| Priority Info | DE-102 36 269 (2002-08-07); DE-102 36 271 (2002-08-07); DE-102 36 272 (2002-08-07); WO-PCT/EP02/10065 (2002-08-16); DE-102 38 172 (2002-08-21); DE-102 38 173 (2002-08-21); DE-10238 174 (2002-08-21); DE-102 40 000 (2002-08-27); DE-102 40 022 (2002-08-27); WO-PCT/DE02/03278 (2002-09-03); DE-102 41 812 (2002-09-06); WO-PCT/EP02/10084 (2002-09-09); DE-102 43 322 (2002-09-18); WO-PCT/EP02/10464 (2002-09-18); WO-PCT/EP02/10479 (2002-09-18); WO-PCT/EP02/10536 (2002-09-19); WO-PCT/EP02/10572 (2002-09-19); EP-02022692 (2002-10-10); EP-02027277 (2002-12-06); DE-103 00 380 (2003-01-07); WO-PCT/DE03/00152 (2003-01-20); WO-PCT/EP03/00624 (2003-01-20); WO-PCT/DE03/00489 (2003-02-18); DE-103 10 195 (2003-03-06); WO-PCT/DE03/00942 (2003-03-21); DE-103 15 295 (2003-04-04); EP-03009906 (2003-04-30); DE-103 21 834 (2003-05-15); EP-03013694 (2003-06-17); EP-03015015 (2003-07-02) |
| International Application No. | PCT/EP03/08080 (2003-07-24) |
| §371/§102 Date | 2005-08-02 |
| International Publication No. | WO2004/015568 (2004-02-19) |
| Inventor / Address | — |
| Applicant / Address | — |
| Agent / Address | — |
| Citation Info | Cited by: 32; patents cited: 540 |
In a data-processing method, first result data may be obtained using a plurality of configurable coarse-granular elements, the first result data may be written into a memory that includes spatially separate first and second memory areas and that is connected via a bus to the plurality of configurable coarse-granular elements, the first result data may be subsequently read out from the memory, and the first result data may be subsequently processed using the plurality of configurable coarse-granular elements. In a first configuration, the first memory area may be configured as a write memory, and the second memory area may be configured as a read memory. Subsequent to writing to and reading from the memory in accordance with the first configuration, the first memory area may be configured as a read memory, and the second memory area may be configured as a write memory.
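The abstract describes a double-buffered ("ping-pong") memory: two spatially separate areas whose read and write roles are swapped between processing passes, so results written in one pass can be read back in the next. A minimal Python sketch of that scheme follows; all names (`PingPongMemory`, `process_pass`) are illustrative, not from the patent, and the coarse-granular elements are stood in for by a plain function.

```python
class PingPongMemory:
    """Two spatially separate memory areas with swappable read/write roles."""

    def __init__(self, size):
        self.areas = [[0] * size, [0] * size]
        self.write_idx = 0  # first configuration: area 0 is the write memory

    @property
    def read_area(self):
        return self.areas[1 - self.write_idx]

    @property
    def write_area(self):
        return self.areas[self.write_idx]

    def swap(self):
        # Reconfigure: the former write memory becomes the read memory,
        # and vice versa, as in the patent's second configuration.
        self.write_idx = 1 - self.write_idx


def process_pass(mem, f):
    """One pass: read operands from the read memory, apply operation f
    (standing in for the coarse-granular elements), write results into
    the write memory, then swap the two areas' roles."""
    for i, x in enumerate(mem.read_area):
        mem.write_area[i] = f(x)
    mem.swap()


mem = PingPongMemory(4)
mem.write_area[:] = [1, 2, 3, 4]   # first result data written in pass 0
mem.swap()                         # those results now become readable
process_pass(mem, lambda x: x * 2)  # next pass processes the first results
```

After the second pass, `mem.read_area` holds the doubled values, ready to be consumed by a subsequent configuration.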
1. A data processing arrangement, comprising: at least one standard processor adapted for processing data in a sequential manner, the at least one standard processor including an instruction pipeline; a reconfigurable array including a plurality of data processing cells coupled to the at least one standard processor via the instruction pipeline; a plurality of IRAM memory elements coupled to the reconfigurable array, at least some of the IRAM memory elements forming a data cache for storing local cache copies of a main memory; and a preloadable configuration cache for storing configuration data, wherein at least some of the data processing cells of the reconfigurable array are coupled to the configuration cache, thereby supporting at least one of preloading of a configuration and a fast configuration switch.
2. The data processing arrangement of claim 1, wherein at least some of the IRAM memory elements include multiple IRAM instances.
3. The data processing arrangement of claim 2, wherein at least some of the IRAM memory elements are associated with a starting address.
4. The data processing arrangement of claim 2, wherein at least some of the IRAM memory elements are associated with state information.
5. The data processing arrangement of claim 2, wherein at least some of the IRAM memory elements are associated with an address TAG.
6. The data processing arrangement of claim 5, wherein, if no address TAG matches a data access, a corresponding memory area is newly loaded into an empty IRAM instance.
7. The data processing arrangement of claim 6, wherein, if no empty IRAM instance is available, an unmodified instance is declared empty and overwritten with the newly loaded memory area.
8. The data processing arrangement of claim 6, wherein, if no empty IRAM instance is available, a modified instance is cleaned by writing back its data to the main memory.
9. The data processing arrangement of claim 8, wherein a state machine cleans unused IRAM instances by writing back their content in unused memory cycles.
10. The data processing arrangement of claim 2, wherein at least some of the IRAM memory elements are associated with a starting address and with state information.
11. The data processing arrangement of claim 2, wherein at least some of the IRAM memory elements are associated with a starting address and an address TAG.
12. The data processing arrangement of claim 2, wherein at least some of the IRAM memory elements are associated with a starting address, state information, and an address TAG.
13. The data processing arrangement of claim 2, wherein at least some of the IRAM memory elements are associated with state information and an address TAG.
14. The data processing arrangement of any of claims 1, 2, and 3 to 13, wherein the plurality of data processing cells are coarse-grained.
15. The data processing arrangement of claim 14, wherein the reconfigurable array is runtime reconfigurable.
16. The data processing arrangement of claim 15, wherein the reconfigurable array is one of an FPGA, a DSP, and a data-flow processor.
17. A data processing method, comprising: providing a reconfigurable array of data processing cells for data processing; providing at least one standard processor for processing data in a sequential manner, the at least one standard processor including an instruction pipeline via which the reconfigurable array is coupled to the at least one standard processor; providing a preloadable configuration cache for storing configuration data; coupling the reconfigurable array to a cache for data processing, the cache containing local cache copies of a main memory, including a plurality of IRAM memory elements, and being explicitly software managed; and coupling at least some of the data processing cells of the reconfigurable array to the configuration cache, thereby supporting at least one of preloading a configuration and a fast configuration switch.
18. The data processing method of claim 17, wherein the data processing cells are coarse-grained cells.
19. The data processing method of claim 18, wherein the reconfigurable array is runtime reconfigurable.
20. The data processing method of any one of claims 17 to 19, wherein the reconfigurable array is an FPGA.
21. The data processing method of any one of claims 17 to 19, wherein the reconfigurable array is a DSP.
22. The data processing method of any one of claims 17 to 19, wherein the reconfigurable array is a data-flow processor.
23. The data processing method of claim 17, wherein configurations executed on the reconfigurable array are not pre-emptive.
24. The data processing method of claim 17, wherein the cache containing local cache copies of a main memory eliminates explicit store instructions using cache write-back operations.
25. The data processing method of claim 24, wherein store is executed delayed as cache write-back.
26. The data processing method of claim 24, wherein the reconfigurable array supports pipeline stages of Load, Execute, and Store.
27. The data processing method of claim 26, wherein the Store pipeline stage is executed as a cache write-back operation.
28. The data processing method of claim 24, wherein the de-centralized explicitly preloaded configuration cache supports preloading of at least one of a configuration and a fast configuration switch.
29. The data processing method of claim 24, wherein the de-centralized explicitly preloaded configuration cache is implemented as a FIFO.
30. The data processing method of claim 24, wherein the reconfigurable array operates asynchronously to the at least one standard processor.
31. The data processing method of claim 24, wherein the reconfigurable array and the at least one standard processor are synchronized via at least one explicit instruction.
32. The data processing method of claim 17, wherein the explicitly software managed cache is preloadable via a configuration configured onto the reconfigurable array.
33. The data processing method of claim 32, wherein the explicitly software managed cache is preloadable via a burst-preload instruction triggered by the at least one standard processor.
34. The data processing method of claim 17, wherein the at least one standard processor is a RISC processor.
35. The data processing method of claim 17, wherein the explicitly software managed cache is written back to the main memory via a synchronization instruction.
36. The data processing method of claim 17, wherein the reconfigurable array supports pipeline stages of Load, Execute, and Store.
37. The data processing method of claim 36, wherein the Store pipeline stage is executed as a cache write-back operation.
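Claims 5 through 9 describe a tag-matched IRAM cache with a three-tier miss policy: load into an empty instance if one exists, otherwise overwrite an unmodified instance, otherwise clean a modified instance by writing its data back to main memory first. A minimal Python sketch of that policy follows; all names (`IramElement`, `access`) and the single-element setup are illustrative assumptions, not the patent's implementation.

```python
# Per-instance states assumed for illustration: EMPTY (no data),
# CLEAN (unmodified copy), MODIFIED (dirty copy needing write-back).
EMPTY, CLEAN, MODIFIED = "empty", "clean", "modified"


class IramElement:
    """One IRAM memory element holding multiple tagged IRAM instances."""

    def __init__(self, n_instances, main_memory):
        self.main_memory = main_memory  # backing store: dict of tag -> data list
        self.instances = [{"tag": None, "state": EMPTY, "data": None}
                          for _ in range(n_instances)]

    def access(self, tag):
        # Hit: an instance whose address TAG matches the data access.
        for inst in self.instances:
            if inst["state"] != EMPTY and inst["tag"] == tag:
                return inst["data"]
        # Miss (claim 6): prefer an empty instance for the new memory area.
        victim = next((i for i in self.instances if i["state"] == EMPTY), None)
        if victim is None:
            # Claim 7: no empty instance, so declare an unmodified one empty.
            victim = next((i for i in self.instances if i["state"] == CLEAN), None)
        if victim is None:
            # Claim 8: only modified instances remain; clean one by
            # writing its data back to the main memory before reuse.
            victim = self.instances[0]
            self.main_memory[victim["tag"]] = victim["data"]
        victim["tag"] = tag
        victim["data"] = list(self.main_memory[tag])
        victim["state"] = CLEAN
        return victim["data"]
```

With a single IRAM instance, a second access to a different tag simply overwrites the clean copy without any write-back traffic, matching the claim 7 path.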
Copyright KISTI. All Rights Reserved.