A system for a processor with a memory that supports combined line and word access is presented. The system performs narrow read/write memory accesses and wide read/write memory accesses to the same memory bank, using multiplexers and latches to direct data. The system processes 16-byte load/store requests with a narrow read/write memory access and processes 128-byte DMA and instruction-fetch requests with a wide read/write memory access. During DMA requests, the system writes/reads sixteen DMA operations to/from memory in one instruction cycle. As a result, the memory is available to process load/store or instruction-fetch requests during the fifteen other instruction cycles.
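The write path above can be illustrated with a small sketch. This is a hypothetical software model (the class and variable names are ours, not the patent's): sixteen DMA write operations are accumulated over sixteen instruction cycles, then committed to the memory bank as one 128-byte wide write in a single cycle, leaving the other fifteen cycles free for load/store or instruction-fetch traffic.

```python
# Hypothetical model of the write-accumulation buffer described in the
# abstract; names and structure are illustrative, not from the patent.

LINE_BYTES = 128          # one memory line = one wide write access
OPS_PER_LINE = 16         # DMA operations accumulated per line (per the abstract)
BEAT_BYTES = LINE_BYTES // OPS_PER_LINE  # bytes carried by each DMA operation

class WriteAccumulationBuffer:
    """Collects DMA write beats until a full line can be committed."""

    def __init__(self):
        self.beats = []

    def push(self, data):
        """Accept one DMA beat; return a full 128-byte line when ready."""
        assert len(data) == BEAT_BYTES
        self.beats.append(data)
        if len(self.beats) == OPS_PER_LINE:
            line = b"".join(self.beats)
            self.beats.clear()
            return line   # committed as ONE wide write this cycle
        return None       # memory bank stays free this cycle

buf = WriteAccumulationBuffer()
free_cycles = 0
line = None
for i in range(OPS_PER_LINE):
    line = buf.push(bytes([i]) * BEAT_BYTES)
    if line is None:
        free_cycles += 1   # bank available for load/store or fetch

print(len(line), free_cycles)  # full 128-byte line; 15 cycles left free
```

The multiplexer in the claims then steers either this accumulated wide write or an ordinary narrow store into the single memory space.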
Representative Claims
What is claimed is:

1. A system comprising a memory that supports a narrow read/write memory access and a wide read/write memory access to a single memory space, wherein the system processes a load/store request that corresponds to the narrow read/write memory access that is a single 16 byte quad-word, and wherein the system processes a DMA request that at all times corresponds to the wide read/write memory access that is a single 128 byte line of the memory; a write accumulation buffer that accumulates a plurality of DMA write operations over a plurality of instruction cycles, the plurality of DMA write operations corresponding to the wide write memory access that executes over a single instruction cycle; and a multiplexer that provides the wide write memory access to the memory for the DMA request and provides the narrow write memory access to the memory for the store request.

2. The system of claim 1 wherein the wide read/write memory access corresponds to an instruction fetch request.

3. The system of claim 2 wherein the system is effective to prioritize the requests in the order of the DMA request, then the load/store request, and then the instruction fetch request.

4. The system of claim 1 further comprising: a read latch that receives DMA data from the wide read memory access, the DMA data corresponding to a plurality of DMA read operations; and wherein the read latch provides the plurality of DMA read operations to a DMA unit over a plurality of instruction cycles.

5. The system of claim 1 wherein the memory is used in a processing element architecture.

6. The system of claim 1 wherein the system supports split accumulation latch capability and a plurality of memory banks.

7. The system of claim 6 wherein, during the narrow read/write memory access that corresponds to the load/store request, the system accesses one of the plurality of memory banks, and wherein the remaining plurality of memory banks are not accessed.

8. The system of claim 1 wherein the wide read/write memory access corresponds to cache line cast-out or reload operations.

9. The system of claim 1 further comprising: a first read latch to receive data from the single memory space and from a second read latch during a wide read operation, wherein the second read latch receives data from the single memory space and stages the data for the first read latch.

10. The system of claim 1 further comprising: a first processor type; and one or more second processor types, wherein the memory is included in the second processor types.

11. A program product comprising computer readable code stored in computer memory, the computer readable code being effective to: receive a memory request; determine whether the memory request is a store request, wherein the store request corresponds to a narrow write memory access to a memory that is a single 16 byte quad-word, or whether the memory request is a DMA write request, wherein the DMA request at all times corresponds to a wide write memory access to the memory that is a single 128 byte line of the memory; in response to determining that the memory request is the store request, instruct a multiplexer to provide the narrow write memory access to the memory and perform the narrow write memory access to the memory through the multiplexer; and in response to determining that the memory request is the DMA write request, instruct the multiplexer to provide the wide write memory access to the memory in order to perform the wide write memory access to the memory through the multiplexer, wherein during the wide write memory access, accumulate a plurality of DMA write operations over a plurality of instruction cycles, the plurality of DMA write operations corresponding to the wide write memory access that executes over a single instruction cycle.

12. The program product of claim 11 wherein the wide read/write memory access corresponds to an instruction fetch request, the computer program code further effective to: prioritize the requests in the order of the DMA request, then the load/store request, and then the instruction fetch request.

13. A computer-implemented method comprising: receiving a memory request; determining whether the memory request is a store request, wherein the store request corresponds to a narrow write memory access to a memory that is a single 16 byte quad-word, or whether the memory request is a DMA write request, wherein the DMA write request at all times corresponds to a wide write memory access to the memory that is a single 128 byte line of the memory; in response to determining that the memory request is the store request, instructing a multiplexer to provide the narrow write memory access to the memory and perform the narrow write memory access to the memory through the multiplexer; and in response to determining that the memory request is the DMA write request, instructing the multiplexer to provide the wide write memory access to the memory in order to perform the wide write memory access to the memory through the multiplexer, wherein during the wide write memory access, accumulating a plurality of DMA write operations over a plurality of instruction cycles, the plurality of DMA write operations corresponding to the wide write memory access that executes over a single instruction cycle.

14. The method of claim 13 wherein the wide read/write memory access corresponds to an instruction fetch request.

15. The method of claim 14 further comprising: prioritizing the requests in the order of the DMA write request, then the store request, and then the instruction fetch request.

16. The method of claim 13 further comprising: utilizing a read latch that receives DMA data for the wide read memory access, the DMA data corresponding to a plurality of DMA read operations; and wherein the read latch provides the plurality of DMA read operations to a DMA unit over a plurality of instruction cycles.

17. The method of claim 13 wherein the memory is used in a processing element architecture.

18. The method of claim 13 wherein the method supports split accumulation latch capability and a plurality of memory banks.

19. The method of claim 18 wherein, during the narrow write memory access that corresponds to the store request, the method further comprising: accessing one of the plurality of memory banks, and wherein the remaining plurality of memory banks are not accessed.

20. The method of claim 13 wherein the wide write memory access corresponds to cache line cast-out or reload operations.
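The read side described in claims 4 and 9 can be sketched the same way. In this hypothetical model (again, our names, not the patent's), a single wide read fills a read latch with a full 128-byte line, and the DMA unit then drains it as sixteen beats over sixteen instruction cycles, so the memory bank itself is accessed only once.

```python
# Hypothetical model of the read latch from claims 4 and 9: one wide read
# per line, then beat-by-beat delivery to the DMA unit without further
# memory accesses. Names and beat sizing are illustrative assumptions.

LINE_BYTES = 128          # one wide read returns a full line
OPS_PER_LINE = 16         # DMA read operations drained per line
BEAT_BYTES = LINE_BYTES // OPS_PER_LINE

class ReadLatch:
    """Holds one memory line captured by a single wide read access."""

    def __init__(self, line):
        assert len(line) == LINE_BYTES   # filled by ONE wide read
        self.line = line
        self.pos = 0

    def drain_beat(self):
        """Hand the DMA unit the next beat; no memory access needed."""
        beat = self.line[self.pos:self.pos + BEAT_BYTES]
        self.pos += BEAT_BYTES
        return beat

memory_line = bytes(range(128))          # contents of one 128-byte line
latch = ReadLatch(memory_line)           # one wide read, one cycle
beats = [latch.drain_beat() for _ in range(OPS_PER_LINE)]
print(b"".join(beats) == memory_line)    # line reassembled beat by beat
```

Claim 9's two-latch arrangement would simply stage a second line behind this one so the next drain sequence can begin without waiting on the memory.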
Pechter Richard G. (Escondido CA) Selkovitch Ronald (Ramona CA) Tsy Quoanh W. (San Diego CA) Woolf William C. (San Diego CA), Prefetch circuit for a computer memory subject to consecutive addressing.
Chen Chein C. ; Cooper John C. ; Francis David E. ; Coomes Joseph A. ; Leach Jerald G., Programmable memory interface for efficient transfer of different size data.
Arroyo Ronald X. (Austin TX) Burky William E. (Austin TX) Gruwell Tricia A. (Austin TX) Hinojosa Joaquin (Round Rock TX), Steering logic to directly connect devices having different data word widths.
Danny Marvin Neal ; Steven Mark Thurber, System for determining whether a subsequent transaction may be allowed or must be allowed or must not be allowed to bypass a preceding transaction.
Watanabe Akira (San Jose CA) Maheshwari Dinesh C. (Santa Clara CA) McKeever Bruce T. (Cupertino CA) Somasundaram Madian (San Jose CA), System for transferring M elements X times and transferring N elements one time for an array that is X*M+N long responsi.
Flachs, Brian K.; Hofstee, Harm P.; Johns, Charles R.; King, Matthew E.; Liberty, John S.; Michael, Brad W., Multithreaded programmable direct memory access engine.