IPC Classification Information
Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application No. | US-0915936 (2010-10-29)
Registration No. | US-8572469 (2013-10-29)
Inventors / Address | Lee, Tak K.; Shen, Ba-Zhong
Applicant / Address |
Agent / Address |
Citation Info | Times cited: 0 / Patents cited: 17
Abstract
Turbo decoder employing ARP (almost regular permutation) interleave and arbitrary number of decoding processors. A novel approach is presented herein by which an arbitrarily selected number (M) of decoding processors (e.g., a plurality of parallel implemented turbo decoders) may be employed to perform decoding of a turbo coded signal while still using a selected embodiment of an ARP (almost regular permutation) interleave. The desired number of decoding processors is selected, and a very slight modification of the information block (thereby generating a virtual information block) is made to accommodate that virtual information block across all of the decoding processors during all decoding cycles except some dummy decoding cycles. In addition, contention-free memory mapping is provided between the decoding processors (e.g., a plurality of turbo decoders) and memory banks (e.g., a plurality of memories).
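The virtual-information-block idea in the abstract can be sketched numerically: the real block of L bits is padded up to the smallest length that divides evenly among the M chosen decoders while remaining a multiple of the ARP dithering cycle, and the padded tail positions are what the "dummy decoding cycles" handle. This is an illustrative sketch, not the patent's exact construction; the padding rule and the default cycle C = 4 are assumptions.

```python
import math

def virtual_block_length(L, M, C=4):
    """Pad information block length L up to the smallest L' that is a
    multiple of C*M, so L' splits into M equal windows whose size is a
    multiple of the ARP dithering cycle C (C=4 is a common choice)."""
    unit = C * M
    return unit * math.ceil(L / unit)

def dummy_positions(L, M, C=4):
    """Indices in the virtual block beyond the real data; these are the
    positions covered by dummy decoding rather than real decoding."""
    Lv = virtual_block_length(L, M, C)
    return list(range(L, Lv))

# Example: a block of L = 50 bits decoded by M = 3 processors is
# padded to a virtual block of 60 positions, 10 of them dummy.
print(virtual_block_length(50, 3))      # → 60
print(len(dummy_positions(50, 3)))      # → 10
```

Note that when L already divides evenly (e.g., L = 48, M = 4, C = 4), no padding is needed and no dummy cycles occur, matching the abstract's "very slight modification" framing.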
Representative Claims
1. An apparatus comprising: a plurality of memories; a plurality of turbo decoders configured to decode a turbo coded signal having an information block length, L, such that the plurality of turbo decoders including at most L turbo decoders; and a processing module, coupled to the plurality of memories and the plurality of turbo decoders, configured to perform contention-free mapping between the plurality of memories and the plurality of turbo decoders during a plurality of decoding cycles; and wherein: during at least one of the plurality of decoding cycles: a first subset of the plurality of turbo decoders is configured to process information from a subset of the plurality of memories, such that each turbo decoder of the first subset of the plurality of turbo decoders processing respective information from a respective, corresponding one memory of the subset of the plurality of memories, to generate updated information; and a second subset of the plurality of turbo decoders is configured to perform dummy decoding; and the apparatus is configured to employ the updated information to generate estimates of information bits encoded within the turbo coded signal.

2. The apparatus of claim 1, wherein during at least one additional of the plurality of decoding cycles occurring before the at least one of the plurality of decoding cycles, all of the plurality of turbo decoders to process information from all of the plurality of memories, such that each of the plurality of turbo decoders to process at least one additional respective information from at least one additional respective, corresponding one of the plurality of memories, to generate prior updated information; and the plurality of turbo decoders to employ the prior updated information and the updated information to generate the estimates of information bits encoded within the turbo coded signal.

3. The apparatus of claim 1, wherein the contention-free mapping corresponding to an almost regular permutation (ARP) interleaving by which the turbo coded signal has been generated.

4. The apparatus of claim 1, wherein contention-free mapping during a first of the plurality of decoding cycles being defined according to a first almost regular permutation (ARP) dithering cycle, a first virtual block length, and a first window size; and contention-free mapping during a second of the plurality of decoding cycles being defined according to a second ARP dithering cycle, a second virtual block length, and a second window size.

5. The apparatus of claim 1, wherein during at least one additional of the plurality of decoding cycles occurring before the at least one of the plurality of decoding cycles, the processing module is further configured to perform contention-free mapping between all of the plurality of memories and all of the plurality of turbo decoders; during the at least one of the plurality of decoding cycles, the processing module is further configured to perform contention-free mapping between the subset of the plurality of memories and the first subset of the plurality of turbo decoders; and during the at least one of the plurality of decoding cycles, no turbo decoder within the second subset of the plurality of turbo decoders being coupled to any of the plurality of memories.

6. The apparatus of claim 1, wherein the plurality of turbo decoders including a first number of turbo decoders; and the plurality of memories including a second number of memories.

7. The apparatus of claim 1, wherein a number of the at least one of the plurality of decoding cycles during which the second subset of the plurality of turbo decoders performing dummy decoding being based on the information block length, L.

8. The apparatus of claim 1, wherein the turbo coded signal being received from storage media of a hard disk drive (HDD).

9. The apparatus of claim 1, wherein the apparatus being a communication device; and the communication device being operative within at least one of a satellite communication system, a wireless communication system, a wired communication system, and a fiber-optic communication system.

10. An apparatus comprising: a plurality of memories including a first number of memories; a plurality of turbo decoders configured to decode a turbo coded signal having an information block length, L, such that the plurality of turbo decoders including a second number of turbo decoders being less than or equal to L; and a processing module, coupled to the plurality of memories and the plurality of turbo decoders, configured to perform contention-free mapping between the plurality of memories and the plurality of turbo decoders during a plurality of decoding cycles; and wherein: during at least one of the plurality of decoding cycles: a first subset of the plurality of turbo decoders is configured to process information from a subset of the plurality of memories, such that each turbo decoder of the first subset of the plurality of turbo decoders processing respective information from a respective, corresponding one memory of the subset of the plurality of memories, to generate updated information; and a second subset of the plurality of turbo decoders, being based on the information block length, L, is configured to perform dummy decoding; and the apparatus is configured to employ the updated information to generate estimates of information bits encoded within the turbo coded signal.

11. The apparatus of claim 10, wherein during at least one additional of the plurality of decoding cycles occurring before the at least one of the plurality of decoding cycles, all of the plurality of turbo decoders to process information from all of the plurality of memories, such that each of the plurality of turbo decoders to process at least one additional respective information from at least one additional respective, corresponding one of the plurality of memories, thereby generating prior updated information; and the plurality of turbo decoders employing the prior updated information and the updated information to generate the estimates of information bits encoded within the turbo coded signal.

12. The apparatus of claim 10, wherein during at least one additional of the plurality of decoding cycles occurring before the at least one of the plurality of decoding cycles, the processing module is further configured to perform contention-free mapping between all of the plurality of memories and all of the plurality of turbo decoders; during the at least one of the plurality of decoding cycles, the processing module is further configured to perform contention-free mapping between the subset of the plurality of memories and the first subset of the plurality of turbo decoders; and during the at least one of the plurality of decoding cycles, no turbo decoder within the second subset of the plurality of turbo decoders being coupled to any of the plurality of memories.

13. The apparatus of claim 10, wherein the apparatus being a communication device; and the communication device being operative within at least one of a satellite communication system, a wireless communication system, a wired communication system, and a fiber-optic communication system.

14. A method for execution by a communication device, the method comprising: receiving a turbo coded signal having an information block length, L, from a communication channel; effectuating contention-free mapping between a plurality of memories and a plurality of turbo decoders during a plurality of decoding cycles, such that the plurality of turbo decoders including at most L turbo decoders; during at least one of the plurality of decoding cycles: operating a first subset of the plurality of turbo decoders in accordance with processing information from a subset of the plurality of memories, such that each turbo decoder of the first subset of the plurality of turbo decoders processing respective information from a respective, corresponding one memory of the subset of the plurality of memories, to generate updated information; and operating a second subset of the plurality of turbo decoders in accordance with dummy decoding; and employing the updated information to generate estimates of information bits encoded within the turbo coded signal.

15. The method of claim 14, further comprising: during at least one additional of the plurality of decoding cycles occurring before the at least one of the plurality of decoding cycles, operating all of the plurality of turbo decoders processing information from all of the plurality of memories, such that each of the plurality of turbo decoders configured to process at least one additional respective information from at least one additional respective, corresponding one of the plurality of memories, to generate prior updated information; and employing the prior updated information and the updated information to generate the estimates of information bits encoded within the turbo coded signal; and wherein a number of the at least one of the plurality of decoding cycles during which the second subset of the plurality of turbo decoders performing dummy decoding being based on the information block length, L.

16. The method of claim 14, further comprising: effectuating the contention-free mapping corresponding to an almost regular permutation (ARP) interleaving by which the turbo coded signal has been generated.

17. The method of claim 14, further comprising: effectuating the contention-free mapping during a first of the plurality of decoding cycles as defined according to a first almost regular permutation (ARP) dithering cycle, a first virtual block length, and a first window size; and effectuating contention-free mapping during a second of the plurality of decoding cycles as defined according to a second ARP dithering cycle, a second virtual block length, and a second window size.

18. The method of claim 14, further comprising: during at least one additional of the plurality of decoding cycles occurring before the at least one of the plurality of decoding cycles, effectuating contention-free mapping between all of the plurality of memories and all of the plurality of turbo decoders; during the at least one of the plurality of decoding cycles, effectuating contention-free mapping between the subset of the plurality of memories and the first subset of the plurality of turbo decoders; and during the at least one of the plurality of decoding cycles, decoupling each turbo decoder within the second subset of the plurality of turbo decoders from the plurality of memories.

19. The method of claim 14, wherein the plurality of turbo decoders including a first number of turbo decoders; and the plurality of memories including a second number of memories.

20. The method of claim 14, wherein the communication device being operative within at least one of a satellite communication system, a wireless communication system, a wired communication system, and a fiber-optic communication system.
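The claims repeatedly tie the contention-free mapping to an ARP interleave, a dithering cycle, and a window size. The sketch below illustrates what "contention-free" means under a simple division memory mapping (bank = index // window size): in every decoding cycle, the parallel decoders must all land in distinct memory banks. The ARP form pi(i) = (P*i + A[i % C]) % L and the specific parameters L = 24, P = 7, A = [0, 4, 12, 8] are illustrative assumptions, not values taken from the patent.

```python
def arp_interleave(L, P, A):
    """Almost regular permutation on {0, ..., L-1}:
    pi(i) = (P*i + A[i % C]) % L, with dithering cycle C = len(A).
    This is a permutation when gcd(P, L) == 1, C divides L, and every
    A value is a multiple of C (illustrative sufficient conditions)."""
    C = len(A)
    return [(P * i + A[i % C]) % L for i in range(L)]

def is_contention_free(mapping, M, W):
    """Division memory mapping: position i lives in bank i // W.
    In cycle t, processor m accesses position m*W + t; the mapping is
    contention-free if the M processors always hit M distinct banks."""
    for t in range(W):
        banks = {mapping[m * W + t] // W for m in range(M)}
        if len(banks) != M:
            return False
    return True

pi = arp_interleave(24, 7, [0, 4, 12, 8])  # a permutation of 0..23
print(is_contention_free(pi, 3, 8))        # → True (M=3 decoders, W=8)
```

In the natural (non-interleaved) decoding order, processor m simply reads its own bank m, so division mapping is trivially contention-free there; the nontrivial requirement, which the claims attribute to the ARP structure, is that the same property also holds in the interleaved order.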