IPC Classification Information

Country/Type | United States (US) Patent, Published
International Patent Classification (IPC, 7th ed.) | -
Application No. | 18127105 (2023-03-28)
Publication No. | 20230315533 (2023-10-05)
Priority Information | CN-202210349300.5 (2022-04-01)
Inventors / Address | YANG, Jingzhong; SHAN, Gang
Applicant / Address | MONTAGE TECHNOLOGY CO., LTD.
Citation Information | Times cited: 0 / Patents cited: 0
Abstract
An AI computing platform, an AI computing method, and an AI cloud computing system, the platform including: at least one computing component, each computing component includes: a processor, configured to initiate a calculation task and decompose the calculation task into a plurality of ordered subtasks according to a network topology information table stored therein; a plurality of near-memory computing modules, the plurality of near-memory computing modules connecting in pairs with the processor, and the plurality of near-memory computing modules connecting in pairs with each other, wherein the plurality of near-memory computing modules are each configured to implement different operation types, and the plurality of near-memory computing modules complete one or more of the plurality of subtasks according to the operation types they each implement.
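As a rough illustration of the mechanism the abstract describes, the network topology information table and the task-decomposition step could be sketched as below. All field names, types, and the least-loaded selection policy are assumptions made for illustration, not details taken from the patent:

```python
from dataclasses import dataclass, field

# Hypothetical model of the "network topology information table" fields
# listed in claim 2. Names and types are assumptions for illustration.
@dataclass
class TopologyEntry:
    host_ip: str                # IP address of the host where the processor is located
    module_index: int           # near-memory computing module index
    supported_ops: list[str]    # operation type(s) the module implements
    load_rate: float            # current load rate of the module
    adjacent_ops: dict[int, list[str]] = field(default_factory=dict)  # ops of adjacent modules

def decompose(task_ops: list[str], table: list[TopologyEntry]) -> list[tuple[str, int]]:
    """Map each ordered subtask (an operation type) to a module that supports
    it, preferring the least-loaded candidate (an assumed policy)."""
    plan = []
    for op in task_ops:
        candidates = [e for e in table if op in e.supported_ops]
        best = min(candidates, key=lambda e: e.load_rate)  # raises if no module supports op
        plan.append((op, best.module_index))
    return plan
```

For example, with one matmul-only module and one relu/softmax module, the task `["matmul", "relu"]` would be split across the two modules in order.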
Representative Claims
1. An AI computing platform, comprising at least one computing component each comprising: a processor, configured to initiate a calculation task and decompose the calculation task into a plurality of ordered subtasks according to a network topology information table stored therein; and a plurality of near-memory computing modules, the plurality of near-memory computing modules connecting in pairs with the processor, and the plurality of near-memory computing modules connecting in pairs with each other, wherein the plurality of near-memory computing modules are each configured to implement different operation types, and the plurality of near-memory computing modules complete one or more of the plurality of subtasks according to the operation types they each implement.

2. The AI computing platform according to claim 1, wherein the network topology information table comprises: an IP address of a host where the processor is located, a near-memory computing module index, operation type(s) supported by the near-memory computing module, a load rate, a number of adjacent near-memory computing modules, and operation types supported by the adjacent near-memory computing modules.

3. The AI computing platform according to claim 1, wherein the processor is further configured to generate a package according to the decomposed plurality of ordered subtasks, and transmit the package to a near-memory computing module that processes the first subtask, wherein the package comprises: a number of near-memory computing modules required to process the package, a list of near-memory computing modules required to process the package, a header cyclic redundancy check, a payload length, and a payload.

4. The AI computing platform according to claim 3, wherein the near-memory computing module is further configured to process a corresponding subtask when receiving the package and generate a processed package, and transmit the processed package to a next near-memory computing module according to the list of near-memory computing modules or return the processed package to the processor.

5. The AI computing platform according to claim 3, wherein the list of near-memory computing modules comprises an IP address of a host located, an index, operation type(s) supported, and a load rate of the near-memory computing module sequentially processing each subtask.

6. The AI computing platform according to claim 1, wherein the processor is further configured to implement operation type(s) different from the plurality of near-memory computing modules, wherein the processor and the plurality of near-memory computing modules complete one or more of the plurality of subtasks according to the operation types they each implement.

7. The AI computing platform according to claim 1, wherein the processors of each computing component are connected via a bus.

8. An AI computing method, comprising: initiating, by a processor, a calculation task and decomposing the calculation task into a plurality of ordered subtasks according to a network topology information table stored therein; generating, by the processor, a package according to the decomposed plurality of ordered subtasks, and routing the package to a near-memory computing module that processes the first subtask; and processing, by the near-memory computing modules, a corresponding subtask when receiving the package to generate a processed package, and transmitting the processed package to a next near-memory computing module that processes a next subtask until completing all the subtasks, and then routing to the processor connecting to the near-memory computing module that processes the last subtask.

9. The AI computing method according to claim 8, wherein the plurality of near-memory computing modules connect in pairs with the processor, and the plurality of near-memory computing modules connect in pairs with each other, wherein the plurality of near-memory computing modules are each configured to implement different operation types.

10. The AI computing method according to claim 8, wherein the network topology information table comprises: an IP address of a host where the processor is located, a near-memory computing module index, operation type(s) supported by the near-memory computing module, a load rate, a number of adjacent near-memory computing modules, and operation types supported by the adjacent near-memory computing modules.

11. The AI computing method according to claim 8, wherein the package comprises: a number of near-memory computing modules required to process the package, a list of near-memory computing modules required to process the package, a header cyclic redundancy check, a payload length, and a payload.

12. The AI computing method according to claim 11, wherein the list of near-memory computing modules comprises an IP address of a host located, an index, operation type(s) supported, and a load rate of the near-memory computing module sequentially processing each subtask.

13. The AI computing method according to claim 8, wherein the processor is further configured to implement operation type(s) different from the plurality of near-memory computing modules, wherein the processor and the plurality of near-memory computing modules complete one or more of the plurality of subtasks according to the operation types they each implement.

14. An AI cloud computing system, comprising: a cloud computing center, and a plurality of AI computing platforms, wherein the cloud computing center is connected to the plurality of AI computing platforms; the AI computing platform comprises at least one computing component each comprising: a processor, configured to initiate a calculation task and decompose the calculation task into a plurality of ordered subtasks according to a network topology information table stored therein; and a plurality of near-memory computing modules, the plurality of near-memory computing modules connecting in pairs with the processor, and the plurality of near-memory computing modules connecting in pairs with each other, wherein the plurality of near-memory computing modules are each configured to implement different operation types, and the plurality of near-memory computing modules complete one or more of the plurality of subtasks according to the operation types they each implement.
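The package format of claim 3 and the sequential routing of claim 8 can be sketched as follows. This is a minimal illustration only: the field names, byte layout, and the choice of CRC-32 for the header check are assumptions, since the patent does not specify them:

```python
from dataclasses import dataclass
from typing import Callable
import zlib

# Hypothetical layout of the package from claim 3: module count, ordered
# module list, header CRC, payload length, and payload. All assumed names.
@dataclass
class Package:
    module_count: int
    module_list: list[int]   # indices of modules processing each subtask, in order
    header_crc: int
    payload_len: int
    payload: bytes

def make_package(module_list: list[int], payload: bytes) -> Package:
    # CRC-32 over an assumed header encoding (module indices + payload length)
    header = bytes(module_list) + len(payload).to_bytes(4, "big")
    return Package(len(module_list), module_list, zlib.crc32(header),
                   len(payload), payload)

def route(pkg: Package, modules: dict[int, Callable[[bytes], bytes]]) -> bytes:
    """Pass the payload through each listed module in order, as in claim 8,
    then return the fully processed result to the processor."""
    data = pkg.payload
    for idx in pkg.module_list:
        data = modules[idx](data)  # each module processes its own subtask
    return data
```

In this sketch, each "module" is just a function on bytes; in the claimed platform each hop would be a transmission of the processed package to the next near-memory computing module in the list.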