A runtime system implemented in accordance with the present invention provides an application platform for parallel-processing computer systems. Such a runtime system enables users to leverage the computational power of these systems to accelerate and optimize numeric and array-intensive computations in their application programs, greatly increasing the performance of high-performance computing (HPC) applications.
Representative Claims
1. A computer-implemented method, comprising: in a multi-thread runtime system configured to run on a parallel-processing computer system that includes multiple processing elements, at least two of which have different instruction set architectures, performing the following at run time:
receiving one or more operation requests issued by an application;
generating a first compute kernel for a first subset of the operation requests and a second compute kernel for a second subset of the operation requests, wherein the first subset of the operation requests is dynamically chosen by the runtime system for generating the first compute kernel and the second subset of the operation requests is pre-specified in the application for generating the second compute kernel, wherein the first compute kernel is configured to be executed on a first processing element and the second compute kernel is configured to be executed on a second processing element, the first and second processing elements having different instruction set architectures;
storing the first compute kernel in a first program cache for later reuse by the runtime system in lieu of generating a corresponding compute kernel;
storing the second compute kernel in a second program cache for later reuse by the runtime system in lieu of generating a corresponding compute kernel; and
executing the first and second compute kernels on the corresponding first and second processing elements of the parallel-processing computer system.

2.
A parallel-processing computer system, comprising:
multiple processing elements, at least two of which have different instruction set architectures, wherein each of the processing elements has its associated memory; and
a multi-thread runtime system being stored in the memory of at least one of the processing elements and executed by the processing elements, wherein the runtime system is configured to perform the following functions at runtime:
receiving one or more operation requests issued by an application;
generating a first compute kernel for a first subset of the operation requests and a second compute kernel for a second subset of the operation requests, wherein the first subset of the operation requests is dynamically chosen by the runtime system for generating the first compute kernel and the second subset of the operation requests is pre-specified in the application for generating the second compute kernel, wherein the first compute kernel is configured to be executed on a first processing element and the second compute kernel is configured to be executed on a second processing element, the first and second processing elements having different instruction set architectures;
storing the first compute kernel in a first program cache for later reuse by the runtime system in lieu of generating a corresponding compute kernel;
storing the second compute kernel in a second program cache for later reuse by the runtime system in lieu of generating a corresponding compute kernel; and
executing the first and second compute kernels on the corresponding first and second processing elements of the parallel-processing computer system.

3.
A computer program product, comprising: for use in conjunction with a parallel-processing computer system that includes multiple processing elements, at least two of which have different instruction set architectures, the computer program product comprising a non-transitory computer readable storage medium and a computer program mechanism embedded therein, the computer program mechanism being configured to perform the following functions at runtime in a multi-thread runtime system:
receiving one or more operation requests issued by an application;
generating a first compute kernel for a first subset of the operation requests and a second compute kernel for a second subset of the operation requests, wherein the first subset of the operation requests is dynamically chosen by the runtime system for generating the first compute kernel and the second subset of the operation requests is pre-specified in the application for generating the second compute kernel, wherein the first compute kernel is configured to be executed on a first processing element and the second compute kernel is configured to be executed on a second processing element, the first and second processing elements having different instruction set architectures;
storing the first compute kernel in a first program cache for later reuse by the runtime system in lieu of generating a corresponding compute kernel;
storing the second compute kernel in a second program cache for later reuse by the runtime system in lieu of generating a corresponding compute kernel; and
executing the first and second compute kernels on the corresponding first and second processing elements of the parallel-processing computer system.

4.
The computer-implemented method of claim 1, further including:
before the generation of a compute kernel, generating one or more performance attributes for each of the respective operation requests associated with the compute kernel; and
examining the performance attributes associated with the respective operation requests to determine whether a predefined compilation condition is met.

5. The computer-implemented method of claim 4, wherein the performance attributes include at least one of (i) creation of a first object that triggers the generation of a compute kernel, (ii) release of the last reference to a second object, and (iii) performance hints from the application that specify when and how the compute kernel should be generated for the respective operation requests.

6. The computer-implemented method of claim 4, wherein the predefined compilation condition includes at least one of (i) receiving a data access operation request from the application, (ii) receiving a release of the last reference to an object, and (iii) the number of received operation requests exceeding a predetermined minimum number of operation requests.

7.
The computer-implemented method of claim 4, further including:
performing a lookup of the second program cache for a pre-generated version of the compute kernel for the respective operation requests and reusing the pre-generated version of the compute kernel; and
if the pre-generated version of the compute kernel is not found in the second program cache, determining whether the predefined compilation condition is met; if so, performing a lookup of the first program cache for a pre-generated version of the compute kernel for the respective operation requests and reusing the pre-generated version of the first compute kernel; and
if the pre-generated version of the compute kernel is not found in the first program cache, identifying a respective processing element for the compute kernel and generating the compute kernel for the respective operation requests.

8. The computer-implemented method of claim 1, wherein at least one of the processing elements is selected from the group of a single-core or multi-core CPU, a GPU, a stream processor, and a coprocessor.

9. The parallel-processing computer system of claim 2, further including:
instructions for, before the generation of a compute kernel, generating one or more performance attributes for each of the respective operation requests associated with the compute kernel; and
examining the performance attributes associated with the respective operation requests to determine whether a predefined compilation condition is met.

10. The parallel-processing computer system of claim 9, wherein the performance attributes include at least one of (i) creation of a first object that triggers the generation of a compute kernel, (ii) release of the last reference to a second object, and (iii) performance hints from the application that specify when and how the compute kernel should be generated for the respective operation requests.

11.
The parallel-processing computer system of claim 9, wherein the predefined compilation condition includes at least one of (i) receiving a data access operation request from the application, (ii) receiving a release of the last reference to an object, and (iii) the number of received operation requests exceeding a predetermined minimum number of operation requests.

12. The parallel-processing computer system of claim 9, further including:
instructions for performing a lookup of the second program cache for a pre-generated version of the compute kernel for the respective operation requests and reusing the pre-generated version of the compute kernel;
instructions for determining whether the predefined compilation condition is met if the pre-generated version of the compute kernel is not found in the second program cache;
instructions for performing a lookup of the first program cache for a pre-generated version of the compute kernel for the respective operation requests and reusing the pre-generated version of the first compute kernel; and
instructions for identifying a respective processing element for the compute kernel and generating the compute kernel for the respective operation requests if the pre-generated version of the compute kernel is not found in the first program cache.

13. The parallel-processing computer system of claim 2, wherein at least one of the processing elements is selected from the group of a single-core or multi-core CPU, a GPU, a stream processor, and a coprocessor.

14. The computer program product of claim 3, further including:
instructions for, before the generation of a compute kernel, generating one or more performance attributes for each operation request associated with the compute kernel; and
examining the performance attributes associated with the operation requests to determine whether a predefined compilation condition is met.

15.
The computer program product of claim 14, wherein the performance attributes include at least one of (i) creation of a first object that triggers the generation of a compute kernel, (ii) release of the last reference to a second object, and (iii) performance hints from the application that specify when and how the first and second compute kernels should be generated for the respective operation requests.

16. The computer program product of claim 14, wherein the predefined compilation condition includes at least one of (i) receiving a data access operation request from the application, (ii) receiving a release of the last reference to an object, and (iii) the number of received operation requests exceeding a predetermined minimum number of operation requests.

17. The computer program product of claim 14, further including:
instructions for performing a lookup of the second program cache for a pre-generated version of the compute kernel for the respective operation requests and reusing the pre-generated version of the compute kernel;
instructions for determining whether the predefined compilation condition is met if the pre-generated version of the compute kernel is not found in the second program cache;
instructions for performing a lookup of the first program cache for a pre-generated version of the compute kernel for the respective operation requests and reusing the pre-generated version of the first compute kernel; and
instructions for identifying a respective processing element for the compute kernel and generating the compute kernel for the respective operation requests if the pre-generated version of the compute kernel is not found in the first program cache.

18. The computer program product of claim 3, wherein at least one of the processing elements is selected from the group of a single-core or multi-core CPU, a GPU, a stream processor, and a coprocessor.

19.
A computer-implemented method, comprising: in a multi-thread runtime system running on a parallel-processing computer system that includes multiple processing elements, at least two of which have different instruction set architectures, performing the following at runtime:
receiving one or more operation requests issued by an application;
generating an intermediate representation for the one or more operation requests;
dynamically generating a plurality of compute kernels for the intermediate representation, wherein a first one of the compute kernels is generated for a first subset of the operation requests dynamically chosen by the runtime system, stored in a first program cache for later reuse by the runtime system in lieu of generating a corresponding compute kernel, and is configured to be executed on a first processing element, and a second one of the compute kernels is generated for a second subset of the operation requests pre-specified in the application, stored in a second program cache for later reuse by the runtime system in lieu of generating a corresponding compute kernel, and is configured to be executed on a second processing element, the first and second processing elements having different instruction set architectures; and
executing the first and second compute kernels on the corresponding first and second processing elements of the parallel-processing computer system.

20. The method of claim 19, further comprising: receiving the one or more operation requests via an application programming interface associated with the runtime system.

21. The method of claim 19, wherein generating the compute kernels includes: including in one of the compute kernels at least one pre-compiled intrinsic operation, wherein the intrinsic operation corresponds to a program routine that is optimized for execution on a respective one of the multiple processing elements.

22.
The method of claim 19, further comprising: at runtime:
dynamically compiling a primitive operation and storing the dynamically compiled primitive operation in a primitive library for reuse;
wherein generating the compute kernels includes retrieving from the primitive library and including in one of the compute kernels the dynamically-compiled primitive operation.
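Claims 1, 6, and 7 together describe a dispatch flow: operation requests accumulate, a pre-specified ("second") program cache is consulted first, then a compilation condition gates a lookup in the dynamically-chosen ("first") program cache, and only on a miss is a new compute kernel generated and stored for reuse. The following Python sketch illustrates that flow under stated assumptions; every identifier (`RuntimeSystem`, `get_kernel`, `compile_kernel`, and so on) is a hypothetical illustration, not a name from the patent, and real kernel generation would target a specific processing element rather than return a Python callable.

```python
# Hypothetical sketch of the dual-program-cache dispatch described in
# claims 1, 6, and 7. Names and structure are illustrative assumptions.

class RuntimeSystem:
    def __init__(self, min_requests=4):
        self.dynamic_cache = {}       # "first program cache": dynamically chosen subsets
        self.prespecified_cache = {}  # "second program cache": application-specified subsets
        self.pending = []             # operation requests received so far
        self.min_requests = min_requests

    def receive(self, request):
        """Receive an operation request issued by the application."""
        self.pending.append(request)

    def compilation_condition_met(self):
        # Claim 6, case (iii): enough operation requests have accumulated.
        return len(self.pending) >= self.min_requests

    def get_kernel(self, requests, prespecified):
        key = tuple(requests)
        # Claim 7: first look up the second (pre-specified) program cache.
        if key in self.prespecified_cache:
            return self.prespecified_cache[key]
        # Only proceed if the predefined compilation condition is met.
        if not self.compilation_condition_met():
            return None  # defer compilation
        # Then look up the first (dynamic) program cache.
        if key in self.dynamic_cache:
            return self.dynamic_cache[key]
        # Miss in both caches: generate the kernel and store it for
        # later reuse in lieu of regenerating it.
        kernel = self.compile_kernel(requests)
        cache = self.prespecified_cache if prespecified else self.dynamic_cache
        cache[key] = kernel
        return kernel

    def compile_kernel(self, requests):
        # Stand-in for code generation targeting a processing element.
        return lambda: [("executed", r) for r in requests]
```

A second call to `get_kernel` with the same request subset returns the cached kernel object rather than compiling again, which is the reuse behavior the storing steps of claim 1 describe.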
Patents Cited by This Patent (46)
Wu, Gansha; Lueh, Guei Yuan; Shi, Xiaohua, Apparatus and methods for restoring synchronization to object-oriented software applications in managed runtime environments.
Tang, Jun; So, John Ling Wing, Computer operating process allocating tasks between first and second processors at run time based upon current processor load.
Alain Charles Azagury IL; Michael Factor IL; Gera Goft IL; Shlomit Pinter IL; Esther Yeger-Lotem IL, Group communication system with flexible member model.
Kielstra,Allan Henry; Stepanian,Levon Sassoon; Stoodley,Kevin Alexander, Method and apparatus for transforming Java Native Interface function calls into simpler operations during just-in-time compilation.
Gupta, Rajiv; Worley, Jr., William S., Out-of-order execution using encoded dependencies between instructions in queues to determine stall values that control.
Wright, Gregory M.; Wolczko, Mario I.; Seidl, Matthew L., Reducing the overhead involved in executing native code in a virtual machine through binary reoptimization.
Spix, George A. (Eau Claire, WI); Wengelski, Diane M. (Eau Claire, WI); Hawkinson, Stuart W. (Eau Claire, WI); Johnson, Mark D. (Eau Claire, WI); Burke, Jeremiah D. (Eau Claire, WI); Thompson, Keith J. (Eau Claire, WI), System and method for controlling a highly parallel multiprocessor using an anarchy based scheduler for parallel execution.
Craig Chambers ; Susan J. Eggers ; Brian K. Grant ; Markus Mock ; Matthai Philipose, System and method for performing selective dynamic compilation using run-time information.
Demetriou, Christopher G.; Papakipos, Matthew N.; Gibbs, Noah L., Systems and methods for debugging an application running on a parallel-processing computer system.
Crutchfield, William Y.; Grant, Brian K.; Papakipos, Matthew N., Systems and methods for dynamically choosing a processing element for a compute kernel.
Ankireddipally, Lakshmi Narasimha; Yeh, Ryh-Wei; Nichols, Dan; Devesetti, Ravi, Transaction data structure for process communications among network-distributed applications.
Peacock, Andrew J.; Couris, Cheryl; Storm, Christina; Netz, Amir; Cheung, Chiu Ying; Flasko, Michael J.; Grealish, Kevin; Della-Libera, Giovanni M.; Carlson, Sonia P.; Heninger, Mark W.; Bach, Paula M.; Nettleton, David J., Job scheduling and monitoring in a distributed computing environment.
Greyzck, Terry D.; Fulton, William R.; Oehmke, David W.; Elsesser, Gary W., Mapping vector representations onto a predicated scalar multi-threaded system.
Mitra, Surath; Pawar, Kiran, Simultaneous utilization of a first graphics processing unit (GPU) and a second GPU of a computing platform through a virtual machine (VM) in a shared mode and a dedicated mode respectively.
Sager, David J.; Sasanka, Ruchira; Gabor, Ron; Raikin, Shlomo; Nuzman, Joseph; Peled, Leeor; Domer, Jason A.; Kim, Ho-Seop; Wu, Youfeng; Yamada, Koichi; Ngai, Tin-Fook; Chen, Howard H.; Bobba, Jayaram; Cook, Jeffery J.; Shaikh, Omar M.; Srinivas, Suresh, Systems, apparatuses, and methods for a hardware and software system to automatically decompose a program to multiple parallel threads.
Sasanka, Ruchira; Das, Abhinav; Cook, Jeffrey J.; Bobba, Jayaram; Krishnaswamy, Arvind; Sager, David J.; Srinivas, Suresh, Systems, apparatuses, and methods for a hardware and software system to automatically decompose a program to multiple parallel threads.
Bobba, Jayaram; Sasanka, Ruchira; Cook, Jeffrey J.; Das, Abhinav; Krishnaswamy, Arvind; Sager, David J.; Agron, Jason M., Using control flow data structures to direct and track instruction execution.