Runtime optimization of an application executing on a parallel computer
IPC Classification
Country/Type: United States (US) patent, granted
IPC (7th edition): G06F-009/46; G06F-015/16
Application number: US-0760111 (filed 2010-04-14)
Registration number: US-8365186 (granted 2013-01-29)
Inventors: Faraj, Daniel A.; Smith, Brian E.
Applicant: International Business Machines Corporation
Citation information: cited by 15 patents; cites 29 patents
Abstract
Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.
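The decision flow described in the abstract can be sketched as follows. This is a minimal illustration of the logic only, not the patented implementation: all names (`dispatch`, `is_root_based`, `all_nodes_agree`) are invented for the sketch, and the per-node call-site identifiers are passed in as a plain list rather than gathered by a real collective.

```python
# Sketch of the runtime decision flow from the abstract. Illustrative only;
# function and variable names are not taken from the patent.

# Root-based collectives have a designated root rank (see claim 2).
ROOT_BASED_OPS = {"broadcast", "scatter", "gather", "reduce"}

def is_root_based(op_name):
    return op_name in ROOT_BASED_OPS

def all_nodes_agree(call_sites):
    # Stand-in for the single collective operation that checks whether
    # every compute node saw the operation at the same call site.
    return min(call_sites) == max(call_sites)

def dispatch(op_name, call_sites):
    """Decide whether to run the collective inside a tuning session.

    `call_sites` holds the call-site identifier reported by each node.
    """
    if not is_root_based(op_name):
        return "tuned"        # non-root-based: always establish a session
    if all_nodes_agree(call_sites):
        return "tuned"        # root-based, but same call site everywhere
    return "untuned"          # call sites differ: skip the tuning session
```

For example, `dispatch("reduce", [0x4008, 0x4008, 0x40a0])` falls into the last branch, because a root-based operation reached from different call sites cannot safely share one tuning session.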
Representative Claims
1. A method of runtime optimization of an application executing on a parallel computer, the parallel computer having a plurality of compute nodes organized into a communicator, the method comprising: identifying, by each compute node during application runtime, a collective operation within the application; identifying, by each compute node, a call site of the collective operation in the application; determining, by each compute node, whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session administered by a self tuning module for the collective operation in dependence upon an identifier of the call site of the collective operation and executing the collective operation in the tuning session; if the collective operation is root-based, determining, through use of a single other collective operation, whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes executing the application identified the collective operation at the same call site, establishing a tuning session administered by the self tuning module for the collective operation in dependence upon the identifier of the call site of the collective operation and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.

2. The method of claim 1 wherein a root-based collective operation comprises one of: a broadcast operation, a scatter operation, a gather operation, or a reduce operation.

3. The method of claim 1 wherein identifying a collective operation within the application further comprises identifying a collective operation to tune in dependence upon a hint comprising an attribute of a function call that passes through the communicator to the self tuning module to indicate whether to tune the collective operation.

4. The method of claim 1 wherein identifying the call site of the collective operation in the application further comprises calling a traceback function and receiving as a return from the traceback function a unique memory address for the collective operation.

5. The method of claim 1 wherein determining whether all compute nodes executing the application identified the collective operation at the same call site further comprises performing on all the compute nodes of the communicator a single ‘allreduce’ collective operation to identify the minimum and maximum values of all of the identified call sites.

6. The method of claim 1 further comprising: selecting, for a particular collective operation of the application in dependence upon one or more tuning sessions for the particular collective operation, one or more algorithms to carry out the particular collective operation upon subsequent executions of the application, the one or more algorithms representing an optimized set of algorithms to carry out the particular collective operation; recording the one or more selected algorithms; and during a subsequent execution of the application and without performing another tuning session, carrying out the particular collective operation of the application with the recorded selected algorithms.

7. The method of claim 6 wherein recording the one or more selected algorithms from the tuning session further comprises recording, in association with the one or more selected algorithms, an identifier of the call site for the particular collective operation, a message size, and a communicator identifier.

8. The method of claim 6 wherein: recording the one or more selected algorithms from the tuning session further comprises identifying any of the tuned collective operations that are non-critical collective operations; and carrying out the particular collective operation of the application with the recorded selected algorithms further comprises carrying out the non-critical collective operations with standard messaging module algorithms.

9. An apparatus for runtime optimization of an application executing on a parallel computer, the parallel computer having a plurality of compute nodes organized into a communicator, the apparatus comprising a computer processor and a computer memory operatively coupled to the computer processor, the computer memory having disposed within it computer program instructions capable of: identifying, by each compute node during application runtime, a collective operation within the application; identifying, by each compute node, a call site of the collective operation in the application; determining, by each compute node, whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session administered by a self tuning module for the collective operation in dependence upon an identifier of the call site of the collective operation and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes executing the application identified the collective operation at the same call site, establishing a tuning session administered by the self tuning module for the collective operation in dependence upon the identifier of the call site of the collective operation and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.

10. The apparatus of claim 9 wherein a root-based collective operation comprises one of: a broadcast operation, a scatter operation, a gather operation, or a reduce operation.

11. The apparatus of claim 9 wherein identifying a collective operation within the application further comprises identifying a collective operation to tune in dependence upon a hint comprising an attribute of a function call that passes through the communicator to the self tuning module to indicate whether to tune the collective operation.

12. The apparatus of claim 9 wherein identifying the call site of the collective operation in the application further comprises calling a traceback function and receiving as a return from the traceback function a unique memory address for the collective operation.

13. The apparatus of claim 9 wherein determining whether all compute nodes executing the application identified the collective operation at the same call site further comprises performing on all the compute nodes of the communicator an ‘allreduce’ collective operation to identify the minimum and maximum values of all of the identified call sites.

14. The apparatus of claim 9 further comprising computer program instructions capable of: selecting, for a particular collective operation of the application in dependence upon one or more tuning sessions for the particular collective operation, one or more algorithms to carry out the particular collective operation, the one or more algorithms representing an optimized set of algorithms to carry out the particular collective operation; recording the one or more selected algorithms; and during a subsequent execution of the application and without performing another tuning session, carrying out the particular collective operation of the application with the recorded selected algorithms.

15. The apparatus of claim 14 wherein recording the one or more selected algorithms from the tuning session further comprises recording, in association with the one or more selected algorithms, an identifier of the call site for the particular collective operation, a message size, and a communicator identifier.

16. The apparatus of claim 14 wherein: recording the one or more selected algorithms from the tuning session further comprises identifying any of the tuned collective operations that are non-critical collective operations; and carrying out the particular collective operation of the application with the recorded selected algorithms further comprises carrying out the non-critical collective operations with standard messaging module algorithms.

17. A computer program product for runtime optimization of an application executing on a parallel computer, the parallel computer having a plurality of compute nodes organized into a communicator, the computer program product disposed in a computer readable storage medium, the computer program product comprising computer program instructions capable of: identifying, by each compute node during application runtime, a collective operation within the application; identifying, by each compute node, a call site of the collective operation in the application; determining, by each compute node, whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session administered by a self tuning module for the collective operation in dependence upon an identifier of the call site of the collective operation and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes executing the application identified the collective operation at the same call site, establishing a tuning session administered by the self tuning module for the collective operation in dependence upon the identifier of the call site of the collective operation and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.

18. The computer program product of claim 17 wherein a root-based collective operation comprises one of: a broadcast operation, a scatter operation, a gather operation, or a reduce operation.

19. The computer program product of claim 17 wherein identifying a collective operation within the application further comprises identifying a collective operation to tune in dependence upon a hint comprising an attribute of a function call that passes through the communicator to the self tuning module to indicate whether to tune the collective operation.

20. The computer program product of claim 17 wherein identifying the call site of the collective operation in the application further comprises calling a traceback function and receiving as a return from the traceback function a unique memory address for the collective operation.

21. The computer program product of claim 17 wherein determining whether all compute nodes executing the application identified the collective operation at the same call site further comprises performing on all the compute nodes of the communicator an ‘allreduce’ collective operation to identify the minimum and maximum values of all of the identified call sites.

22. The computer program product of claim 17 further comprising computer program instructions capable of: selecting, for a particular collective operation of the application in dependence upon one or more tuning sessions for the particular collective operation, one or more algorithms to carry out the particular collective operation, the one or more algorithms representing an optimized set of algorithms to carry out the particular collective operation; recording the one or more selected algorithms; and during a subsequent execution of the application and without performing another tuning session, carrying out the particular collective operation of the application with the recorded selected algorithms.

23. The computer program product of claim 22 wherein recording the one or more selected algorithms from the tuning session further comprises recording, in association with the one or more selected algorithms, an identifier of the call site for the particular collective operation, a message size, and a communicator identifier.

24. The computer program product of claim 22 wherein: recording the one or more selected algorithms from the tuning session further comprises identifying any of the tuned collective operations that are non-critical collective operations; and carrying out the particular collective operation of the application with the recorded selected algorithms further comprises carrying out the non-critical collective operations with standard messaging module algorithms.
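Two mechanisms in the claims lend themselves to a short sketch: the min/max 'allreduce' call-site agreement test of claim 5, and the recording of tuned algorithms keyed by call-site identifier, message size, and communicator identifier of claim 7. The allreduce is simulated here with plain `min`/`max` over a list; a real implementation would use something like `MPI_Allreduce` with `MPI_MIN` and `MPI_MAX`. All function names and the `"binomial-tree"` algorithm label are illustrative, not taken from the patent.

```python
# Claim 5: each node reports its call-site identifier; the communicator-wide
# minimum and maximum are computed with a single allreduce. The call sites
# agree exactly when min == max.

def call_sites_agree(local_site, reported_sites):
    lo = min(reported_sites)   # stands in for allreduce(..., op=MIN)
    hi = max(reported_sites)   # stands in for allreduce(..., op=MAX)
    return lo == hi == local_site

# Claim 7: tuning results are recorded against the triple of call-site
# identifier, message size, and communicator identifier, so a later run
# can reuse the selected algorithm without another tuning session.
tuning_record = {}

def record_algorithm(call_site_id, message_size, comm_id, algorithm):
    tuning_record[(call_site_id, message_size, comm_id)] = algorithm

def lookup_algorithm(call_site_id, message_size, comm_id):
    # Returns None when no tuning session has covered this combination.
    return tuning_record.get((call_site_id, message_size, comm_id))
```

Keying on all three values matters because the best algorithm for a collective typically depends on message size and communicator shape, not just on where in the program the call occurs.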
Patents cited by this patent (29)
Lavoie, Martin; Dionne, Carl, Coherent data sharing.
Willis John Christopher ; Newshutz Robert Neill, Compiler-oriented apparatus for parallel compilation, simulation and execution of computer programs and hardware models.
Blackard Joe Wayne ; Gillaspy Richard Adams ; Henthorn William John ; Petersen Lynn Erich ; Russell Lance W. ; Shippy Gary Roy, Data processing system and method for pacing information transfers in a communications network.
Barzilai Tsipora P. (Millwood NY) Chen Mon-Song (Katonah NY) Kadaba Bharath K. (Peekskill NY) Kaplan Marc A. (Purdys NY), Flow control for high speed networks.
Burns, Randal Chilton; Goel, Atul; Long, Darrell D. E.; Rees, Robert Michael, Lease based safety protocol for distributed system with multiple networks.
Richard Alan Diedrich ; Harvey Gene Kiel, Method and apparatus for multimedia data interchange with pacing capability in a distributed data processing system.
Shtayer Ronen (Tel-Aviv ILX) Alon Naveh (Ramat Hasharon ILX) Alexander Joffe (Rehovot ILX), Method and apparatus for pacing asynchronous transfer mode (ATM) data cell transmission.
Levin Vladimir K.,RUX ; Karatanov Vjacheslav V.,RUX ; Jalin Valerii V.,RUX ; Titov Alexandr,RUX ; Agejev Vjacheslav M.,RUX ; Patrikeev Andrei,RUX ; Jablonsky Sergei V.,RUX ; Korneev Victor V.,RUX ; M, Method for deadlock-free message passing in MIMD systems using routers and buffers.
Daruwalla, Feisal; Forster, James R.; Roeck, Guenter E.; Woundy, Richard M.; Thomas, Michael A., Routing protocol based redundancy design for shared-access networks.
Levy Henry M. ; Feeley Michael J.,CAX ; Karlin Anna R. ; Morgan William E. ; Thekkath Chandramohan A., Using global memory information to manage memory in a computer network.
Advani Deepak Mohan ; Byron Michael Justin ; Hansell Steven Robert ; Ming Chun Li Todd ; Marino John Paul ; Panda Rajendra Datta ; Pierce James Andrew ; Wang Ko-Yang ; Weinel Dennis George ; Welch Ro, Visualization tool for graphically displaying trace data.
Advani Deepak Mohan ; Byron Michael Justin ; Hansell Steven Robert ; Li Todd Ming Chun ; Marino John Paul ; Panda Rajendra Datta ; Pierce James Andrew ; Wang Ko-Yang ; Weinel Dennis George ; Welch Ro, Visualization tool for graphically displaying trace data produced by a parallel processing computer.
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E., Improving efficiency of a global barrier operation in a parallel computer.
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E., Performing a deterministic reduction operation in a parallel computer.
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E., Performing a deterministic reduction operation in a parallel computer.
Archer, Charles J.; Peters, Amanda E.; Smith, Brian E., Performing an all-to-all data exchange on a plurality of data buffers by performing swap operations.
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E., Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer.