Efficient 3-D telestration for local robotic proctoring
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition): G06T-015/00; G06K-009/68
Application number: US-0465020 (2009-05-13)
Registration number: US-8830224 (2014-09-09)
Inventors / Address:
Zhao, Wenyi
Wu, Chenyu
Hirvonen, David
Hasser, Christopher J.
Miller, Brian E.
Mohr, Catherine J.
Curet, Myrian J.
Zhao, Tao
Di Maio, Simon
Hoffman, Brian D.
Applicant / Address:
Intuitive Surgical Operations, Inc.
Citation information: cited by 18 patents; cites 60 patents
Abstract
An apparatus is configured to show telestration in 3-D to a surgeon in real time. A proctor is shown one side of a stereo image pair, such that the proctor can draw a telestration line on the one side with an input device. Points of interest are identified for matching to the other side of the stereo image pair. In response to the identified points of interest, regions and features are identified and used to match the points of interest to the other side. Regions can be used to match the points of interest. Features of the first image can be matched to the second image and used to match the points of interest to the second image, for example when the confidence scores for the regions are below a threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points.
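The two-stage matching described in the abstract (try a region match first, fall back to sparse features when the region confidence score is low) can be illustrated with a short, non-authoritative sketch. The Python below assumes grayscale NumPy stereo images and uses OpenCV's normalized cross-correlation; the function names, patch size, and confidence threshold are hypothetical and are not taken from the patent.

```python
# Illustrative sketch of the matching strategy in the abstract: match a region
# (template) around each point of interest first, and fall back to sparse
# feature matching when the region confidence is low. All names and thresholds
# here are hypothetical, not from the patent.
import cv2
import numpy as np

REGION_CONF_THRESHOLD = 0.8   # assumed confidence cutoff
PATCH_HALF = 15               # half-size of the region around a point

def match_region(left, right, pt):
    """Match a small region around pt (x, y) in the left image to the right
    image with normalized cross-correlation; return (offset, confidence)."""
    x, y = int(pt[0]), int(pt[1])
    patch = left[y - PATCH_HALF:y + PATCH_HALF, x - PATCH_HALF:x + PATCH_HALF]
    if patch.shape[0] != 2 * PATCH_HALF or patch.shape[1] != 2 * PATCH_HALF:
        return None, 0.0              # point too close to the image border
    scores = cv2.matchTemplate(right, patch, cv2.TM_CCOEFF_NORMED)
    _, conf, _, (bx, by) = cv2.minMaxLoc(scores)
    offset = np.array([bx + PATCH_HALF - x, by + PATCH_HALF - y], dtype=float)
    return offset, float(conf)

def match_point_of_interest(left, right, pt, feature_matcher):
    """Return the matched location of pt in the right image, or None.

    Regions are tried first; sparse features are consulted only when the
    region confidence score falls below the threshold, mirroring the
    two-stage approach in the abstract."""
    offset, conf = match_region(left, right, pt)
    if offset is not None and conf >= REGION_CONF_THRESHOLD:
        return np.asarray(pt, dtype=float) + offset
    return feature_matcher(left, right, pt)   # fallback; may also return None
```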
Representative Claims
1. A method of three-dimensional telestration for a user, the method comprising: identifying selected points of interest of a first image; displaying the first image on a user display with a first telestration mark at the selected points of interest; selectively matching the selected points of interest of the first image to points of a second image to determine matched points of interest of the second image by computing a plurality of image offsets wherein each of the plurality of image offsets is computed using a different image matching method than all others of the plurality of image offsets, and by using one of the computed plurality of image offsets which is selected according at least partially to one or more confidence scores of the plurality of image offsets; and displaying the second image on the user display with a second telestration mark at the matched points of interest such that the first telestration mark and the second telestration mark are presented to appear in three dimensions to the user.
2. The method of claim 1, wherein the plurality of image offsets includes at least two of a global offset computed by using a coarse-to-fine global offset image matching method, a region offset computed by using a coarse-to-fine region image matching method, and a feature offset computed by using a point image matching method based upon feature detection and matching.
3. The method of claim 2, wherein the global offset is computed by using normalized cross correlations to compare information of the first and second images.
4. The method of claim 2, wherein the region offset is computed by matching a region of the first image to a region of the second image by using at least one of cross-correlation, two-way matching, least squares regression, and non-linear regression.
5. The method of claim 2, wherein the feature offset is computed by matching features of the first image to features of the second image by using at least one of Harris corner detection, scale-space extrema detection, local extrema detection, and scale invariant feature transform.
6. The method of claim 1 further comprising: inputting points of interest of the first image with an input device to draw the first telestration mark, wherein the selected points of interest are identified from the points of interest.
7. The method of claim 1, wherein identifying selected points of interest comprises at least one of selecting raw data points, fitting raw data points to a curve, and interpolating raw data points.
8. The method of claim 7, wherein identifying the selected points of interest comprises identifying the raw data points, such that the identified raw data points of the first image are matched to the second image.
9. The method of claim 7, wherein identifying the points of interest comprises fitting the raw data points to a curve such that the curve comprises the selected points of interest.
10. The method of claim 9, wherein the curve comprises a spline.
11. The method of claim 1, wherein selectively matching comprises at least one of selectively matching regions, selectively matching features, selectively interpolating features, and selectively interpolating previously matched points of interest.
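Claims 2 and 3 refer to a global offset computed with a coarse-to-fine method using normalized cross-correlation. The sketch below shows one conventional way such an offset could be estimated with an image pyramid; it is an illustrative assumption rather than the patent's implementation, and `global_offset_coarse_to_fine` and its parameters are hypothetical.

```python
# Hypothetical coarse-to-fine estimate of a single global offset between the
# two stereo views, using normalized cross-correlation at each pyramid level.
import cv2
import numpy as np

def global_offset_coarse_to_fine(left, right, levels=3, search=8):
    """Estimate a global (dx, dy) offset from the left view to the right view.

    left, right: grayscale images (NumPy arrays) of identical size. The offset
    found at the coarsest pyramid level is doubled and refined at each finer
    level by searching a small window with cv2.TM_CCOEFF_NORMED."""
    pyr_l, pyr_r = [left], [right]
    for _ in range(levels - 1):
        pyr_l.append(cv2.pyrDown(pyr_l[-1]))
        pyr_r.append(cv2.pyrDown(pyr_r[-1]))

    dx, dy = 0, 0
    for l_img, r_img in zip(reversed(pyr_l), reversed(pyr_r)):
        dx, dy = 2 * dx, 2 * dy                    # carry estimate to this level
        h, w = l_img.shape[:2]
        m = search + max(abs(dx), abs(dy)) + 1     # margin keeps crops in bounds
        if h <= 2 * m + 1 or w <= 2 * m + 1:
            continue                               # level too small to refine
        templ = l_img[m:h - m, m:w - m]            # central block of the left view
        th, tw = templ.shape[:2]
        # Search a small window of the right view centered on the carried offset.
        x0, y0 = m + dx - search, m + dy - search
        window = r_img[y0:y0 + th + 2 * search, x0:x0 + tw + 2 * search]
        scores = cv2.matchTemplate(window, templ, cv2.TM_CCOEFF_NORMED)
        _, _, _, (bx, by) = cv2.minMaxLoc(scores)  # best normalized correlation
        dx, dy = dx + bx - search, dy + by - search
    return dx, dy
```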
12. The method of claim 11: wherein region match confidence scores are determined when the regions are selectively matched; wherein feature match scores are determined when the features are selectively matched; wherein the features are selectively matched in response to the region match confidence scores; wherein the features are selectively interpolated in response to the feature match scores; and wherein the previously matched points of interest are selectively interpolated in response to the feature match scores; such that locations of the matched points of interest are determined by using the region match confidence scores and the feature match scores.
13. The method of claim 11: wherein selectively matching comprises selectively matching the regions, in which the regions are identified from the first image in response to the selected points of interest and corresponding matched regions of the second image are determined from the regions; and wherein the regions are matched to the corresponding matched regions to determine locations of the matched points of interest.
14. The method of claim 13, wherein each of the regions from the first image comprises a portion of the first image so as to include at least one of the selected points of interest within the region, each of the corresponding matched regions comprising a portion of the second image so as to include at least one of the matched points of interest within the corresponding matched region of the second image.
15. The method of claim 13: wherein region match confidence scores are determined for each of the regions matched to the second image; and wherein the features are selectively matched for each of the regions having a confidence score below a threshold value.
16. The method of claim 13: wherein selectively matching comprises selectively matching the features; wherein selectively matching the features comprises identifying the features in response to the selected point of interest; and wherein the features are matched to the second image to determine the matched points of interest.
17. The method of claim 16: wherein each of the features is identified with a descriptor comprising at least one vector having a location, an orientation, and a magnitude; and wherein each vector is determined in response to a gradient of intensities of pixels of the first image.
18. The method of claim 16: wherein feature match confidence scores are determined for each of the features matched to the second image; and wherein successfully matched features are interpolated to determine matched points of interest.
19. The method of claim 16: wherein each region comprises a portion of the image; and wherein at least some of the features are located within matched regions having region matching scores below a threshold value.
20. The method of claim 1, wherein the identified points of interest are matched to the second image with a constraint comprising at least one of a focus constraint or an epi-polar constraint.
21. The method of claim 1: wherein the first image and the second image comprise a pair of real time stereoscopic images from a robotic surgery system; and wherein the pair of stereoscopic images are shown to appear in three dimensions to the user.
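Claim 20 evaluates matched points against a focus or epi-polar constraint. The following minimal epipolar check assumes the fundamental matrix F of the calibrated stereo rig is known (for rectified images the constraint reduces to matched points sharing a scanline); the distance threshold and function names are illustrative only.

```python
# Hypothetical epipolar-constraint check (claim 20): a matched point pair is
# kept only if the second-image point lies close to the epipolar line induced
# by the first-image point. F is the 3x3 fundamental matrix of the stereo rig.
import numpy as np

def epipolar_distance(pt_left, pt_right, F):
    """Distance in pixels from pt_right to the epipolar line F @ pt_left."""
    x = np.array([pt_left[0], pt_left[1], 1.0])
    xp = np.array([pt_right[0], pt_right[1], 1.0])
    line = F @ x                        # epipolar line (a, b, c) in the right image
    return abs(xp @ line) / np.hypot(line[0], line[1])

def filter_matches(pts_left, pts_right, F, max_dist=2.0):
    """Drop matched points of interest that violate the epipolar constraint."""
    pairs = list(zip(pts_left, pts_right))
    return [(p, q) for p, q in pairs if epipolar_distance(p, q, F) <= max_dist]
```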
22. The method of claim 1 wherein the first image comprises a first series of real time digital images and the second image comprises a second series of real time digital images, such that the identified points of interest correspond to identified points of the first series of real time digital images and the matched points of interest correspond to matched points of the second series of real time digital images.
23. The method of claim 22, wherein a location of the second image series corresponding to a first matched point of interest of the first series is determined before a second point of interest is identified from the first series.
24. The method of claim 23, further comprising: sequentially highlighting the points of interest of the first series; and determining the corresponding locations of the matched points of interest of the second series while selecting the points of interest of the first series.
25. The method of claim 1: wherein a first portion of the selected points of interest are successfully matched to the second image to determine successfully matched points of interest of the second image; wherein a second portion of the selected points of interest are not successfully matched to the second image and comprise unmatched points of interest of the first image; and wherein the matched points of interest are shown to the user such that the user sees the selected points of interest corresponding to the matched points of interest in three dimensions and such that the user sees selected points of interest corresponding to the unmatched points of interest without three-dimensional viewing.
26. The method of claim 1: wherein the marks fade automatically after a period of time, in which the period of time is programmable by the user; wherein the period of time is within a range from about one second to about ten seconds; and wherein the marks are erased by at least one of the user or a second user before the marks fade.
27. The method of claim 1: wherein a second user inputs telestration marks on the first image, a first portion of the first image corresponds to a valid matching region for matching to the second image, and a second portion of the first image outside the matching region corresponds to a non-matching region; wherein the telestration marks within the matching region are selectively identified for matching to the second image and the telestration marks within the non-matching region are not selectively identified for matching to the second image; and wherein the telestration marks within the second portion are shown with the first image on the display and not shown with the second image on the display.
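Claim 26 describes telestration marks that fade automatically after a user-programmable period of roughly one to ten seconds unless erased earlier. A minimal bookkeeping sketch follows; the class name and the linear fade are illustrative choices, with only the one-to-ten-second clamp taken from the claim.

```python
# Minimal bookkeeping for auto-fading telestration marks (claim 26). Only the
# one-to-ten-second range comes from the claim; the names and the linear alpha
# ramp are illustrative choices.
import time

class TelestrationMark:
    def __init__(self, points, fade_after_s=3.0):
        # Clamp the programmable fade period to the claimed 1-10 s range.
        self.points = points
        self.fade_after_s = min(10.0, max(1.0, fade_after_s))
        self.created_at = time.monotonic()
        self.erased = False

    def erase(self):
        """Explicit erase by either user, before the mark fades on its own."""
        self.erased = True

    def alpha(self, now=None):
        """Opacity in [0, 1] for rendering; 0 once faded or erased."""
        if self.erased:
            return 0.0
        now = time.monotonic() if now is None else now
        remaining = self.fade_after_s - (now - self.created_at)
        return max(0.0, min(1.0, remaining / self.fade_after_s))
```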
28. An apparatus to show three-dimensional telestration to a user, the apparatus comprising: a source of stereo images, each stereo image comprising a first image and a second image; a stereo display comprising a first display and a second display respectively presenting the first image and the second image in three-dimensional appearance to the user; a second user display presenting the first image to a second user; a second user input device configured to receive input from the second user to draw first telestration marks on the first image shown on the second user display; and a processor system configured to receive the input from the second user input device; identify selected points of interest corresponding to the first telestration marks; selectively match the selected points of interest to points of the second image to determine matched points of interest of the second image by computing a plurality of image offsets wherein each of the plurality of image offsets is computed using a different image matching method than all others of the plurality of image offsets, and by using one of the computed plurality of image offsets which is selected according at least partially to one or more confidence scores of the plurality of image offsets, wherein the matched points of interest correspond to second telestration marks; and cause the first and second telestration marks to be displayed on the stereo display such that the first telestration marks and the second telestration marks appear in three dimensions to the first user.
29. The apparatus of claim 28: wherein the processor system further comprises a first user input device; wherein the processor system is coupled to the first user input device such that the first user can control the first and second telestration marks shown on the stereo display; wherein the first user input device comprises a camera control coupled to the processor system such that the user can adjust a camera of the source of the pair of stereo images; and wherein the first and second telestration marks being displayed on the stereo display are erased in response to the first user touching the camera control.
30. The apparatus of claim 28, wherein the plurality of image offsets includes at least two of a global offset computed by using a coarse-to-fine global offset image matching method, a region offset computed by using a coarse-to-fine region image matching method, and a feature offset computed by using a point matching method based upon feature detection and matching.
31. The apparatus of claim 30, wherein the global offset is computed by using normalized cross correlations to compare information of the first and second images.
32. The apparatus of claim 30, wherein the region offset is computed by matching a region of the first image to a region of the second image by using at least one of cross-correlation, two-way matching, least squares regression, and non-linear regression.
33. The apparatus of claim 30, wherein the feature offset is computed by matching features of the first image to features of the second image by using at least one of Harris corner detection, scale-space extrema detection, local extrema detection, and scale invariant feature transform.
34. The apparatus of claim 28, wherein the processor system is configured to perform the selectively matching by at least one of selectively matching regions, selectively matching features, selectively interpolating features, and selectively interpolating previously matched points of interest.
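Claims 4 and 32 list two-way matching among the region-matching techniques. One common reading is a left-right consistency check: a match is accepted only if matching the point back from the second image lands close to where it started. The sketch below takes the forward and backward matchers as plug-in functions; all of it is an assumption, not the claimed implementation.

```python
# Hypothetical two-way (left-to-right and right-to-left) region matching used
# as a confidence check (claims 4 and 32).
import numpy as np

def two_way_match(pt_left, match_lr, match_rl, tol=1.5):
    """Return (pt_right, ok) where ok indicates left-right consistency.

    match_lr(pt) maps a left-image point to the right image and match_rl(pt)
    maps a right-image point back to the left image; both return None on
    failure. Any region matcher (e.g. NCC template matching) can be plugged in."""
    pt_right = match_lr(pt_left)
    if pt_right is None:
        return None, False
    pt_back = match_rl(pt_right)
    if pt_back is None:
        return pt_right, False
    err = np.hypot(pt_back[0] - pt_left[0], pt_back[1] - pt_left[1])
    return pt_right, bool(err <= tol)
```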
35. The apparatus of claim 34: wherein region match confidence scores are determined when the regions are selectively matched; wherein feature match scores are determined when the features are selectively matched; wherein the features are selectively matched in response to the region match confidence scores; wherein the features are selectively interpolated in response to the feature match scores; and wherein the previously matched points of interest are selectively interpolated in response to the feature match scores; such that locations of the matched points of interest are determined by using the region match confidence scores and the feature match scores.
36. The apparatus of claim 34: wherein selectively matching comprises selectively matching the regions, in which the regions are identified from the first image in response to the selected points of interest and corresponding matched regions of the second image are determined from the regions; and wherein the regions are matched to the corresponding matched regions to determine locations of the matched points of interest.
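Claims 11, 18, and 35 mention interpolating successfully matched features (or previously matched points of interest) to place points that could not be matched directly. A simple inverse-distance-weighted interpolation of stereo offsets, shown below with hypothetical names, is one way such an interpolation could work.

```python
# Hypothetical interpolation step (claims 11, 18, 35): when a point of interest
# cannot be matched directly, its offset into the second image is interpolated
# from nearby successfully matched features, weighted by inverse distance.
import numpy as np

def interpolate_offset(pt, feature_pts, feature_offsets, eps=1e-6):
    """Estimate the stereo offset at pt from matched features.

    feature_pts: (N, 2) feature locations in the first image.
    feature_offsets: (N, 2) offsets of those features into the second image.
    Returns the interpolated (dx, dy), or None if no features are available."""
    feature_pts = np.asarray(feature_pts, dtype=float)
    feature_offsets = np.asarray(feature_offsets, dtype=float)
    if len(feature_pts) == 0:
        return None
    d = np.linalg.norm(feature_pts - np.asarray(pt, dtype=float), axis=1)
    w = 1.0 / (d + eps)                      # inverse-distance weights
    return (w[:, None] * feature_offsets).sum(axis=0) / w.sum()
```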
Patents cited by this patent (60)
Weiglhofer, Gerhard; Pauker, Fritz; Henkel, Rolf, 3D stereo real-time sensor system, method and computer program therefor.
Moll, Frederic H.; Rosa, David J.; Ramans, Andris D.; Blumenkranz, Stephen J.; Guthart, Gary S.; Niemeyer, Gunter D.; Nowlin, William C.; Salisbury, Jr., J. Kenneth; Tierney, Michael J.; Mintz, David, Arm cart for telerobotic surgical system.
Niemeyer, Gunter D.; Guthart, Gary S.; Nowlin, William C.; Swarup, Nitish; Toth, Gregory K.; Younge, Robert G., Camera referenced control in a minimally invasive surgical apparatus.
Salisbury, Jr., J. Kenneth; Niemeyer, Gunter D.; Younge, Robert G.; Guthart, Gary S.; Mintz, David S.; Cooper, Thomas G., Devices and methods for presenting and regulating auxiliary information on an image display of a telesurgical system to assist an operator in performing a surgical procedure.
Banker, Robert O.; Ith, Cham; Bacon, Kinney C.; Burleson, David B., Display system for selectively overlaying symbols and graphics onto a video signal.
Cavallaro, Richard H.; Gloudemans, James R.; Lazar, Matthew T.; Meier, Kevin R.; Mozes, Alon; Peon, Roberto J.; Steinberg, Eric M., Enhancing video using a virtual surface.
Wang, Yulun; Uecker, Darrin R.; Jordan, Charles S.; Wright, James W.; Laby, Keith Phillip; Wilson, Jeff D., Method and apparatus for performing minimally invasive cardiac procedures.
Chang, William H. L.; Hameed, Salmaan; Mahadik, Amit A.; Javadekar, Kiran A.; Abello, Oretho F., Multi-function image and video capture device for use in an endoscopic camera system.
Evans, Philip C.; Moll, Frederic H.; Guthart, Gary S.; Nowlin, William C.; Pendleton, Rand P.; Wilson, Christopher P.; Ramans, Andris D.; Rosa, David J.; Falk, Volkmar; Younge, Robert G., Performing cardiac surgery without cardioplegia.
Nowlin, William C.; Guthart, Gary S.; Salisbury, Jr., J. Kenneth; Niemeyer, Gunter D., Repositioning and reorientation of master/slave relationship in minimally invasive telesurgery.
Tierney, Michael J.; Cooper, Thomas; Julian, Chris; Blumenkranz, Stephen J.; Guthart, Gary S.; Younge, Robert G., Surgical robotic tools, data architecture, and use.
Madhani, Akhil J.; Salisbury, J. Kenneth, Wrist mechanism for surgical instrument for performing minimally invasive surgery with enhanced dexterity and sensitivity.
Bartelme, Michael; Crawford, Neil R.; Foster, Mitchell A.; Major, Chris; Theodore, Nicholas, Devices and methods for temporary mounting of parts to bone.
Zhao, Wenyi; Wu, Chenyu; Hirvonen, David; Hasser, Christopher J.; Miller, Brian E.; Mohr, Catherine J.; Zhao, Tao; Di Maio, Simon; Hoffman, Brian D., Efficient 3-D telestration for local and remote robotic proctoring.
Zhao, Tao; Zhao, Wenyi; Hoffman, Brian D.; Nowlin, William C.; Hui, Hua, Efficient vision and kinematic data fusion for robotic surgical instruments and other applications.
LeBoeuf, II, Robert J.; Olenio, Zachary; Yau, James; Crawford, Neil R., Infrared signal based position recognition system for use with a robot-assisted surgery.
Smith, David W.; DeSanctis-Smith, Regina; Pitt, Alan M.; Theodore, Nicholas; Crawford, Neil, Method and system for performing invasive medical procedures using a surgical robot.
Smith, David W.; DeSanctis-Smith, Regina; Pitt, Alan M.; Theodore, Nicholas; Crawford, Neil, Method and system for performing invasive medical procedures using a surgical robot.
Yoon, Sang Jin, Surgical robot system for performing surgery based on displacement information determined by the specification of the user, and method for controlling same.