A medical navigation system is provided for acquiring a depth map of a surgical site of interest in a patient. The medical navigation system comprises a camera, a light projecting device, a display, and a controller. The controller has a processor coupled to a memory. The controller is configured to generate a signal provided to the light projecting device to project an edge indicator on the surgical site of interest, generate a signal to operate the camera to perform a focus sweep and capture a plurality of images during the focus sweep where the plurality of images includes the projected edge indicator, receive from the camera data representing the plurality of images captured during the focus sweep, and generate a depth map of the surgical site of interest using the data representing the plurality of images.
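The abstract describes generating the depth map from images captured during a focus sweep, but does not fix an algorithm. A common depth-from-focus approach consistent with the claimed sharpness analysis can be sketched as follows (a minimal illustration; the function name and the Laplacian sharpness measure are assumptions, not taken from the patent):

```python
import numpy as np

def depth_from_focus_sweep(images, focus_depths):
    """Estimate a per-pixel depth map from a focus sweep (illustrative sketch).

    images: sequence of grayscale frames (H, W), one per focus setting.
    focus_depths: focus depth of each frame, same length as `images`.
    Returns an (H, W) array holding, for each pixel, the focus depth of
    the frame in which that pixel was sharpest.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])  # (N, H, W)

    # Local sharpness via a discrete Laplacian: in-focus edges (such as a
    # projected grid pattern) respond strongly; defocused regions respond weakly.
    lap = (
        -4.0 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    sharpness = lap ** 2  # (N, H, W)

    # For each pixel, pick the frame with maximum sharpness and map that
    # frame index back to its focus depth.
    best = np.argmax(sharpness, axis=0)   # (H, W) frame indices
    return np.asarray(focus_depths)[best]  # (H, W) depth map
```

In practice the projected edge indicator gives every surface patch a high-contrast feature, so the per-pixel sharpness maximum is well defined even on texture-poor tissue.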
Representative Claims
1. A medical navigation system for acquiring a depth map of a surgical site of interest in a patient, comprising: a camera for viewing the surgical site of interest and having a depth of field and an adjustable focus; a light projecting device for projecting an edge indicator on the surgical site of interest; a display; and a controller electrically coupled to the camera, the light projecting device, and the display, the controller having a processor coupled to a memory, the controller configured to: generate a signal provided to the light projecting device to project the edge indicator on the surgical site of interest; generate a signal to operate the camera to perform a focus sweep and capture a plurality of images during the focus sweep such that substantially all elements of the surgical site of interest are in focus in at least one of the plurality of images, the plurality of images including the projected edge indicator; receive from the camera data representing the plurality of images captured during the focus sweep; generate a depth map of the surgical site of interest using the data representing the plurality of images; generate a signal to operate the camera to capture a live video feed of the surgical site of interest while a surgical procedure is being performed; receive from the camera data representing the live video feed; cause the display to display the live video feed; and cause the display to display the depth map, the depth map being overlaid on the live video feed, the depth map including a series of contour lines.

2. The medical navigation system according to claim 1, wherein the controller is further configured to: save the data representing the plurality of images in the memory; and save the generated depth map in the memory.

3. The medical navigation system according to claim 1, wherein the projected edge indicator is projected using one of visible light, invisible light, infrared light, and ultraviolet light.

4. The medical navigation system according to claim 1, wherein the projected edge indicator includes a plurality of horizontal parallel lines and a plurality of vertical parallel lines each perpendicular to the plurality of horizontal parallel lines, the plurality of horizontal parallel lines and the plurality of vertical parallel lines arranged in a grid pattern.

5. The medical navigation system according to claim 1, wherein the depth map of the surgical site of interest is generated using the data representing the plurality of images by analyzing image sharpness of features of each of the plurality of images based on the focus depth of each of the plurality of images.

6. The medical navigation system according to claim 5, wherein the depth map of the surgical site of interest is further generated by analyzing sharpness of intersecting perpendicular and horizontal lines in a grid pattern in the plurality of images.

7. The medical navigation system according to claim 1, wherein the depth map of the surgical site of interest is generated by analyzing polarization of light reflected from the surgical site of interest.

8. The medical navigation system according to claim 1, wherein the focus sweep is performed in response to an input provided to the controller using an input device coupled to the controller.

9. The medical navigation system according to claim 1, wherein the light projecting device projects the light through the camera.

10. The medical navigation system according to claim 1, wherein the camera includes a videoscope.

11. The medical navigation system according to claim 1, wherein the controller is further configured to: stitch the focused regions from at least two of the plurality of images together to generate a composite image.

12. A method of acquiring a depth map of a surgical site of interest in a patient, the method performed on a medical navigation system having a camera, a light projecting device, a display, and a controller electrically coupled to the camera, the light projecting device, and the display, the controller having a processor coupled to a memory, the method comprising: projecting with the light projecting device an edge indicator on the surgical site of interest; performing with the camera a focus sweep and capturing a plurality of images during the focus sweep such that substantially all elements of the surgical site of interest are in focus in at least one of the plurality of images, the plurality of images including the projected edge indicator; receiving at the controller from the camera data representing the plurality of images captured during the focus sweep; generating a depth map of the surgical site of interest using the data representing the plurality of images; capturing with the camera a live video feed of the surgical site of interest while a surgical procedure is being performed; receiving at the controller from the camera data representing the live video feed; displaying on the display the live video feed; and displaying the depth map on the display, the depth map being overlaid on the live video feed, the depth map including a series of contour lines.

13. The method according to claim 12, further comprising: saving the data representing the plurality of images in the memory; and saving the generated depth map in the memory.

14. The method according to claim 12, wherein the projected edge indicator is projected using one of visible light, invisible light, infrared light, and ultraviolet light.

15. The method according to claim 12, wherein the projected edge indicator includes a plurality of horizontal parallel lines and a plurality of vertical parallel lines each perpendicular to the plurality of horizontal parallel lines, the plurality of horizontal parallel lines and the plurality of vertical parallel lines arranged in a grid pattern.

16. The method according to claim 12, wherein the depth map of the surgical site of interest is generated using the data representing the plurality of images by analyzing image sharpness of features of each of the plurality of images based on the focus depth of each of the plurality of images.

17. The method according to claim 16, wherein the depth map of the surgical site of interest is further generated by analyzing sharpness of intersecting perpendicular and horizontal lines in the grid pattern in the plurality of images.

18. The method according to claim 12, wherein the depth map of the surgical site of interest is generated by analyzing polarization of light reflected from the surgical site of interest.

19. The method according to claim 12, wherein the focus sweep is performed in response to an input provided to the controller using an input device coupled to the controller.

20. The method according to claim 12, wherein the light projecting device projects the light through the camera.

21. The method according to claim 12, wherein the camera includes a videoscope.

22. The method according to claim 12, further comprising: stitching focused regions from at least two of the plurality of images together to generate a composite image.

23. A medical navigation system for acquiring a depth map of a surgical site of interest in a patient, comprising: a camera for viewing the surgical site of interest; a light projecting device for projecting an edge indicator on the surgical site of interest, the light projecting device having an adjustable focus plane; a display; and a controller electrically coupled to the camera, the light projecting device, and the display, the controller having a processor coupled to a memory, the controller configured to: generate a signal provided to the light projecting device to project the edge indicator on the surgical site of interest and perform a focus sweep of the light projecting device over a range of the adjustable focus plane, the surgical site of interest having a surface contour with a maximum elevation and a minimum elevation and the sweep of the adjustable focus plane spanning a range from the minimum elevation to the maximum elevation; generate a signal to operate the camera and capture a plurality of images during the focus sweep of the light projecting device, the plurality of images including the projected edge indicator; receive from the camera data representing the plurality of images captured during the focus sweep; and generate a depth map of the surgical site of interest using the data representing the plurality of images.
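Claims 11 and 22 recite stitching the focused regions of several sweep images into a composite. A minimal focus-stacking sketch of that idea (NumPy; the function name and sharpness measure are illustrative assumptions, not specified by the claims) could look like:

```python
import numpy as np

def stack_focused_regions(images):
    """Composite an all-in-focus image from a focus sweep (illustrative sketch).

    For each pixel, copy the value from whichever frame is locally sharpest,
    so regions at different elevations all appear in focus. A production
    implementation would smooth the selection mask to avoid visible seams.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])  # (N, H, W)

    # Same Laplacian sharpness score used for depth estimation.
    lap = (
        -4.0 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap ** 2, axis=0)  # sharpest frame index per pixel

    # Gather each pixel from its sharpest frame.
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

The per-pixel "sharpest frame" index is the same quantity a depth map is built from, so the composite image and the depth map can share one sharpness pass; the contour lines of claim 1 could then be drawn from that depth map with any standard contouring routine.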
Patents cited by this patent (7)

- Abramovich, Gil; Harding, Kevin George; Czechowski, Joseph, III; Wheeler, Frederick Wilson, "Apparatus and method for contactless high resolution handprint capture."
- Nayar, Shree; Noguchi, Minori (JPX); Watanabe, Masahiro (JPX), "Apparatus and methods for determining the three-dimensional shape of an object using active illumination and relative blurring in two-images due to defocus."