Image data representing an image captured by a video endoscopic device is converted from a first color space to a second color space. The image data in the second color space is used to determine the location of features in the image.
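The abstract describes converting image data from a first color space to a second before analyzing it for features. The claims name RGB-family input spaces and hue-based output spaces such as HSV. As a minimal illustration (not the patent's implementation), the sketch below converts 8-bit RGB pixels to HSV with Python's standard `colorsys` module, making per-pixel hue values available for the kind of hue analysis the claims describe.

```python
import colorsys

def rgb_to_hsv_pixels(rgb_pixels):
    """Convert a sequence of 8-bit (R, G, B) tuples to (H, S, V) tuples.

    H, S, and V are each returned in [0, 1); multiply H by 360 for degrees.
    """
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for r, g, b in rgb_pixels]

# Usage: pure red has hue 0; pure green sits a third of the way around the wheel.
hsv = rgb_to_hsv_pixels([(255, 0, 0), (0, 255, 0)])
```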
Representative Claims
1. A method comprising:
accessing, by an application stored in a non-transitory memory of a computing device and executable by a processor, image data representing an image captured by a video endoscopic device, wherein the image data is encoded in a first color space;
converting, by the application, the accessed image data from the first color space to a second color space, wherein the second color space is different from the first color space;
identifying, by the application, a location of a feature in the image by analyzing the image data in the second color space via:
grouping, by the application, pixels in the image data in the second color space into a plurality of groups based on hue values of the pixels;
determining, by the application, a first group of pixels from among the plurality of groups of pixels;
determining, by the application, a second group of pixels from among the plurality of groups of pixels; and
selecting, by the application, one of the first or second group of pixels based on a relative color difference between the first and second groups of pixels;
storing, by the application, segmentation data that indicates the location of the feature in the image, wherein the segmentation data indicates the selected group of pixels; and
displaying, by the application, based on the segmentation data, the image with an indication of the identified location of the feature.

2. The method of claim 1 wherein displaying the image with an indication of the identified location of the feature comprises:
converting, by the application, the image data from the second color space to a third color space; and
displaying, by the application, based on the segmentation data and the image data in the third color space, the image with an indication of the identified location of the feature.

3. The method of claim 1 wherein the feature includes a type of tissue, an anatomical structure, or an external object introduced into a patient.

4. The method of claim 1 wherein identifying the location of the feature in the image by analyzing the image data in the second color space comprises:
generating, by the application, a histogram of hue values of pixels in the image data based on the image data in the second color space; and
identifying, by the application, pixels that fall within a range of hues in the histogram that correspond to the feature, wherein the segmentation data indicates the pixels falling within the range of hues in the histogram that correspond to the feature.

5. The method of claim 1 further comprising:
grouping, by the application, pixels by generating a histogram of hue values of pixels in the image data based on the image data in the second color space;
determining, by the application, a first group of pixels by determining a first set of pixels that fall within a first range of hues in the histogram;
determining, by the application, a second group of pixels by determining a second set of pixels that fall within a second range of hues in the histogram; and
selecting, by the application, one of the first or second group of pixels by selecting one of the first or second set of pixels based on a relative color difference between the first range of hues and the second range of hues.

6. The method of claim 1 wherein the first color space is one of RGB, YUV, YPrPb, or YCrCb.

7. The method of claim 1 wherein the second color space is one of HSV, Lab, or HSY.

8.
A system comprising:
a video endoscopic device configured to:
generate image data representing an image captured by the video endoscopic device, wherein the image data is encoded in a first color space; and
transmit the image data to a computing device; and
a computing device configured to:
receive the image data transmitted by the video endoscopic device;
convert the received image data from the first color space to a second color space, wherein the second color space is different from the first color space;
identify a location of a feature in the image by analyzing the image data in the second color space, wherein, to identify the location of the feature in the image by analyzing the image data in the second color space, the computing device is configured to:
group pixels in the image data in the second color space into groups based on hue values of the pixels;
determine a first group of pixels from among the groups of pixels;
determine a second group of pixels from among the groups of pixels; and
select one of the first or second group of pixels based on a relative color difference between the first and second groups of pixels, wherein the segmentation data indicates the selected group of pixels;
store segmentation data that indicates the location of the feature in the image; and
display, based on the segmentation data, the image on a display device with an indication of the identified location of the feature.

9. The system of claim 8 wherein, to display the image on the display device with an indication of the identified location of the feature, the computing device is configured to:
convert the image data from the second color space to a third color space; and
display, based on the segmentation data and the image data in the third color space, the image on the display device with an indication of the identified location of the feature.

10. The system of claim 8 wherein the feature includes a type of tissue, an anatomical structure, or an external object introduced into a patient.

11. The system of claim 8 wherein, to identify the location of the feature in the image by analyzing the image data in the second color space, the computing device is configured to:
generate a histogram of hue values of pixels in the image data based on the image data in the second color space; and
identify pixels that fall within a range of hues in the histogram that correspond to the feature, wherein the segmentation data indicates the pixels falling within the range of hues in the histogram that correspond to the feature.

12. The system of claim 8 wherein:
to group pixels, the computing device is configured to generate a histogram of hue values of pixels in the image data based on the image data in the second color space;
to determine a first group of pixels, the computing device is configured to determine a first set of pixels that fall within a first range of hues in the histogram;
to determine a second group of pixels, the computing device is configured to determine a second set of pixels that fall within a second range of hues in the histogram; and
to select one of the first or second group of pixels, the computing device is configured to select one of the first or second set of pixels based on a relative color difference between the first range of hues and the second range of hues.

13. The system of claim 8 wherein the first color space is one of RGB, YUV, YPrPb, or YCrCb.

14. The system of claim 8 wherein the second color space is one of HSV, Lab, or HSY.

15.
A method comprising:
accessing, by an application stored in a non-transitory memory of a computing device and executable by a processor, image data representing video captured by a video endoscopic device, wherein the image data is encoded in a first color space;
converting, by the application, the accessed image data from the first color space to a second color space, wherein the second color space is different from the first color space;
identifying, by the application, a location of a landmark feature in the video by analyzing the image data in the second color space;
tracking, by the application, a position of the landmark feature over multiple frames of the image data;
generating, by the application, an anatomical model based on the tracked landmark feature;
determining, by the application, a location of a target anatomical feature in the video based on the anatomical model; and
displaying, by the application, the video with an indication of the location of the target feature.

16. The method of claim 15 wherein determining the location of the target anatomical feature comprises determining, by the application, the location of the target anatomical feature based on the anatomical model and known anatomical relationships between aspects of the model and the target anatomical feature.

17. The method of claim 15 wherein generating an anatomical model based on the tracked landmark feature comprises determining, by the application, movement of the landmark feature based on changes in position of the landmark feature between the multiple frames and generating, by the application, the anatomical model based on the movement of the landmark feature.

18. The method of claim 15 wherein identifying the location of the landmark feature in the video by analyzing the image data in the second color space comprises:
generating, by the application, a histogram of hue values of pixels in the image data based on the image data in the second color space; and
identifying, by the application, pixels that fall within a range of hues in the histogram that correspond to the landmark feature, wherein the segmentation data indicates the pixels falling within the range of hues in the histogram that correspond to the landmark feature.

19. The method of claim 15 wherein identifying the location of the landmark feature in the video by analyzing the image data in the second color space comprises:
grouping, by the application, pixels in the image data in the second color space into groups based on hue values of the pixels;
determining, by the application, a first group of pixels from among the groups of pixels;
determining, by the application, a second group of pixels from among the groups of pixels; and
selecting, by the application, one of the first or second group of pixels based on a relative color difference between the first and second groups of pixels, wherein the segmentation data indicates the selected group of pixels.

20. The method of claim 19 further comprising:
grouping, by the application, pixels by generating a histogram of hue values of pixels in the image data based on the image data in the second color space;
determining, by the application, a first group of pixels by determining a first set of pixels that fall within a first range of hues in the histogram;
determining, by the application, a second group of pixels by determining a second set of pixels that fall within a second range of hues in the histogram; and
selecting, by the application, one of the first or second group of pixels by selecting one of the first or second set of pixels based on a relative color difference between the first range of hues and the second range of hues.

21. The method of claim 15 wherein the first color space is one of RGB, YUV, YPrPb, or YCrCb.

22. The method of claim 15 wherein the second color space is one of HSV, Lab, or HSY.

23. A system comprising:
a video endoscopic device configured to:
generate image data representing video captured by the video endoscopic device, wherein the image data is encoded in a first color space; and
transmit the image data to a computing device; and
a computing device configured to:
receive the image data transmitted by the video endoscopic device;
convert the received image data from the first color space to a second color space, wherein the second color space is different from the first color space;
identify a location of a landmark feature in the video by analyzing the image data in the second color space;
track a position of the landmark feature over multiple frames of the image data;
generate an anatomical model based on the tracked landmark feature;
determine a location of a target anatomical feature in the video based on the anatomical model; and
display the video on a display device with an indication of the location of the target feature.

24. The system of claim 23 wherein, to determine the location of the target anatomical feature, the computing device is configured to determine the location of the target anatomical feature based on the anatomical model and known anatomical relationships between aspects of the model and the target anatomical feature.

25. The system of claim 23 wherein, to generate an anatomical model based on the tracked landmark feature, the computing device is configured to determine movement of the landmark feature based on changes in position of the landmark feature between the multiple frames and generate the anatomical model based on the movement of the landmark feature.

26.
The system of claim 23 wherein, to identify the location of the landmark feature in the video by analyzing the image data in the second color space, the computing device is configured to:
generate a histogram of hue values of pixels in the image data based on the image data in the second color space; and
identify pixels that fall within a range of hues in the histogram that correspond to the landmark feature, wherein the segmentation data indicates the pixels falling within the range of hues in the histogram that correspond to the landmark feature.

27. The system of claim 23 wherein, to identify the location of the landmark feature in the video by analyzing the image data in the second color space, the computing device is configured to:
group pixels in the image data in the second color space into groups based on hue values of the pixels;
determine a first group of pixels from among the groups of pixels;
determine a second group of pixels from among the groups of pixels; and
select one of the first or second group of pixels based on a relative color difference between the first and second groups of pixels, wherein the segmentation data indicates the selected group of pixels.

28. The system of claim 27 wherein:
to group pixels, the computing device is configured to generate a histogram of hue values of pixels in the image data based on the image data in the second color space;
to determine a first group of pixels, the computing device is configured to determine a first set of pixels that fall within a first range of hues in the histogram;
to determine a second group of pixels, the computing device is configured to determine a second set of pixels that fall within a second range of hues in the histogram; and
to select one of the first or second group of pixels, the computing device is configured to select one of the first or second set of pixels based on a relative color difference between the first range of hues and the second range of hues.

29. The system of claim 23 wherein the first color space is one of RGB, YUV, YPrPb, or YCrCb.

30. The system of claim 23 wherein the second color space is one of HSV, Lab, or HSY.
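Several claims above (1, 4, 5 and their system counterparts) describe building a histogram of hue values, grouping pixels into hue ranges, and selecting a group based on a relative color difference. The claims do not fix a concrete selection rule, so the sketch below is one possible reading, with illustrative assumptions: pixels are binned by hue, the two most populated bins serve as the first and second groups, and the group whose hue lies farther from an assumed background/reference hue is selected as the segmentation data. The bin width and reference hue are placeholders, not values from the patent.

```python
from collections import defaultdict

def segment_by_hue(hues, bin_width=10, background_hue=0.0):
    """Select one of the two largest hue groups by relative color difference.

    `hues` holds per-pixel hue angles in degrees [0, 360). Returns the pixel
    indices of the selected group (the claims' "segmentation data").
    Assumes at least two distinct hue groups are present.
    """
    bins = defaultdict(list)                      # histogram: hue bin -> pixel indices
    for i, h in enumerate(hues):
        bins[int(h % 360) // bin_width].append(i)

    # First and second groups: the two most populated hue ranges.
    ranked = sorted(bins.items(), key=lambda kv: len(kv[1]), reverse=True)
    (b1, px1), (b2, px2) = ranked[:2]

    def hue_dist(b):                              # circular distance of bin centre
        centre = b * bin_width + bin_width / 2    # from the reference hue
        d = abs(centre - background_hue) % 360
        return min(d, 360 - d)

    # "Relative color difference" read as: keep the group farther from background.
    return px1 if hue_dist(b1) >= hue_dist(b2) else px2
```

Usage: with three reddish pixels, five greenish pixels, and two bluish pixels, the greenish group is both the largest and farthest from a red (0°) background, so its indices are returned.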
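Claims 15 to 17 and 23 to 25 describe tracking a landmark feature over multiple frames, deriving its movement, and locating a target anatomical feature from an anatomical model. A real anatomical model is beyond a sketch; assuming the landmark has already been segmented in each frame (pixel indices into a row-major image), the toy code below tracks the landmark's centroid across frames, computes its frame-to-frame movement, and places a hypothetical target at a fixed offset from the latest landmark position. The fixed offset stands in for claim 16's "known anatomical relationships" and is purely illustrative.

```python
def track_centroids(frames, width):
    """Centroid (x, y) of the landmark's segmented pixels in each frame.

    `frames` is a list of per-frame pixel-index lists for a row-major
    image of the given width.
    """
    centroids = []
    for pixels in frames:
        xs = [p % width for p in pixels]
        ys = [p // width for p in pixels]
        centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids

def movement(centroids):
    """Frame-to-frame displacement of the landmark (claim 17's 'movement')."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(centroids, centroids[1:])]

def locate_target(centroids, offset=(40.0, -25.0)):
    """Place the target at a fixed offset from the latest landmark position.

    The offset is a placeholder for known anatomical relationships; it is
    not a clinically meaningful value.
    """
    (x, y), (dx, dy) = centroids[-1], offset
    return (x + dx, y + dy)
```

Usage: for a 2x2 landmark that shifts two pixels right between frames of a 10-pixel-wide image, `movement` reports the (2.0, 0.0) displacement and `locate_target` offsets the latest centroid.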