Neural network training image generation system
IPC Classification
Country / Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
G06K-009/62
G06K-009/66
G06T-007/00
Application number
US-0584129
(2017-05-02)
Registration number
US-10262236
(2019-04-16)
Inventors / Address
Lim, Ser Nam
Jain, Arpit
Diwinsky, David Scott
Bondugula, Sravanthi
Applicant / Address
General Electric Company
Agent / Address
Flynn, Peter A.
Citation information
Cited by: 0
Patents cited: 8
Abstract
A system that generates training images for neural networks includes one or more processors configured to receive input representing one or more selected areas in an image mask. The one or more processors are configured to form a labeled masked image by combining the image mask with an unlabeled image of equipment. The one or more processors also are configured to train an artificial neural network using the labeled masked image to one or more of automatically identify equipment damage appearing in one or more actual images of equipment and/or generate one or more training images for training another artificial neural network to automatically identify the equipment damage appearing in the one or more actual images of equipment.
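As a rough illustration of the mask-combination step described in the abstract, the sketch below combines a binary image mask with an unlabeled image so that the mask both paints artificial damage and serves as the per-pixel annotation. This is a minimal toy sketch, not the patented implementation; the array shapes, the mask convention (1 = damage area), and the constant damage intensity are assumptions made for illustration only.

```python
import numpy as np

def form_labeled_masked_image(unlabeled_image, image_mask, damage_value=255):
    """Combine a binary image mask with an unlabeled equipment image.

    The mask selects the areas where an artificial anomaly appears;
    everywhere else the unlabeled image is left unchanged. The mask
    itself doubles as the per-pixel label (1 = artificial damage,
    0 = unchanged portion of the unlabeled image).
    """
    masked = unlabeled_image.copy()
    masked[image_mask == 1] = damage_value  # paint artificial damage
    labels = image_mask.astype(np.uint8)    # per-pixel annotation
    return masked, labels

# Toy example: a 4x4 grayscale image with one selected 2x2 damage area.
image = np.full((4, 4), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1
masked_image, labels = form_labeled_masked_image(image, mask)
```

In a real pipeline the constant `damage_value` would be replaced by a sampled damage texture (e.g. drawn from a statistical distribution of crack or spall pixel intensities, as the dependent claims suggest).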
Representative claims
1. A system comprising: one or more processors configured to receive input representing one or more selected areas in an image mask, the one or more processors configured to form a labeled masked image by combining the image mask with an unlabeled image of equipment, wherein the one or more processors are configured to train an artificial neural network using the labeled masked image to one or more of automatically identify equipment damage appearing in one or more actual images of equipment or generate one or more training images for training another artificial neural network to automatically identify the equipment damage appearing in the one or more actual images of equipment; and further comprising at least one generative adversarial network (GAN) communicatively coupled to the one or more processors, the at least one generative adversarial network comprising: a generator network; and a discriminator network, the discriminator network for classifying images correctly, wherein the generator network generates at least one fake image that is as close as possible to a probability distribution of the one or more training images and can potentially fool the discriminator network into determining that the at least one fake image is a real image of a subject object of the one or more training images, and wherein the image mask is a binary mask including two different types of areas to appear in the labeled masked image.

2. The system of claim 1, wherein the one or more processors are configured to receive the input representing locations of where artificial anomalies are to appear on the equipment in the labeled masked image, and wherein the one or more processors comprise at least one of a microprocessor and an integrated circuit.

3. The system of claim 1, wherein the equipment includes a turbine engine and the one or more selected areas indicate locations on the turbine engine where damage to a coating of the turbine engine is to appear in the labeled masked image, and wherein the equipment damage includes at least one of a crack and a spall of the coating of the turbine engine.

4. The system of claim 1, wherein pixels of the labeled masked image are annotated with indications of objects represented by the pixels, and wherein the one or more processors comprise at least one field programmable gate array.

5. The system of claim 1, wherein pixels of the unlabeled image are not annotated with indications of objects represented by the pixels.

6. The system of claim 1, wherein a first type of the types of areas to appear in the labeled masked image is an artificial appearance of damage to the equipment and a second type of the types of areas to appear in the labeled masked image is an unchanged portion of the unlabeled image, and wherein the generator network and the discriminator network interact in a setting of a two-player minimax game to generate the labeled masked image.

7.
A method comprising: receiving input representing one or more selected areas in an image mask; forming a labeled masked image by combining the image mask with an unlabeled image of equipment; and training an artificial neural network using the labeled masked image to one or more of automatically identify equipment damage appearing in one or more actual images of equipment or generate one or more training images for training another artificial neural network to automatically identify the equipment damage appearing in the one or more actual images of equipment, and further comprising: at least one generative adversarial network (GAN) communicatively coupled to the one or more processors, the at least one generative adversarial network comprising: a generator network; and a discriminator network, the discriminator network for classifying images correctly, wherein the discriminator network sends a signal to the generator network indicating that the labeled masked image does not depict the one or more objects in the one or more actual images of equipment, wherein the generator network, in response to receiving the signal from the discriminator network, changes how the labeled masked image is generated, and wherein a first type of the types of areas to appear in the labeled masked image is an artificial appearance of damage to the equipment and a second type of the types of areas to appear in the labeled masked image is an unchanged portion of the unlabeled image.

8. The method of claim 7, wherein the input that is received represents locations of where artificial anomalies are to appear on the equipment in the labeled masked image, and wherein forming a labeled masked image further comprises forming a labeled masked image based on at least one statistical distribution of at least one of at least one color of a pixel of the labeled masked image and at least one intensity of a pixel of the labeled masked image.

9.
The method of claim 8, wherein the equipment includes a turbine engine and the one or more selected areas indicate locations on the turbine engine where damage to a coating of the turbine engine is to appear in the labeled masked image, and wherein the at least one statistical distribution further comprises at least one Gaussian distribution.

10. The method of claim 7, further comprising: providing at least one loss function, wherein the at least one loss function represents a confidence that the labeled masked image depicts one or more objects in the one or more actual images of equipment, wherein pixels of the labeled masked image are annotated with indications of objects represented by the pixels.

11. The method of claim 10, further comprising: determining that the labeled masked image depicts the one or more objects in the one or more actual images of equipment if the loss function is less than or equal to a predetermined threshold, wherein pixels of the unlabeled image are not annotated with indications of objects represented by the pixels.

12. The method of claim 10, further comprising: determining that the labeled masked image does not depict the one or more objects in the one or more actual images of equipment if the loss function is greater than the predetermined threshold, wherein the image mask is a binary mask including two different types of areas to appear in the labeled masked image.

13.
A system comprising: one or more processors configured to receive an actual image of equipment, the actual image not including annotations of what object is represented by each pixel in the actual image, the one or more processors also configured to obtain an image mask, the image mask representing one or more selected areas where damage to the equipment is to appear, the one or more processors configured to generate a labeled masked image by combining the actual image with the image mask, wherein the labeled masked image includes annotations of what object is represented by plural pixels in the one or more selected areas from the image mask; and at least one camera communicatively coupled to the one or more processors, wherein the at least one camera provides the actual image of equipment to the one or more processors; and wherein a first type of the types of areas to appear in the labeled masked image is an artificial appearance of damage to the equipment and a second type of the types of areas to appear in the labeled masked image is an unchanged portion of the unlabeled image, and wherein the labeled masked image comprises at least one artificial anomaly, the at least one artificial anomaly superimposed onto at least one unlabeled image, the at least one artificial anomaly configured to at least one of: replace, occlude view of, entirely overlap, and partially overlap at least one real anomaly in the actual image.

14.
The system of claim 13, further comprising: at least one controller communicatively coupled to the one or more processors, the at least one controller configured to control the operation of at least one powered system; at least one memory communicatively coupled to the one or more processors, the at least one memory configured to store the actual image; and at least one output device communicatively coupled to the one or more processors, wherein the one or more processors are configured to train an artificial neural network using the labeled masked image to automatically identify equipment damage appearing in one or more additional images of equipment.

15. The system of claim 14, wherein the one or more processors are configured to generate one or more training images for training an artificial neural network to automatically identify equipment damage appearing in the one or more additional images of equipment, and wherein the at least one powered system comprises an automated robotic system for repairing a component of the equipment.

16. The system of claim 15, wherein the equipment includes a turbine engine and the one or more selected areas indicate locations on the turbine engine where damage to a coating of the turbine engine is to appear in the labeled masked image, and wherein the automated robotic system is configured to spray an additive onto a coating of the component of the equipment.

17. The system of claim 14, wherein the image mask is a binary mask including two different types of areas to appear in the labeled masked image, wherein the at least one powered system comprises a vehicle, and wherein the controller changes the direction of the vehicle to avoid a collision with an object identified by the system.
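Claims 6 and 7 describe the generator and discriminator interacting in a two-player minimax game, with the discriminator's signal driving the generator to change how it generates, and claims 10-12 describe accepting or rejecting the result against a loss-function threshold. The toy sketch below mimics that feedback loop in one dimension. It is not the patented system: the Gaussian stand-ins for "images", the mean-matching stand-in for a trained discriminator, the learning rate, and the threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN = 5.0  # toy "real" labeled masked images ~ N(5, 1)

def discriminator_loss(fake_batch):
    """Placeholder for a trained discriminator network: squared distance
    between the fake batch mean and the real distribution mean, used as a
    confidence that the fakes match the real distribution."""
    return (fake_batch.mean() - REAL_MEAN) ** 2

mu = 0.0           # the generator's single learnable parameter
threshold = 0.01   # claim 11/12-style acceptance threshold
lr = 0.1

for step in range(1000):
    fake = rng.normal(mu, 1.0, size=64)   # generator produces fakes
    loss = discriminator_loss(fake)
    if loss <= threshold:
        break  # claim 11: generated image accepted as realistic
    # Claim 7: the discriminator's signal makes the generator change
    # how it generates -- nudge mu to reduce the loss.
    grad = 2.0 * (fake.mean() - REAL_MEAN)
    mu -= lr * grad
```

After the loop the generator parameter has been pulled toward the real distribution, and the final loss is at or below the acceptance threshold; in the actual two-player minimax setting both networks are trained jointly rather than the discriminator being a fixed function.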
Patents cited by this patent (8)
Stafford, Richard G. (Chadds Ford, PA); Mickewich, Daniel J. (Arden, DE); Beutel, Jacob (Hockessin, DE), Application of neural networks as an aid in medical diagnosis and general anomaly detection.
Yang, Jianchao; Wang, Zhangyang; Brandt, Jonathan; Jin, Hailin; Shechtman, Elya; Agarwala, Aseem Omprakash, Font recognition and font similarity learning using a deep neural network.
Kotake, Daisuke; Katayama, Akihiro; Sakagawa, Yukio; Endo, Takaaki; Suzuki, Masahiro, Image reproducing method and apparatus for displaying annotations on a real image in virtual space.
Gong, Yunchao; Leung, King Hong Thomas; Toshev, Alexander Toshkov; Ioffe, Sergey; Jia, Yangqing, Ranking approach to train deep neural nets for multilabel image annotation.