Country / Type | United States (US) Patent, Granted |
---|---|
International Patent Classification (IPC, 7th ed.) | |
Application Number | US-0566515 (2006-12-04) |
Registration Number | US-9442607 (2016-09-13) |
Inventor / Address | |
Applicant / Address | |
Agent / Address | |
Citation Information | Times cited: 0 / Cited patents: 281 |
An interactive input system comprises imaging devices with different viewpoints and having at least partially overlapping fields of view encompassing a region of interest. At least two of the imaging devices have different focal lengths. Processing structure processes image data acquired by the imaging devices to detect the existence of a pointer and determine the location of the pointer within the region of interest.
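The abstract describes a two-stage process: detect a pointer in the image data from each imaging device, then resolve the pointer's location within the region of interest from the devices' differing viewpoints. Below is a minimal sketch of the localization stage, assuming the arrangement recited in the dependent claims (imaging devices adjacent corners of a rectangular touch surface, looking generally across it). The coordinate frame, angle convention, and the `triangulate` function are illustrative assumptions; the patent text reproduced here does not give a specific formula.

```python
import math

def triangulate(width_mm: float, angle_left: float, angle_right: float):
    """Locate a pointer on a rectangular touch surface from two corner cameras.

    Assumed geometry (not stated in the patent text shown here): cameras at the
    top-left corner (0, 0) and top-right corner (width_mm, 0), each reporting
    the angle in radians between the top edge and its line of sight to the
    pointer. Standard two-bearing triangulation gives the contact point.
    """
    t1 = math.tan(angle_left)   # camera at (0, 0): tan(angle) = y / x
    t2 = math.tan(angle_right)  # camera at (width_mm, 0): tan(angle) = y / (width_mm - x)
    if t1 + t2 == 0:
        raise ValueError("degenerate geometry: lines of sight are parallel")
    x = width_mm * t2 / (t1 + t2)
    y = x * t1
    return x, y

# Example: a pointer seen at 45 degrees from both corners of a 1000 mm wide surface
# lands at roughly the centre of the top half, (500, 500).
print(triangulate(1000.0, math.radians(45), math.radians(45)))
```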
1. An interactive input system comprising: imaging devices with different viewpoints and having at least partially overlapping fields of view of a touch surface, at least one of the imaging devices comprising first and second adjacent image sensors with different focal lengths, the first image sensor having a field of view encompassing generally the entirety of the touch surface and the second image sensor having a field of view encompassing only a portion of the touch surface; and processing structure configured to process image data acquired by the imaging devices to determine if a pointer image exists in the image data with a desired level of confidence and generate pointer data, wherein for each of said at least one imaging devices having said first and second adjacent image sensors with different focal lengths, said processing structure configured to generate the pointer data using image data acquired by only one of said first and second adjacent image sensors and to determine the location of a pointer on the touch surface based on the pointer data, wherein for each pointer contact with said touch surface at a location that is proximate the at least one imaging device, the pointer image is determined to exist in the image data with the desired level of confidence in only image data acquired by the first image sensor, and wherein for each pointer contact with said touch surface that is distant from the at least one imaging device, the pointer image is determined to exist in the image data with the desired level of confidence in only image data acquired by the second image sensor.

2. The interactive input system according to claim 1 wherein all of said imaging devices comprise first and second adjacent image sensors with different focal lengths.

3. The interactive input system according to claim 2 wherein the first and second adjacent image sensors are vertically stacked so that their optical axes are in line.

4. The interactive input system according to claim 2 wherein the first and second adjacent image sensors are side-by-side.

5. The interactive input system according to claim 2 wherein the imaging devices are positioned adjacent respective corners of the touch surface, the imaging devices looking generally across said touch surface.

6. The interactive input system according to claim 1 wherein the imaging devices are positioned adjacent respective corners of the touch surface, the imaging devices looking generally across said touch surface.

7. The interactive input system according to claim 1 wherein said desired level of confidence is existence of a pointer image having a size greater than a threshold size.

8. A touch system comprising: a touch surface on which an image is visible; imaging assemblies about the periphery of said touch surface, said imaging assemblies having at least partially overlapping fields of view encompassing said touch surface, each imaging assembly comprising at least two proximate imaging devices with each imaging device having a different focal length, a first of said two proximate imaging devices having a field of view encompassing the entirety of said touch surface and a second of said two proximate imaging devices having a field of view encompassing only a portion of said touch surface; and processing structure configured to process image data generated by the imaging assemblies to determine the location of at least one pointer relative to the touch surface based on image data acquired by the imaging assemblies, wherein each imaging assembly generates said pointer data using image data acquired by only one of the first and second imaging devices thereof, wherein said processing structure is configured to process image data acquired by each imaging device to determine if a pointer image exists in the image data with a desired level of confidence, wherein when the pointer contacts the touch surface proximate a particular imaging assembly, the pointer image is detected with the desired level of confidence in only image data acquired by the first imaging device of the particular imaging assembly, and wherein when the pointer contacts the touch surface distant the particular imaging assembly, the pointer image is detected with the desired level of confidence in only image data acquired by the second imaging device.

9. The touch system according to claim 8 wherein the first and second imaging devices of each imaging assembly are vertically stacked so that their optical axes are in line.

10. The touch system according to claim 8 wherein the first and second imaging devices of each imaging assembly are side-by-side.

11. The touch system according to claim 8 wherein said touch surface is rectangular and wherein imaging assemblies are positioned at least adjacent two opposite corners thereof.

12. An interactive input system comprising: camera assemblies with different viewpoints and having fields of view encompassing a touch surface, each camera assembly comprising at least first and second adjacent image sensors with the image sensors having different focal lengths, the first image sensor having a field of view encompassing generally the entirety of the touch surface and the second image sensor having a field of view encompassing only a portion of the touch surface; and processing structure configured to process image data received from said camera assemblies to determine the position of at least one pointer relative to said touch surface based on image data acquired by the camera assemblies, wherein the image data provided to the processing structure by each camera assembly is based only on image data acquired by one of the first and second adjacent image sensors thereof, wherein said processing structure processes image data acquired by each image sensor to determine if a pointer image is believed to exist in the image data with a desired level of confidence, wherein when the pointer contacts the touch surface proximate a particular camera assembly, the pointer image is detected with the desired level of confidence in only image data acquired by the first image sensor of the particular camera assembly, and wherein when the pointer contacts the touch surface distant the particular camera assembly, the pointer image is detected with the desired level of confidence in only image data acquired by the second image sensor of the particular camera assembly.

13. The interactive input system according to claim 12 wherein said desired level of confidence is existence of a pointer image having a size greater than a threshold size.

14. The interactive input system according to claim 12 wherein the first and second adjacent image sensors of each camera assembly are vertically stacked so that their optical axes are in line.

15. The interactive input system according to claim 12 wherein the first and second adjacent image sensors of each camera assembly are side-by-side.

16. The interactive input system according to claim 12 wherein the camera assemblies are positioned adjacent corners of the touch surface, the first and second adjacent image sensors of each camera assembly looking generally across said touch surface.

17. An interactive input system comprising: imaging assemblies at spaced locations about the periphery of a touch surface, said imaging assemblies having at least partially overlapping fields of view encompassing said touch surface and acquiring images looking generally across said touch surface, each imaging assembly comprising at least two proximate imaging devices with each imaging device having a different focal length, a first of said two proximate imaging devices having a field of view encompassing the entirety of said touch surface and a second of said two proximate imaging devices having a field of view encompassing only a portion of said touch surface, image data acquired by said imaging assemblies being processed to determine the location of at least one pointer relative to the touch surface based on the acquired image data, wherein image data acquired by only one of said first and second imaging devices of each imaging assembly is used to determine the location of the pointer, wherein image data acquired by each imaging device is used to determine if a pointer image is believed to exist in the image data with a desired level of confidence, wherein when the pointer contacts the touch surface proximate a particular imaging assembly, the pointer image is detected with the desired level of confidence in only image data acquired by the first imaging device of the particular imaging assembly, and wherein when the pointer contacts the touch surface distant the particular imaging assembly, the pointer image is detected with the desired level of confidence in only image data acquired by the second imaging device of the particular imaging assembly.

18. The interactive input system according to claim 17 wherein the first and second imaging devices of each imaging assembly are vertically stacked so that their optical axes are in line.

19. The interactive input system according to claim 17 wherein the first and second imaging devices of each imaging assembly are side-by-side.

20. The interactive input system according to claim 17 wherein said touch surface is generally rectangular and wherein imaging assemblies are positioned at least adjacent two corners thereof.

21. The interactive input system according to claim 17 wherein said touch surface is generally rectangular and wherein imaging assemblies are positioned at least adjacent two corners thereof.

22. The interactive input system according to claim 17 wherein the imaging device that best sees the pointer is the imaging device that sees a largest pointer image having a size greater than a threshold size.

23. An interactive input system comprising: camera assemblies with different viewpoints and having fields of view encompassing a touch surface, each camera assembly comprising at least first and second closely positioned image sensors with the image sensors having different focal lengths and acquiring images of said touch surface, the first image sensor having a field of view encompassing the entirety of the touch surface and the second image sensor having a field of view encompassing only a portion of the touch surface, image data acquired by said camera assemblies being processed to determine the location of at least one pointer relative to the touch surface based on the acquired image data, wherein image data acquired by only one of said first and second image sensors of each imaging assembly is used to determine the location of the pointer, wherein image data acquired by each image sensor is used to determine if a pointer image is believed to exist in the image data with a desired level of confidence, wherein when the pointer contacts the touch surface proximate a particular camera assembly, the pointer image is detected with the desired level of confidence in only image data acquired by the first image sensor of the particular camera assembly, and wherein when the pointer contacts the touch surface distant the particular camera assembly, the pointer image is detected with the desired level of confidence in only image data acquired by the second image sensor of the particular camera assembly.

24. The interactive input system according to claim 23 wherein the first and second image sensors of each camera assembly are vertically stacked so that their optical axes are in line.

25. The interactive input system according to claim 23 wherein the first and second image sensors of each camera assembly are side-by-side.

26. The interactive input system according to claim 23 wherein the camera assemblies are positioned adjacent corners of the touch surface, the first and second image sensors of each camera assembly looking generally across said touch surface.

27. An interactive input system comprising: imaging assemblies with different viewpoints and having fields of view encompassing a touch surface, each imaging assembly comprising at least first and second imaging devices with each imaging device having a different focal length, the first imaging device of each imaging assembly having a focal length encompassing the entirety of the touch surface and the second imaging device of each imaging assembly having a focal length encompassing only a portion of the touch surface, wherein image data acquired only by either the first imaging device or by the second imaging device of each imaging assembly is processed to determine the location of at least one pointer within the region of interest, wherein image data acquired by each imaging device is used to determine if a pointer image is believed to exist in the image data with a desired level of confidence, wherein when a pointer contacts the touch surface proximate a particular imaging assembly, the pointer image is detected with the desired level of confidence in only image data acquired by the first imaging device of the particular imaging assembly, and wherein when the pointer contacts the touch surface distant the particular imaging assembly, the pointer image is detected with the desired level of confidence in only image data acquired by the second imaging device of the particular imaging assembly.

28. The interactive input system according to claim 27 wherein the first and second imaging devices of each imaging assembly are vertically stacked so that their optical axes are in line.

29. The interactive input system according to claim 27 wherein the first and second imaging devices of each imaging assembly are side-by-side.

30. The interactive input system according to claim 27 wherein said region of interest is generally rectangular and wherein said imaging assemblies are positioned at least adjacent two corners thereof.
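Claims 1, 7 and 22 tie sensor selection to a confidence test: a pointer image "exists with the desired level of confidence" when its size exceeds a threshold, and for any given contact only one of an assembly's two sensors is expected to pass that test (the wide-field sensor for nearby contacts, the longer-focal-length sensor for distant ones). The sketch below illustrates one plausible reading of that selection rule; the blob-width measure, the threshold value, and the function and field names are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorObservation:
    sensor: str            # "wide" (sees the whole surface) or "tele" (sees only a distant portion)
    blob_width_px: float   # width of the candidate pointer image, 0.0 if nothing was seen

# Hypothetical threshold: the claims only say "a size greater than a threshold size".
MIN_BLOB_WIDTH_PX = 4.0

def select_observation(wide: SensorObservation,
                       tele: SensorObservation) -> Optional[SensorObservation]:
    """Return the single observation that meets the desired level of confidence.

    Per the claims, a nearby contact should pass the size test only in the
    wide-field sensor's data and a distant contact only in the telephoto
    sensor's data, so at most one observation is expected to qualify.
    """
    candidates = [obs for obs in (wide, tele)
                  if obs.blob_width_px > MIN_BLOB_WIDTH_PX]
    if not candidates:
        return None  # no pointer image seen with confidence in either sensor
    # If both somehow qualify, prefer the largest pointer image (cf. claim 22).
    return max(candidates, key=lambda obs: obs.blob_width_px)

# Example: a distant contact is too small in the wide view but clear in the tele view.
chosen = select_observation(SensorObservation("wide", 2.5),
                            SensorObservation("tele", 9.0))
print(chosen.sensor)  # -> tele
```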
IPC | Description |
---|---|
A | Human necessities |
A62 | Life-saving; fire-fighting (ladders E06C) |
A62B | Devices, apparatus or methods for life-saving (valves specially adapted for medical use A61M 39/00; life-saving devices or methods specially adapted for use in water B63C 9/00; diving equipment B63C 11/00; devices specially adapted for use in aircraft, e.g. parachutes or ejector seats, B64D; rescue devices specially adapted for mines E21F 11/00) |
A62B-1/08 | .. with brake mechanisms for the winches or pulleys |