IPC Classification Information
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.):
Application No.: US-0256946 (filed 1999-02-24)
Inventors / Address:
- Oberwager, Bradford S.
- Moguin, Brian P.
Applicant / Address:
Agent / Address: McConathy, Evelyn H.; Dilworth Paxson LLP
Citation Information: cited by 54 patents; cites 7 patents
Abstract
The present invention relates to a method for developing an ingestible formula through a network that operates according to a hypertext transfer protocol (HTTP). A plurality of first statements inviting a plurality of first responses are received at a client computer system. The associated first responses are then received at the client computer system. A server computer system coupled over the network to the client computer system then receives the first responses. The server computer system processes the first responses according to a relational database to produce the ingestible formula. Various embodiments and features are disclosed.
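The server-side step described in the abstract — matching collected responses against a relational database to produce the formula — can be sketched as follows. This is a minimal illustration, not the patented implementation: the `rules` table schema, the `build_rules_db` / `process_responses` names, and the ingredient data are all assumptions invented for the example.

```python
import sqlite3

def build_rules_db():
    """Create an in-memory relational database mapping questionnaire
    responses to formula ingredients (illustrative data only)."""
    db = sqlite3.connect(":memory:")
    db.execute(
        "CREATE TABLE rules ("
        "statement_id INTEGER, response TEXT, "
        "ingredient TEXT, amount_mg INTEGER)"
    )
    db.executemany(
        "INSERT INTO rules VALUES (?, ?, ?, ?)",
        [
            (1, "yes", "vitamin C", 500),   # hypothetical rule
            (1, "no",  "vitamin C", 60),    # hypothetical rule
            (2, "yes", "calcium", 1000),    # hypothetical rule
        ],
    )
    return db

def process_responses(db, responses):
    """Server-side step: look up each (statement, response) pair in the
    relational database and assemble the ingestible formula."""
    formula = {}
    for statement_id, response in responses:
        row = db.execute(
            "SELECT ingredient, amount_mg FROM rules "
            "WHERE statement_id = ? AND response = ?",
            (statement_id, response),
        ).fetchone()
        if row:
            ingredient, amount = row
            formula[ingredient] = formula.get(ingredient, 0) + amount
    return formula

db = build_rules_db()
print(process_responses(db, [(1, "yes"), (2, "yes")]))
# {'vitamin C': 500, 'calcium': 1000}
```

In the claimed method, the first responses would arrive over HTTP from the client computer system; the sketch skips the network transport and shows only the relational-database lookup that produces the formula.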
Representative Claims
1. A system for authorizing a user to a secure site, the system comprising: (a) a memory unit for storing information including a stored voice print and an identity of each of a plurality of individuals having access to the secured site, said stored voice print of each of said plurality of individuals being generated from a corresponding voice data thereof; (b) a first input device for inputting user information for verifying that the user identifies him- or herself as a specific individual among said plurality of individuals; (c) a second input device for inputting temporary voice data of the user; (d) a first processing unit for generating a temporary voice print from said temporary voice data received from said second input device; and (e) a second processing unit for comparing said temporary voice print to said stored voice print of each of at least a portion of said plurality of individuals to provide a distortion level between said temporary voice print and each said stored voice print of each said individual in said at least a portion of said plurality of individuals, at least said portion of said plurality of individuals including said specific individual, whereby said distortion level is smallest when the user and said specific individual are the same person and less than said distortion level between said temporary voice print and said stored voice print of all other individuals of at least said portion of said plurality of individuals, the user being granted access to the secured site based on said smallest distortion level; wherein said comparing is effected by a voice authentication algorithm selected from the group consisting of a text-dependent and a text-independent voice authentication algorithm.
2. The system of claim 1, wherein said first input device is selected from the group consisting of a keypad and a microphone.
3. The system of claim 1, wherein said first input device communicates with said first processing unit via a communication mode selected from the group consisting of telephone communication, cellular telephone communication, computer network communication and radiofrequency communication.
4. The system of claim 1, wherein said second input device includes a microphone.
5. The system of claim 1, wherein said second input device communicates with said first processing unit via a communication mode selected from the group consisting of telephone communication, cellular telephone communication, computer network communication and radiofrequency communication.
6. The system of claim 1, wherein said first input device and said second input device are integrated into a single input device, whereas said single input device includes a microphone.
7. The system of claim 6, wherein said temporary voice data includes said user information.
8. The system of claim 1, wherein said first processing unit and said second processing unit are integrated into a single processing unit.
9. The system of claim 1, wherein said stored voice print of each of said plurality of individuals has been generated by said first processing unit.
10. The system of claim 1, wherein said voice authentication algorithm is selected from the group consisting of feature extraction followed by pattern matching, a neural network algorithm, a dynamic time warping algorithm, the hidden Markov algorithm and a vector quantization algorithm.
11. The system of claim 1, wherein said first processing unit processes said user information so as to validate that the user identifies him- or herself as a specific individual of said plurality of individuals prior to generating said temporary voice print.
12. The system of claim 1, wherein said plurality of individuals includes at least 10 individuals.
13. The system of claim 1, wherein said corresponding voice data of each of said plurality of individuals includes a plurality of independent voice data inputs.
14. The system of claim 13, wherein said stored voice print of each of said plurality of individuals is generated from at least one of said plurality of independent voice data inputs.
15. The system of claim 1, wherein said first processing unit also extracts at least one voice feature from said temporary voice data.
16. The system of claim 1, wherein the secure site is selected from the group consisting of a virtual site and a physical site.
17. The system of claim 1, wherein said virtual site is a World Wide Web site.
18. A method of authorizing a user access to a secure site, the method comprising the steps of: (a) providing a memory unit for storing information including a stored voice print and an identity of each of a plurality of individuals having access to the secured site, said stored voice print of each of said plurality of individuals being generated from a corresponding voice data thereof; (b) collecting user information, provided by the user, for verifying that the user identifies him- or herself as a specific individual among said plurality of individuals; (c) processing temporary voice data collected from the user into a temporary voice print; (d) comparing said temporary voice print with said stored voice print of each of at least a portion of said plurality of individuals to provide a distortion level between said temporary voice print and each said stored voice print of each said individual in said at least a portion of said plurality of individuals, at least said portion of said plurality of individuals including said specific individual, said distortion level being smallest when the user and said specific individual are the same person, and less than said distortion level between said temporary voice print and said stored voice print of all other individuals of at least said portion of said plurality of individuals; and (e) identifying said smallest distortion level and granting the user access to the secure site based on said identification; wherein said comparing is effected by a voice authentication algorithm selected from the group consisting of a text-dependent and a text-independent voice authentication algorithm.
19. The method of claim 18, wherein said user information is provided via an input device selected from the group consisting of a keypad and a microphone.
20. The method of claim 18, wherein said user information is provided via an input device selected from the group consisting of telephone communication, cellular telephone communication, computer network communication and radiofrequency communication.
21. The method of claim 18, wherein said temporary voice data is collected by a microphone.
22. The method of claim 18, wherein said temporary voice data is collected by an input device selected from the group consisting of telephone communication, cellular telephone communication, computer network communication and radiofrequency communication.
23. The method of claim 18, wherein said user information and said temporary voice data are collected by a single input device, whereas said single input device includes a microphone.
24. The method of claim 23, wherein said temporary voice data includes said user information.
25. The method of claim 18, wherein steps (c) and (d) are effected by a single processing unit.
26. The method of claim 18, wherein said stored voice print of each of said plurality of individuals has been generated by said first processing unit.
27. The method of claim 18, wherein said voice authentication algorithm is selected from the group consisting of feature extraction followed by pattern matching, a neural network algorithm, a dynamic time warping algorithm, the hidden Markov algorithm and a vector quantization algorithm.
28. The method of claim 18, further comprising the step of validating that the user has identified him- or herself as said specific individual of said plurality of individuals prior to said step of processing temporary voice data collected from the user into a temporary voice print.
29. The method of claim 18, wherein said plurality of individuals includes at least 10 individuals.
30. The method of claim 18, wherein said corresponding voice data of each of said plurality of individuals includes a plurality of independent voice data inputs.
31. The method of claim 30, wherein said stored voice print of each of said plurality of individuals is generated from at least one of said plurality of independent voice data inputs.
32. The method of claim 18, wherein said step of processing said temporary voice data collected from the user into said temporary voice print also includes extracting at least one voice feature from said temporary data.
33. The method of claim 18, wherein the secure site is selected from the group consisting of a virtual site and a physical site.
34. The method of claim 18, wherein said virtual site is a World Wide Web site.
35. A system for authorizing a user to a secure site, the system comprising: (a) a memory unit for storing information including a stored voice print and an identity of each of a plurality of individuals having access to the secured site, said stored voice print of each of said plurality of individuals being generated from a corresponding voice data thereof; (b) a first input device for inputting user information for verifying that the user identifies him- or herself as a specific individual among said plurality of individuals; (c) a second input device for inputting temporary voice data of the user; (d) a first processing unit for generating a temporary voice print from said temporary voice data received from said second input device; and (e) a second processing unit for comparing said temporary voice print to said stored voice print of each of at least a portion of said plurality of individuals to provide a distortion level between said temporary voice print and each said stored voice print of each said individual in said at least portion of said plurality of individuals including said specific individual, access of the user to the secure site being contingent on said distortion level between said temporary voice print and said stored voice print of said specific individual being less than any other said distortion level; wherein said comparing is effected by a voice authentication algorithm selected from the group consisting of a text-dependent and a text-independent voice authentication algorithm.
36. A method of authorizing a user access to a secure site, the method comprising the steps of: (a) storing information including a stored voice print and an identity of each of a plurality of individuals having access to the secure site, said stored voice print of each of said plurality of individuals being generated from a corresponding voice data thereof; (b) collecting user information, provided by the user, for verifying that the user identifies him- or herself as a specific individual among said plurality of individuals; (c) processing temporary voice data collected from the user into a temporary voice print; (d) comparing said temporary voice print with said stored voice print of each of at least a portion of said plurality of individuals to provide a distortion level between said temporary voice print and each said stored voice print of each said individual in said at least portion of said plurality of individuals, said at least portion of said plurality of individuals including said specific individual; and (e) granting the user access to the secure site only if said distortion level between said temporary voice print and said stored voice print of said specific individual being less than any other said distortion level; wherein said comparing is effected by a voice authentication algorithm selected from the group consisting of a text-dependent and a text-independent voice authentication algorithm.
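The core comparison the claims recite — computing a distortion level between a temporary voice print and each stored voice print, and granting access only when the claimed identity yields the smallest distortion — can be sketched with dynamic time warping, one of the algorithms named in claims 10 and 27. This is a toy sketch under strong simplifying assumptions: voice prints are reduced to 1-D feature sequences, the `dtw_distortion` / `authorize` names and all data are invented for illustration, and a real system would operate on multidimensional acoustic features.

```python
def dtw_distortion(a, b):
    """Dynamic-time-warping distortion between two 1-D feature
    sequences (a stand-in for comparing voice prints)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal accumulated distortion aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def authorize(claimed_id, temp_print, stored_prints):
    """Grant access only when the claimed identity's stored voice print
    yields the smallest distortion against the temporary voice print."""
    distortions = {uid: dtw_distortion(temp_print, vp)
                   for uid, vp in stored_prints.items()}
    best = min(distortions, key=distortions.get)
    return best == claimed_id

# Hypothetical enrolled voice prints (step (a) of the claims).
stored = {"alice": [1.0, 2.0, 3.0, 2.0], "bob": [5.0, 5.5, 6.0, 5.0]}
print(authorize("alice", [1.1, 2.1, 2.9, 2.0], stored))  # True
print(authorize("bob",   [1.1, 2.1, 2.9, 2.0], stored))  # False
```

Note the design point the claims emphasize: the decision is relative, not threshold-based. The user is accepted only if the claimed individual's distortion is the minimum over the compared portion of enrolled individuals, which is why the second call above is refused even though "bob" is a valid enrolled identity.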