1. A method of interfacing with an empathetic computing system, the method comprising: receiving sensor data from sensors of an empathetic computing device, wherein the sensor data is generated by user interaction with the empathetic computing device, the user interaction comprising a plurality of interaction units; receiving contextual information associated with the user interaction; classifying the sensor data as a sequence of interaction units using stored associations between exemplary sensor data and pre-determined interaction units; and providing feedback with the empathetic computing device, wherein the feedback is based, at least in part, on the sequence of interaction units and the contextual information, and wherein the providing feedback comprises generating a pattern of lights with a plurality of illumination sources of the empathetic computing device, each light in the pattern corresponding to an interaction unit of the sequence of interaction units.

2. The method of claim 1, wherein the user interaction comprises natural human actions including any of a facial expression, a posture, a vocal utterance, speech, body or body part movement or position, or relative movement or position of the empathetic computing device with respect to the user.

3. The method of claim 1, wherein the receiving sensor data comprises receiving an indication of user proximity, user motion, detected speech, detected facial expression, or an ambient condition.

4. The method of claim 1, further comprising storing a data set corresponding to the sequence of interaction units.

5. The method of claim 1, wherein the contextual information includes, at least in part, information extracted from the sensor data.

6. The method of claim 1, wherein the contextual information includes ambient light level, ambient sound level, ambient temperature, date, time, location of the empathetic computing device, or combinations thereof.

7. The method of claim 1, wherein a color of each light in the pattern is based on the kind of interaction unit.

8. The method of claim 7, wherein the color is further based on the contextual information associated with a respective interaction unit.

9. The method of claim 1, wherein the pattern of lights comprises a spiral of sequentially illuminated LEDs.

10. The method of claim 1, further comprising generating, for at least one interaction unit in the sequence, feedback comprising a pattern of lights with a plurality of illumination sources of the empathetic computing device, wherein the pattern is indicative of content of vocalized speech extracted from the user interaction.

11. The method of claim 10, wherein the generating, for at least one interaction unit in the sequence, feedback comprising a pattern of lights includes generating two distinct patterns of lights, each corresponding to a different word from the vocalized speech.

12. The method of claim 1, wherein the providing feedback comprises generating a pattern of audible sounds with an audio generator of the empathetic computing device, a vibrator of the empathetic computing device, or a combination of the two, one or more of the sounds in the pattern corresponding to an interaction unit from the sequence of interaction units.

13. The method of claim 1, wherein the providing feedback further comprises generating a vibrational response.
14. The method of claim 1, wherein the user interaction is a first user interaction and wherein the sequence of interaction units is a first sequence, the method further comprising generating a second sequence based on sensor data associated with a second user interaction temporally spaced from the first user interaction by a pre-determined duration of time.

15. The method of claim 1, further comprising segmenting the sensor data into a plurality of sensor data sets, each associated with a respective one of a plurality of interaction sessions.

16. The method of claim 15, further comprising: receiving an indication of an interaction-free period having a pre-determined duration; and segmenting the sensor data into a first set of sensor data and a second set of sensor data, the first set of sensor data recorded during a first period of time and the second set of sensor data recorded during a second period temporally spaced from the first period by the interaction-free period.

17. The method of claim 16, further comprising comparing the first set of sensor data with the exemplary sensor data to identify a first interaction unit and comparing the second set of sensor data with the exemplary sensor data to identify a second interaction unit.

18. The method of claim 15, further comprising storing data sets corresponding to sequences of interaction units associated with the plurality of interaction sessions, and updating an interaction model of the empathetic computing device using the stored data sets.

19. The method of claim 18, further comprising transmitting the data sets to a server, and wherein the updating an interaction model of the empathetic computing device is performed by the server.

20. The method of claim 1, further comprising monitoring user interactions with the empathetic computing device during a period of time, including determining a total number of interaction units in a given user interaction or during the period of time, a number of interaction units of a same kind in a given user interaction or during the period of time, a total number and types of sequences of user interactions, or combinations thereof, and characterizing a user pattern or user state based on the monitored interactions.

21. The method of claim 1, wherein receiving sensor data comprises receiving an indication of placement of the empathetic computing device in a palm of the user, the method further comprising activating a microphone, a camera, or both, responsive to the indication of placement of the empathetic computing device in the palm of the user.

22. The method of claim 21, further comprising deactivating the microphone, the camera, or both, responsive to an indication of removal of the empathetic computing device from the palm of the user.
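As a reading aid for the classification and segmentation steps recited in claims 1 and 15-17, the following is a minimal Python sketch of one way such steps could be realized. The feature names, exemplar values, and the 2-second interaction-free gap are assumptions made for illustration and are not taken from the claims.

```python
# Illustrative sketch only: a nearest-exemplar classifier that segments sensor data
# by interaction-free periods and maps each segment to a pre-determined interaction
# unit. All names (SensorFrame, EXEMPLARS, etc.) are hypothetical.
from dataclasses import dataclass
from typing import Dict, List
import math

@dataclass
class SensorFrame:
    timestamp: float              # seconds
    features: Dict[str, float]    # e.g. {"proximity": 0.9, "motion": 0.1, "speech": 0.0}

# Stored associations between exemplary sensor data and pre-determined interaction units.
EXEMPLARS: Dict[str, Dict[str, float]] = {
    "hold_in_palm": {"proximity": 1.0, "motion": 0.2, "speech": 0.0},
    "speak_to_device": {"proximity": 0.8, "motion": 0.1, "speech": 1.0},
    "set_down": {"proximity": 0.1, "motion": 0.0, "speech": 0.0},
}

def segment_by_gap(frames: List[SensorFrame], gap_s: float) -> List[List[SensorFrame]]:
    """Split frames into sets separated by an interaction-free period of at least gap_s."""
    segments: List[List[SensorFrame]] = []
    for frame in frames:
        if not segments or frame.timestamp - segments[-1][-1].timestamp >= gap_s:
            segments.append([])
        segments[-1].append(frame)
    return segments

def classify_segment(segment: List[SensorFrame]) -> str:
    """Match the segment's mean feature vector to the nearest stored exemplar."""
    keys = next(iter(EXEMPLARS.values())).keys()
    mean = {k: sum(f.features.get(k, 0.0) for f in segment) / len(segment) for k in keys}
    def dist(exemplar: Dict[str, float]) -> float:
        return math.sqrt(sum((mean[k] - exemplar[k]) ** 2 for k in keys))
    return min(EXEMPLARS, key=lambda name: dist(EXEMPLARS[name]))

def classify_interaction(frames: List[SensorFrame], gap_s: float = 2.0) -> List[str]:
    """Classify the sensor data as a sequence of interaction units."""
    return [classify_segment(seg) for seg in segment_by_gap(frames, gap_s)]
```

Any distance measure or learned model could stand in for the nearest-exemplar match; the point of the sketch is only the segment-then-classify flow.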
23. An empathetic computing device comprising: a processor; a plurality of sensors configured to generate sensor data based on user interaction with the empathetic computing device, the user interaction comprising a plurality of interaction units; a plurality of light sources; and a memory operatively coupled to the plurality of sensors and the processor, the memory comprising stored associations between exemplary sensor data and pre-determined interaction units, the memory further comprising processor-executable instructions which, when executed by the processor, cause the empathetic computing device to: receive contextual information associated with the user interaction; classify the sensor data as a sequence of interaction units using the stored associations between exemplary sensor data and pre-determined interaction units; and provide feedback based at least in part on the sequence of interaction units and the contextual information, wherein the instructions to provide feedback based at least in part on the sequence of interaction units and the contextual information include instructions to illuminate one or more of the plurality of light sources in a pattern.

24. The empathetic computing device of claim 23, wherein the processor includes an extraction processor configured to receive the sensor data, filter the sensor data, and perform feature extraction on the filtered sensor data.

25. The empathetic computing device of claim 23, wherein the feature extraction comprises extracting features from simultaneously recorded data from a plurality of sensors of different types.

26. The empathetic computing device of claim 23, wherein the feature extraction comprises performing speech and face recognition.

27. The empathetic computing device of claim 23, wherein the pattern corresponds to the sequence of interaction units, a color of each illuminated light source selected based on respective ones of the interaction units in the sequence.

28. The empathetic computing device of claim 23, wherein the processor, the memory, one or more sensors of the plurality of sensors, and one or more light sources of the plurality of light sources are enclosed in an enclosure configured to fit in a palm of a user.

29. The empathetic computing device of claim 23, wherein the plurality of sensors includes a touch sensor, a proximity sensor, an image sensor, a microphone, or combinations thereof.

30. The empathetic computing device of claim 29, wherein the touch sensor includes a touch sensitive surface disposed on a bottom side of the empathetic computing device.

31. The empathetic computing device of claim 23, wherein the plurality of sensors further comprises a plurality of infrared sensors configured to determine proximity of the user to the empathetic computing device.

32. The empathetic computing device of claim 23, wherein the plurality of sensors further comprises at least one light sensor arranged to sense ambient light.

33. An empathetic computing system comprising the empathetic computing device of claim 23, the system further comprising an other computing device communicatively coupled to the empathetic computing device, the other computing device configured to receive from the empathetic computing device and store data including user data, empathetic computing device system data, or combinations thereof, the other computing device further configured to execute an application for visualizing the stored data.
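The light-pattern feedback of claims 7-9 and 27 (one light per interaction unit, with the color keyed to the kind of unit and optionally to context) could be sketched as follows. The color table, the ambient-light dimming rule, and the `set_led` driver hook are hypothetical stand-ins, not elements recited in the claims.

```python
# Illustrative sketch only: mapping a classified sequence of interaction units to a
# pattern of lights, one light per unit, color chosen by kind of unit and scaled by
# an ambient-light context value.
from typing import Callable, List, Tuple

RGB = Tuple[int, int, int]

UNIT_COLORS: dict = {
    "hold_in_palm": (255, 180, 0),     # warm amber
    "speak_to_device": (0, 120, 255),  # blue
    "set_down": (40, 40, 40),          # dim gray
}

def dim_for_context(color: RGB, ambient_light: float) -> RGB:
    """Scale brightness with the ambient light level (0.0 dark .. 1.0 bright)."""
    scale = 0.3 + 0.7 * max(0.0, min(1.0, ambient_light))
    return tuple(int(c * scale) for c in color)  # type: ignore[return-value]

def show_sequence(units: List[str],
                  ambient_light: float,
                  set_led: Callable[[int, RGB], None]) -> None:
    """Sequentially illuminate one LED per interaction unit; LED index 0, 1, 2, ...
    stands in for positions along a spiral of LEDs."""
    for index, unit in enumerate(units):
        color = UNIT_COLORS.get(unit, (255, 255, 255))
        set_led(index, dim_for_context(color, ambient_light))

# Example with a stand-in LED driver that just prints the commands:
if __name__ == "__main__":
    show_sequence(["hold_in_palm", "speak_to_device", "set_down"], 0.6,
                  lambda i, rgb: print(f"LED {i} -> {rgb}"))
```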
34. The method of claim 1, wherein the plurality of user interaction units correspond with sequentially performed user interactions.

35. The empathetic computing device of claim 23, wherein the plurality of user interaction units correspond with sequentially performed user interactions.

36. The method of claim 1, wherein the generating a pattern of lights comprises sequentially illuminating one or more of the plurality of illumination sources.

37. The method of claim 1, wherein each light in the pattern comprises a light expression generated by illuminating one or more of the plurality of illumination sources.

38. An empathetic computing device comprising: a processor; a plurality of sensors configured to generate sensor data based on user interaction with the empathetic computing device, the user interaction comprising a plurality of interaction units; and a memory operatively coupled to the plurality of sensors and the processor, the memory comprising stored associations between exemplary sensor data and pre-determined interaction units, the memory further comprising processor-executable instructions which, when executed by the processor, cause the empathetic computing device to: receive contextual information associated with the user interaction; classify the sensor data as a sequence of interaction units using the stored associations between exemplary sensor data and pre-determined interaction units; and provide feedback based at least in part on the sequence of interaction units and the contextual information, wherein the plurality of user interaction units correspond with sequentially performed user interactions.

39. The empathetic computing device of claim 38, wherein the processor includes an extraction processor configured to receive the sensor data, filter the sensor data, and perform feature extraction on the filtered sensor data.

40. The empathetic computing device of claim 39, wherein the feature extraction comprises extracting features from simultaneously recorded data from a plurality of sensors of different types.

41. The empathetic computing device of claim 38, wherein the feature extraction comprises performing speech and face recognition.

42. The empathetic computing device of claim 38, further comprising a plurality of light sources, and wherein the instructions to provide feedback based at least in part on the sequence of interaction units and the contextual information include instructions to illuminate one or more of the plurality of light sources in a pattern.

43. The empathetic computing device of claim 42, wherein the pattern corresponds to the sequence of interaction units, a color of each illuminated light source selected based on respective ones of the interaction units in the sequence.

44. The empathetic computing device of claim 38, wherein the processor, the memory, one or more sensors of the plurality of sensors, and one or more light sources of the plurality of light sources are enclosed in an enclosure configured to fit in a palm of a user.

45. The empathetic computing device of claim 38, wherein the plurality of sensors includes a touch sensitive surface disposed on a bottom side of the empathetic computing device.

46. The empathetic computing device of claim 45, wherein the plurality of sensors further comprises a plurality of infrared sensors configured to determine proximity of the user to the empathetic computing device.
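For the extraction processor of claims 24-26 and 39-41 (filter the sensor data, then extract features from simultaneously recorded data of different sensor types), a minimal sketch might look like the following. The moving-average filter, the channel names, and the cross-modal feature are assumptions for illustration, not claim language.

```python
# Illustrative sketch only: an "extraction processor" stage that low-pass filters raw
# sensor channels and then derives features, including a cross-channel feature from
# simultaneously recorded data of different sensor types.
from typing import Dict, List

def moving_average(samples: List[float], window: int = 5) -> List[float]:
    """Simple low-pass filter applied to each raw sensor channel."""
    out: List[float] = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

def extract_features(raw: Dict[str, List[float]]) -> Dict[str, float]:
    """Filter each channel, then derive per-channel and cross-channel features."""
    filtered = {name: moving_average(ch) for name, ch in raw.items()}
    features: Dict[str, float] = {}
    for name, ch in filtered.items():
        features[f"{name}_mean"] = sum(ch) / len(ch)
        features[f"{name}_peak"] = max(ch)
    # A cross-modal feature combining microphone and touch channels, if both present.
    if "mic_level" in filtered and "touch_pressure" in filtered:
        features["speech_while_held"] = (
            features["mic_level_mean"] * features["touch_pressure_mean"])
    return features

# Example: simultaneously recorded microphone level and touch pressure.
if __name__ == "__main__":
    print(extract_features({
        "mic_level": [0.1, 0.4, 0.8, 0.7, 0.2],
        "touch_pressure": [0.9, 0.9, 1.0, 1.0, 0.8],
    }))
```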
47. A method of interfacing with an empathetic computing system, the method comprising: receiving sensor data from sensors of an empathetic computing device, wherein the sensor data is generated by user interaction with the empathetic computing device, the user interaction comprising a plurality of interaction units; receiving contextual information associated with the user interaction; classifying the sensor data as a sequence of interaction units using stored associations between exemplary sensor data and pre-determined interaction units; and providing feedback with the empathetic computing device, wherein the feedback is based, at least in part, on the sequence of interaction units and the contextual information, wherein the plurality of user interaction units correspond with sequentially performed user interactions.

48. The method of claim 47, wherein the user interaction comprises natural human actions.

49. The method of claim 47, wherein the receiving sensor data comprises receiving an indication of user proximity, user motion, detected speech, detected facial expression, or an ambient condition.

50. The method of claim 47, wherein the contextual information includes, at least in part, information extracted from the sensor data.

51. The method of claim 47, wherein the contextual information includes ambient light level, ambient sound level, ambient temperature, date, time, location of the empathetic computing device, or combinations thereof.

52. The method of claim 47, wherein the providing feedback comprises generating a pattern of lights with a plurality of illumination sources of the empathetic computing device.

53. The method of claim 52, wherein a color of a light in the pattern is based on an interaction unit of the plurality of interaction units of the sequence of interaction units.

54. The method of claim 53, wherein the color is further based on the contextual information associated with a respective interaction unit.

55. The method of claim 47, further comprising generating, for at least one interaction unit in the sequence, feedback comprising a pattern of lights, wherein the pattern is indicative of content of vocalized speech extracted from the user interaction.

56. The method of claim 47, wherein the providing feedback comprises generating an audible sound with an audio generator of the empathetic computing device, a vibrator of the empathetic computing device, or a combination of the two.

57. The method of claim 56, wherein the providing feedback further comprises generating a vibrational response.

58. The method of claim 47, further comprising segmenting the sensor data into a plurality of sensor data sets, each associated with a respective one of the plurality of interaction units, comparing a first sensor data set of the plurality of sensor data sets with exemplary sensor data to identify a first interaction unit, and comparing a second sensor data set of the plurality of sensor data sets with the exemplary sensor data to identify a second interaction unit.

59. The method of claim 47, further comprising storing data sets corresponding to sequences of interaction units associated with a plurality of interaction sessions, and updating an interaction model of the empathetic computing device using the stored data sets.

60. The method of claim 59, further comprising transmitting the data sets to a server, and wherein the updating an interaction model of the empathetic computing device is performed by the server.
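Claims 18-19 and 59-60 recite storing per-session data sets and updating an interaction model, with the update optionally performed by a server. A minimal sketch of that flow follows, assuming a simple count-based model and a hypothetical JSON payload; neither the payload shape nor the class name comes from the claims.

```python
# Illustrative sketch only: batch stored interaction sessions into a payload, send it
# to a server (stubbed here), and receive an updated interaction model back.
import json
from typing import Dict, List

class InteractionModelClient:
    """Collects per-session sequences of interaction units and asks a server-side
    routine to return an updated interaction model (here, per-unit prior counts)."""

    def __init__(self) -> None:
        self.stored_sessions: List[List[str]] = []

    def store_session(self, units: List[str]) -> None:
        """Store the data set corresponding to one interaction session."""
        self.stored_sessions.append(units)

    def build_payload(self) -> str:
        """Serialize the stored data sets for transmission to the server."""
        return json.dumps({"sessions": self.stored_sessions})

    @staticmethod
    def update_model_server_side(payload: str) -> Dict[str, int]:
        """Stand-in for the server's model update: count how often each unit occurs."""
        counts: Dict[str, int] = {}
        for session in json.loads(payload)["sessions"]:
            for unit in session:
                counts[unit] = counts.get(unit, 0) + 1
        return counts

if __name__ == "__main__":
    client = InteractionModelClient()
    client.store_session(["hold_in_palm", "speak_to_device"])
    client.store_session(["hold_in_palm", "set_down"])
    print(InteractionModelClient.update_model_server_side(client.build_payload()))
```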
61. A method of interfacing with an empathetic computing system, the method comprising: receiving sensor data from sensors of an empathetic computing device, wherein the sensor data is generated by user interaction with the empathetic computing device, the user interaction comprising a plurality of interaction units; receiving contextual information associated with the user interaction; classifying the sensor data as a sequence of interaction units using stored associations between exemplary sensor data and pre-determined interaction units; and providing feedback with the empathetic computing device, wherein the feedback is based, at least in part, on the sequence of interaction units and the contextual information, the method further comprising generating, for at least one interaction unit in the sequence, feedback comprising a pattern of lights with a plurality of illumination sources of the empathetic computing device, wherein the pattern of lights is indicative of vocalized speech extracted from the user interaction.

62. The method of claim 61, wherein the generating, for at least one interaction unit in the sequence, feedback comprising a pattern of lights includes generating two distinct patterns of lights, each corresponding to a different word from the vocalized speech.

63. The method of claim 61, wherein the pattern is further associated with one or more colors indicative of an interaction unit of the plurality of interaction units in the sequence.
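Claims 10-11 and 61-62 recite light patterns indicative of the content of vocalized speech, with a distinct pattern per word. One possible way to derive a repeatable per-word pattern is sketched below; the hash-based mapping, the three-LED pattern size, and the 12-LED ring are illustrative assumptions only, not the claimed technique.

```python
# Illustrative sketch only: derive a distinct, repeatable light pattern for each word
# of vocalized speech, so that two different words yield two distinct patterns.
import hashlib
from typing import List, Tuple

NUM_LEDS = 12  # assumed ring size for this example

def pattern_for_word(word: str) -> List[Tuple[int, Tuple[int, int, int]]]:
    """Map a word to a repeatable list of (LED index, RGB color) pairs via a hash digest."""
    digest = hashlib.sha256(word.lower().encode()).digest()
    pattern = []
    for i in range(3):  # three lit LEDs per word, purely illustrative
        led = digest[i] % NUM_LEDS
        color = (digest[3 + i], digest[6 + i], digest[9 + i])
        pattern.append((led, color))
    return pattern

def patterns_for_speech(words: List[str]) -> List[List[Tuple[int, Tuple[int, int, int]]]]:
    """One distinct pattern per recognized word, shown in sequence."""
    return [pattern_for_word(w) for w in words]

if __name__ == "__main__":
    for word, pattern in zip(["hello", "world"], patterns_for_speech(["hello", "world"])):
        print(word, pattern)
```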