The present application discloses a computing device that can provide a low-power, highly capable computing platform for computational imaging. The computing device can include one or more processing units, for example one or more vector processors and one or more hardware accelerators, an intelligent memory fabric, a peripheral device, and a power management module. The computing device can communicate with external devices, such as one or more image sensors, an accelerometer, a gyroscope, or any other suitable sensor devices.
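The dataflow described above — a memory fabric feeding one array to a vector processor over one interface and a second array to a hardware accelerator over another — can be illustrated with a toy behavioral model. This is a minimal sketch under assumed semantics; all class and method names (`MemoryFabric`, `read_via_first_interface`, the 3-tap moving-average filter) are illustrative and do not come from the disclosure itself.

```python
# Toy model of the disclosed dataflow: a memory fabric routes one array
# to a vector processor and another to a hardware accelerator.
# All names here are illustrative assumptions, not from the patent.

class MemoryFabric:
    """Holds memory slices and exposes two read interfaces."""
    def __init__(self, num_slices=4):
        self.slices = [[] for _ in range(num_slices)]

    def store(self, slice_idx, values):
        self.slices[slice_idx] = list(values)

    # First interface: couples the vector processors to the slices.
    def read_via_first_interface(self, slice_idx):
        return list(self.slices[slice_idx])

    # Second interface: couples the hardware accelerator to the slices.
    def read_via_second_interface(self, slice_idx):
        return list(self.slices[slice_idx])

class VectorProcessor:
    """Executes a single SIMD-style instruction over an array of values."""
    def execute(self, instruction, values):
        return [instruction(v) for v in values]

class HardwareAccelerator:
    """Applies a fixed filtering operation (here: a 3-tap moving average)."""
    def filter(self, values):
        out = []
        for i in range(len(values)):
            window = values[max(0, i - 1): i + 2]
            out.append(sum(window) / len(window))
        return out

# Host-processor role: stage both arrays, then dispatch each via its interface.
fabric = MemoryFabric()
fabric.store(0, [1, 2, 3, 4])      # first array, for the vector processor
fabric.store(1, [0, 10, 0, 10])    # second array, for the accelerator

vp = VectorProcessor()
acc = HardwareAccelerator()
doubled = vp.execute(lambda v: v * 2, fabric.read_via_first_interface(0))
smoothed = acc.filter(fabric.read_via_second_interface(1))
```

The point of the two-interface split is that the scalar-programmable path (vector processors) and the fixed-function path (accelerator) can pull operands from the same memory slices concurrently, without contending for a single port.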
Representative Claims
1. A computing device comprising:
a plurality of vector processors, wherein one of the plurality of vector processors is configured to execute an instruction that operates on a first array of values;
a hardware accelerator configured to perform a filtering operation on a second array of values;
a memory fabric comprising a plurality of memory slices and an interconnect system having a first interface and a second interface, wherein the first interface is configured to couple the plurality of vector processors to the plurality of memory slices and wherein the second interface is configured to couple the hardware accelerator to the plurality of memory slices;
a host processor configured to cause the memory fabric to provide the first array of values to the one of the plurality of vector processors via the first interface and to provide the second array of values to the hardware accelerator via the second interface, thereby enabling the one of the plurality of vector processors to process the first array of values in accordance with the instruction and enabling the hardware accelerator to process the second array of values in accordance with the filtering operation; and
a peripheral device coupled to a plurality of input/output (I/O) pins, wherein the peripheral device is configured to provide a communication channel between at least one of the plurality of vector processors and an external device, wherein the peripheral device comprises an emulation module that is configured to cause the peripheral device to emulate a functionality of a plurality of standard protocol interfaces via a common set of the I/O pins.

2. The computing device of claim 1, further comprising a plurality of power islands each comprising at least one power domain, wherein a first of the plurality of power islands is coupled to a first supply voltage to provide the first supply voltage to one of the plurality of vector processors, and wherein a second of the plurality of power islands is coupled to a second supply voltage to provide the second supply voltage to the hardware accelerator.

3. The computing device of claim 2, further comprising a power management module configured to provide an enable signal to a switch that couples the first of the plurality of power islands to the first supply voltage, thereby placing the one of the plurality of vector processors into an active mode.

4. The computing device of claim 3, wherein the one of the plurality of vector processors comprises a logic circuit region for processing the first array of values and local memory for storing at least a subset of the first array of values, and wherein the power management module is configured to cause the first supply voltage to be provided to the logic circuit region and to cause a third supply voltage to be provided to the local memory to control a power consumption of the logic circuit region and the local memory independently.

5. The computing device of claim 3, wherein the power management module is configured to turn off the switch to disconnect the first of the plurality of power islands from the first supply voltage, thereby placing the one of the plurality of vector processors into a low-power mode.

6. The computing device of claim 3, wherein the power management module comprises a valid signal generator configured to generate a valid signal indicating a time instance at which circuit blocks in the first of the plurality of power islands are ready to process input data, wherein the valid signal generator comprises a daisy chain of switches that provides the first supply voltage to the circuit blocks in the first of the plurality of power islands.

7. The computing device of claim 1, wherein the peripheral device is within a power island that is always powered on.

8. The computing device of claim 7, wherein the peripheral device is configured to monitor signals from the external device to detect an event to which one of the plurality of vector processors should respond, and, when the peripheral device detects the event, cause the power management module to place the one of the plurality of vector processors into the active mode.

9. The computing device of claim 1, wherein the peripheral device is coupled to a differential pair of I/O pins, and the peripheral device is configured to change a polarity of the differential pair based on a polarity control signal.

10. The computing device of claim 9, wherein the differential pair of I/O pins comprises a differential pair of Mobile Industry Processor Interface (MIPI) lanes.

11. The computing device of claim 1, wherein the peripheral device comprises a bypass buffer that is configured to perform a bypass between an input I/O pin and an output I/O pin, thereby providing a communication channel between the input I/O pin and the output I/O pin without placing the one of the plurality of vector processors in an active mode.

12. A method comprising:
providing a memory fabric comprising a plurality of memory slices and an interconnect system having a first interface and a second interface;
coupling, using the first interface, the plurality of memory slices and a plurality of vector processors;
coupling, using the second interface, the plurality of memory slices and a hardware accelerator;
providing, by the memory fabric, a first array of values to one of the plurality of vector processors via the first interface and providing a second array of values to the hardware accelerator via the second interface;
executing, at the one of the plurality of vector processors, an instruction that operates on the first array of values;
performing, by the hardware accelerator, a filtering operation on the second array of values;
providing a peripheral device coupled to a plurality of input/output (I/O) pins, wherein the peripheral device is associated with a power island that is always powered on; and
emulating, by the peripheral device, a functionality of a plurality of standard protocol interfaces via a common set of the I/O pins.

13. The method of claim 12, further comprising: providing a first supply voltage to one of the plurality of vector processors; and providing a second supply voltage to the hardware accelerator, wherein the one of the plurality of vector processors and the hardware accelerator are associated with a first power island and a second power island, respectively.

14. The method of claim 13, further comprising providing, by a power management module, an enable signal to a switch that couples the first power island to the first supply voltage, thereby placing the one of the plurality of vector processors into an active mode.

15. The method of claim 13, further comprising generating a valid signal, indicating a time instance at which circuit blocks in the first power island are ready to process input data, using a daisy chain of switches that provides the first supply voltage to the circuit blocks in the one of the plurality of vector processors.

16. The method of claim 14, further comprising monitoring signals from an external device to detect an event to which the one of the plurality of vector processors should respond, and causing the power management module to place the one of the plurality of vector processors into the active mode.

17. The method of claim 12, wherein the peripheral device is coupled to a differential pair of I/O pins, and the method further comprises changing a polarity of the differential pair based on a polarity control signal.

18. The method of claim 12, further comprising performing a bypass between an input I/O pin and an output I/O pin using a bypass buffer, thereby providing a communication channel between the input I/O pin and the output I/O pin without placing the one of the plurality of vector processors in an active mode.
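The power-island scheme in claims 2 through 6 — a supply switch per island that the power management module enables, plus a valid signal that asserts only after a daisy chain of switches has powered every circuit block — can be sketched behaviorally. This is a minimal model under assumed semantics; the class names, voltages, and fixed chain length are illustrative, not taken from the disclosure.

```python
# Illustrative model of the power-island claims: each island has its own
# supply switch; the power management module asserts an enable signal to
# enter active mode and de-asserts it for low-power mode. The valid
# signal asserts only once every switch in the daisy chain has closed.
# Names, voltages, and chain length are assumptions for this sketch.

class PowerIsland:
    def __init__(self, name, supply_voltage, chain_length=3):
        self.name = name
        self.supply_voltage = supply_voltage
        # One switch per stage of the daisy chain, all initially open.
        self.chain = [False] * chain_length

class PowerManagementModule:
    def __init__(self, islands):
        self.islands = {i.name: i for i in islands}

    def enable(self, name):
        """Assert the enable signal: the chain closes stage by stage,
        rippling the supply voltage through the island (active mode)."""
        island = self.islands[name]
        for stage in range(len(island.chain)):
            island.chain[stage] = True  # each switch powers the next

    def disable(self, name):
        """Open all switches, disconnecting the island (low-power mode)."""
        island = self.islands[name]
        island.chain = [False] * len(island.chain)

    def valid_signal(self, name):
        """Valid asserts only when the last stage of the chain is powered,
        i.e. every circuit block is ready to process input data."""
        return all(self.islands[name].chain)

pmm = PowerManagementModule([
    PowerIsland("vector_processor_0", supply_voltage=0.9),
    PowerIsland("hw_accelerator", supply_voltage=0.8),
])
pmm.enable("vector_processor_0")
ready = pmm.valid_signal("vector_processor_0")  # island fully powered
pmm.disable("vector_processor_0")
idle = pmm.valid_signal("vector_processor_0")   # island in low-power mode
```

Gating the valid signal on the last link of the chain is what keeps downstream logic from sampling inputs while part of the island is still ramping up, which is the role claims 6 and 15 assign to the daisy chain.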
Patents Cited by This Patent (33)
Comair, Claude; Li, Xin; Abou-Samra, Samir; Champagne, Robert; Fam, Sun Tjen; Ghali, Prasanna; Pan, Jun, 3D transformation matrix compression and decompression.
Seong, Nak Hee; Lim, Kyoung Mook; Jeong, Seh Woong; Park, Jae Hong; Im, Hyung Jun; Bae, Gun Young; Kim, Young Duck, Apparatus and method for dispatching very long instruction word having variable length.
Iwata, Yasushi; Asato, Akira, Data processing device to compress and decompress VLIW instructions by selectively storing non-branch NOP instructions.
Pitsianis, Nikos P.; Pechanek, Gerald G.; Rodriguez, Ricardo E., Efficient complex multiplication and fast Fourier transform (FFT) implementation on the ManArray architecture.
Coleman, Charles H.; Miller, Sidney D.; Smidth, Peter, Method and apparatus for image data compression using combined luminance/chrominance coding.
Pechanek, Gerald G.; Revilla, Juan Guillermo; Barry, Edwin F., Methods and apparatus for dynamic very long instruction word sub-instruction selection for execution time parallelism in an indirect very long instruction word processor.
Drabenstott, Thomas L.; Pechanek, Gerald G.; Barry, Edwin F.; Kurak, Charles W., Jr., Methods and apparatus to support conditional execution in a VLIW-based array processor with subword execution.
Hall, William E.; Stigers, Dale A.; Decker, Leslie F., Parallel vector processing system for individual and broadcast distribution of operands and control information.
Topham, Nigel Peter, Processor and method for generating and storing compressed instructions in a program memory and decompressed instructions in an instruction cache wherein the decompressed instructions are assigned imaginary addresses derived from information stored in the program memory with the compressed instructions.
Booth, Lawrence A., Jr.; Rosenzweig, Joel; Burr, Jeremy, System and method for high-speed communications between an application processor and coprocessor.
Haikonen, Pentti; Juhola, Janne M.; Latva-Rasku, Petri, Video compressing method wherein the direction and location of contours within image blocks are defined using a binary picture of the block.