IPC Classification Information
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC 7th edition): (not listed)
Application Number: US-0870946 (2001-06-01)
Registration Number: US-7469237 (2008-12-23)
Inventor / Address: (not listed)
Applicant / Address: (not listed)
Agent / Address: (not listed)
Citation Information: cited by 10 patents; cites 32 patents
Abstract
Fractal computers are neural network architectures that exploit the characteristics of fractal attractors to perform general computation. This disclosure explains neural network implementations for each of the critical components of computation: composition, minimalization, and recursion. It then describes the creation of fractal attractors within these implementations by means of selective amplification or inhibition of input signals, and it describes how to estimate critical parameters for each implementation by using results from studies of fractal percolation. These implementations provide standardizable implicit alternatives to traditional neural network designs. Consequently, fractal computers permit the exploitation of alternative technologies for computation based on dynamic systems with underlying fractal attractors.
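The abstract's reliance on fractal percolation to estimate critical parameters can be illustrated with a small simulation. The sketch below is not taken from the patent; it is a minimal Python rendering of Mandelbrot-style fractal percolation, in which each surviving cell is recursively subdivided into b x b subcells that survive independently with probability p. The connectivity of the limiting set changes character at a critical probability, which is the kind of parameter the disclosure says can be estimated from percolation results. All names are hypothetical.

import numpy as np

def fractal_percolation(p, b=2, depth=6, seed=0):
    """Return a boolean b**depth x b**depth grid of cells surviving
    `depth` rounds of subdivide-and-keep-with-probability-p."""
    rng = np.random.default_rng(seed)
    grid = np.ones((1, 1), dtype=bool)
    for _ in range(depth):
        # Subdivide every cell into a b x b block of subcells ...
        grid = grid.repeat(b, axis=0).repeat(b, axis=1)
        # ... and keep each subcell independently with probability p.
        grid &= rng.random(grid.shape) < p
    return grid

if __name__ == "__main__":
    # The expected surviving fraction after `depth` rounds is p**depth;
    # sweeping p shows the sharp thinning of the set below criticality.
    for p in (0.6, 0.8, 0.95):
        print(f"p={p}: surviving fraction = {fractal_percolation(p).mean():.4f}")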
Representative Claims
What is claimed is: 1. A computerized accelerated learning method for image-based decision system applications such as machine vision, non-contact gauging, inspection, robot guidance, medical imaging to accelerate learning maturity and enhance learning outcome comprises the following steps: (a) input learning samples images; (b) perform object of interest implantation on images using the learning samples images to generate simulated learning samples containing simulated objects of interest in the images; (c) perform computerized algorithm learning using the input learning samples images and the simulated learning samples images. 2. The method of claim 1 wherein the object of interest implantation on images includes a texture mapping method extracting the defects from different products and mapping into normal images of the new product. 3. The method of claim 1 wherein the object of interest implantation on images uses geometry and intensity models defining the shape and pixel intensity of objects of interest. 4. The method of claim 1 wherein the object of interest implantation on images uses manual image editing of known good images to create negative or positive learning samples. 5. The method of claim 1 wherein the object of interest implantation on images uses a combination of methods selected from the set consisting of: (a) texture mapping method extracting the defects from different products and mapping into normal images of the new product, (b) geometry and intensity modeling defining the shape and pixel intensity of objects of interest, and (c) manual image editing of known good images to create negative or positive learning samples. 6. The method of claim 1 wherein learning includes a computerized algorithm training process. 7. The method of claim 1 wherein the learning includes a computerized startup learning process. 8. The method of claim 3 wherein the geometry and intensity models use one or more image models selected from the set consisting of: (a) image circle model, (b) image donut model, (c) image rectangle model, (d) image spline curve model, and (e) image comet model. 9. An accelerated computerized algorithm training method for image-based decision system applications such as machine vision, non-contact gauging, inspection, robot guidance, medical imaging to accelerate learning maturity and enhance learning outcome comprises the following steps: (a) input learning samples images; (b) perform object of interest implantation on images using the learning samples to generate simulated learning samples containing simulated objects of interest in the images; (c) perform computerized algorithm training using the learning samples images and the simulated learning samples images. 10. The method of claim 9 wherein the object of interest implantation on images includes a texture mapping method extracting the defects from different products and mapping into normal images of the new product. 11. The method of claim 9 wherein the object of interest implantation on images uses geometry and intensity models defining the shape and pixel intensity of objects of interest. 12. The method of claim 9 wherein the object of interest implantation on images uses manual image editing of known good images to create negative or positive learning samples. 13. 
The method of claim 9 wherein the object of interest implantation on images uses a combination of methods selected from the set consisting of: (a) texture mapping method extracting the defects from different products and mapping into normal images of the new product, (b) geometry and intensity modeling defining the shape and pixel intensity of objects of interest, and (c) manual image editing of known good images to create negative or positive learning samples. 14. The method of claim 9 wherein the computerized algorithm training further comprises: (a) input additional learning sample images following initial computerized algorithm training; (b) perform testing using the additional learning samples images and adjustment to achieve the performance goals, and (c) output a general computerized algorithm including algorithm architecture and default parameters. 15. The method of claim 11 wherein the geometry and intensity models use at least one image model selected from the set consisting of: (a) image circle model, (b) image donut model, (c) image rectangle model, (d) image spline curve model, and (e) image comet model. 16. The method of claim 14 further comprising input performance goals and expected tolerances for the computerized applications. 17. The method of claim 9 wherein the object of interest implantation on images comprises: (a) input expected computer application tolerances; (b) output initial simulated learning samples images using initial learning sample images and expected computer application tolerances; (c) input additional learning samples images; (d) output additional simulated learning samples images using the additional learning sample images and expected computer application tolerances. 18. A computerized accelerated start-up learning method for image-based decision system applications such as machine vision, non-contact gauging, inspection, robot guidance, medical imaging to accelerate learning maturity and enhance learning outcome comprising: (a) input start-up learning sample images; (b) perform object of interest implantation on images using the start-up learning sample images to generate simulated learning samples containing simulated objects of interest in the images; (c) perform computerized start-up learning on a general computerized algorithm using the input start-up learning samples images and the simulated learning samples images. 19. The method of claim 18 wherein the object of interest implantation on images comprises a texture mapping method extracting the defects from different products and mapping into normal images of the new product. 20. The method of claim 18 wherein the object of interest implantation on images uses at least one geometry and intensity model defining the shape and pixel intensity of objects of interest. 21. The method of claim 18 wherein the simulated learning sample images simulate defective samples images. 22. The method of claim 18 wherein the object of interest implantation on images uses a combination of methods selected from the set consisting of: (a) texture mapping method extracting the defects from different products and mapping into normal images of the new product, (b) geometry and intensity modeling defining the shape and pixel intensity of objects of interest, and (c) manual image editing of known good images to create negative or positive learning samples. 23. 
The method of claim 18 wherein the computerized startup learning further comprises: (a) input at least one start-up learning samples images; (b) input a computerized general algorithm; (c) output an application specific computerized algorithm using the at least one start-up learning samples images; (d) perform automatic computerized adjustment using simulated learning samples images to generate an application specific computerized algorithm. 24. The method of claim 20 wherein the geometry and intensity models use at least one image model selected from the set consisting of: (a) image circle model, (b) image donut model, (c) image rectangle model, (d) image spline curve model, and (e) image comet model. 25. The method of claim 17, wherein at least one of the plurality of modules includes a CNOT gate. 26. The method of claim 1, further including the steps of: copying values for critical probabilities for a lattice formed by the computational nodes from the neural network architecture; and writing the values to another neural network. 27. An apparatus for implicit digital computation comprising: a neural network architecture means having a plurality of layer means, each layer means comprising a plurality of computational nodes, each of the plurality of computational nodes being implemented as a software process on a general purpose computer or by a digital or analog hardware device, the plurality of layer means comprising: a processing layer means including: at least one input processing layer means which is capable of rendering a stable transformable digital representation of input signals, at least one central processing layer means, and at least one output processing layer means; feedforward input channel means; full lateral and feedback connection means within the processing layer means; output channel means; re-entrant feedback means from the output channel means to the processing layer means; means for updating each of the plurality of computational node means using local update processes; and means for using re-entrant feedback from the output channel means to perform minimalization for general computation such that said stable transformable digital representations of input signals are distributed to the plurality of computational nodes which combine the stable transformable digital representations of input signals according to the interconnectivity of the at least one input processing layer and the plurality of computational layers, and which perform minimalization steps on the plurality of combinations to match at least one specified success criteria whereby the said minimalization step selects from an inventory of at least one output value based on the said plurality of combinations at the moment of the selection step to minimize the difference between the said plurality of stable transformable combinations and the said at least one success criteria, whereby the selection weight for said selection is decreased when the plurality of subsequent input stable transformable combinations diverges from the said at least one success criteria, and the selection weight for said selection is increased when the plurality of subsequent input stable transformable combinations converges with said at least one success criteria. 28. The apparatus of claim 27, wherein the output channel means uses feedforward connection means between the output channel means and the processing layer means. 29. 
The apparatus of claim 27, wherein the output channel means uses bi-directional connection means between the output channel means and the processing layer means. 30. The apparatus of claim 27, wherein the re-entrant feedback means is uni-directional. 31. The apparatus of claim 27, wherein the re-entrant feedback means is bi-directional. 32. The apparatus of claim 27, wherein the local update processes are any one of: random processes between the plurality of adaptive computational nodes, non-stationary random processes between the plurality of computational nodes, Polya processes between the plurality of adaptive computational node means, and Bose-Einstein processes between the plurality of adaptive computational node means. 33. The apparatus of claim 27, wherein the local update processes are Bose-Einstein processes and the plurality of computational nodes are lasers. 34. The apparatus of claim 27, wherein the local update processes are a phase change in the plurality of computational nodes. 35. The apparatus of claim 27, wherein the local update processes are a Bose-Einstein condensation in the plurality of computational nodes. 36. The apparatus of claim 27, wherein the local update processes are quantum measurements performed on the plurality of computational nodes. 37. The apparatus of claim 27, wherein the local update processes perform nearest-neighbor normalization among the plurality of computational nodes. 38. The apparatus of claim 27, wherein the local update processes create a Delaunay tessellation from one layer means to a next layer means. 39. The apparatus of claim 27, wherein the local update processes include inhibition between at least one of the adaptive computational node means and at least one other of the plurality of computational nodes. 40. The apparatus of claim 27, wherein the local update processes cause fractal percolation among the plurality of computational nodes. 41. The apparatus of claim 27, wherein the minimalization recalibrates the adaptive computational node means in the processing layer means. 42. The apparatus of claim 27, wherein the minimalization step is triggered by fractal percolation across the plurality of computational nodes. 43. The apparatus of claim 27, wherein the minimalization step is a quantum measurement performed on the plurality of computational nodes. 44. The apparatus of claim 27, wherein the plurality of layer means is one module means in an architecture with a plurality of module means. 45. The apparatus of claim 44, wherein one of the plurality of module means is an attention module means that includes: at least two layer means connected by bi-directional connection means; and lateral connectivity means to at least two processing layer means belonging to other module means. 46. The apparatus of claim 44, wherein one of the plurality of module means is a standard digital memory means. 47. The apparatus of claim 44, wherein one of the plurality of module means is a dynamic memory means that includes: a plurality of layer means including a plurality of computational nodes; feedforward input means from the output channel means of other module means; feedforward connectivity means from a first to a last layer means; and feedforward re-entrant connection means to the first layer means using bi-directional connectivity means between the module means and the processing layer means from the other module means. 48. 
The apparatus of claim 44, wherein the module means have standardized dimensions, a standardized number of layer means, and a standardized number of adaptive computational node means, and wherein each module means includes: connectivity means from the input processing layer means of each module means to a neighboring module means; and lateral connectivity means between a corresponding processing layer means to permit leaky processing. 49. The apparatus of claim 44, wherein the plurality of module means provide converging and diverging connection means from the output layer means of each module means to processing layer means of other module means. 50. The apparatus of claim 44, wherein one module means includes at least one of a logic gate, a NAND gate, and a CNOT gate. 51. A method for computation using a neural network architecture in a computing device comprising the steps of: organizing a plurality of computational nodes into a neural computing device, each of the plurality of computational nodes being implemented as a software process on a general purpose computer or by a digital or analog hardware device, wherein the architecture of the neural computing device comprises: using at least one stable transformable digital representation as an input to receive data to be processed from an environment; using a locally connected subset of the plurality of computational nodes for fractal percolation using the at least one stable transformable digital representation; using a minimalization step for computation; and using at least one output to output processed data that can be used by a human or as an input to a machine. 52. The method of claim 51, further including the steps of: copying values for critical probabilities for a lattice formed by the computational nodes from the neural network architecture; and writing the values to another neural network. 53. The method of claim 51, wherein the plurality of locally connected computation nodes has a state space of dimension of at least two. 54. The method of claim 51, wherein the plurality of locally connected computation nodes has a Hilbert state space. 55. The method of claim 51, wherein connections among the plurality of locally connected computation nodes extend beyond nearest neighbors. 56. The method of claim 51, wherein connections among the plurality of locally connected computation nodes are feedforward connections, leading to "first-pass" percolation. 57. The method of claim 51, wherein at least one connection among the plurality of locally connected computation nodes is an inhibitory connection. 58. The method of claim 51, wherein the minimalization step is performed by re-entrant connections to the plurality of locally connected computation nodes. 59. The method of claim 51, wherein the fractal percolation across the plurality of locally connected computational nodes occurs by one of a random process, a Poisson process, a non-stationary random process, a Polya process, and a Bose-Einstein statistical process. 60. The method of claim 51, wherein the fractal percolation across the plurality of locally connected computation nodes occurs by nearest-neighbor renormalization. 61. The method of claim 51, wherein the minimalization step is performed by re-scaled weights among the plurality of locally connected computation nodes. 62. The method of claim 51, wherein the minimalization step is performed by a quantum measurement. 63. 
The method of claim 51, wherein the plurality of locally connected computational nodes are one module in an architecture with a plurality of modules. 64. The method of claim 63, wherein one of the plurality of modules is an attention module using bi-directional connections with at least one other module. 65. The method of claim 63, wherein one of the plurality of modules is a standard digital memory. 66. The method of claim 63, wherein one of the plurality of modules is a dynamic memory including: a plurality of locally connected computational nodes, with feedforward inputs from another module, feedforward connectivity among the plurality of locally connected computational nodes using at least one feedforward re-entrant connection to the first of the plurality of locally connected computational nodes; and bi-directional connectivity between the modules. 67. The method of claim 63, wherein the plurality of modules have standardized dimensions and numbers of the plurality of locally connected computational nodes, wherein random connectivity among neighboring modules permits leaky processing. 68. The method of claim 63, wherein the plurality of modules provide converging and diverging connections from the at least one output of one module to another module. 69. The method of claim 63, wherein each module includes at least one of a logic gate, a NAND gate, and a CNOT gate. 70. An apparatus for implicit computation comprising: a neural network architecture means including: an input means from an environment capable of rendering a stable transformable digital representation of the environment; an output means; and a plurality of locally connected computation nodes, each of the plurality of computation nodes being implemented as a software process on a general purpose computer or by a digital or analog hardware device, wherein the plurality of locally connected computation nodes is organized to perform fractal percolation using said stable transformable digital representations, wherein a minimalization step is used for computation. 71. The apparatus of claim 70, wherein the plurality of locally connected computation nodes has a state space of dimension at least two. 72. The apparatus of claim 70, wherein the plurality of locally connected computation nodes has a Hilbert state space. 73. The apparatus of claim 70, wherein the plurality of locally connected computation nodes include Rydberg atoms. 74. The apparatus of claim 70, wherein the plurality of locally connected computation nodes include molecular magnets. 75. The apparatus of claim 70, wherein the local connection means among the plurality of locally connected computation nodes extend beyond nearest neighbors. 76. The apparatus of claim 70, wherein the connection means connecting the plurality of locally connected computation nodes are all feedforward, leading to "first-pass" percolation. 77. The apparatus of claim 70, wherein at least one connection means connecting the plurality of locally connected computation nodes is an inhibitory connection means. 78. The apparatus of claim 70, wherein the fractal percolation across the plurality of locally connected computation nodes occurs by any one of a Polya process, Bose-Einstein condensation, a Poisson process, a non-stationary random process, nearest-neighbor renormalization, and a random process. 79. The apparatus of claim 70, wherein the fractal percolation across the plurality of locally connected computation nodes occurs across an Ising lattice. 80. 
The apparatus of claim 70, wherein the plurality of locally connected computation nodes includes at least one laser. 81. The apparatus of claim 70, wherein the minimalization step uses re-scaled weights among the plurality of locally connected computation nodes. 82. The apparatus of claim 70, wherein the minimalization step is performed by one of electron spin resonance pulses, quantum measurement, re-entrant connection means to the plurality of locally connected computation nodes, and coherent radiation. 83. The apparatus of claim 70, wherein the plurality of locally connected computational nodes is one module means in an architecture with a plurality of module means. 84. The apparatus of claim 83, wherein one of the plurality of module means is an attention module means using bi-directional connection means to connect with another module means. 85. The apparatus of claim 83, wherein one of the plurality of module means is a standard digital memory means. 86. The apparatus of claim 83, wherein one of the plurality of modules is a dynamic memory using a plurality of locally connected computational nodes, and further including: feedforward input means from the output means of a remainder of the module means; feedforward connectivity means among the plurality of locally connected computational nodes; at least one feedforward re-entrant connection means to a first of the plurality of locally connected computational nodes; and bi-directional connectivity means between the module means and both the plurality of locally connected computational nodes and the remainder of the plurality of module means. 87. The apparatus of claim 83, wherein the plurality of module means have standardized dimensions and standardized numbers of the plurality of locally connected computational nodes, the computational nodes using random connectivity means to connect to neighboring members of the plurality of module means to permit leaky processing. 88. The apparatus of claim 83, wherein the plurality of module means provide converging and diverging connection means from one output means of the plurality of module means to at least one other module means. 89. The apparatus of claim 83, wherein each module means includes one of a logic gate, a NAND gate, and a CNOT gate. 90. The apparatus of claim 70, further comprising: storage means for copying values for critical probabilities for a lattice formed by the computational nodes from the neural network architecture; and transfer means to write the values to another neural network means. 91. 
A system comprising: a plurality of computational nodes, wherein each computational node is implemented as a software process on a general purpose computer or by a digital or analog hardware device, each computational node comprising a local update process which transforms data received by the computational node, wherein: a first subset of the plurality of computational nodes is organized into at least one feedforward input channel operatively connected to an input digital or analog data source; a second subset of the plurality of computational nodes is organized into a plurality of processing layers having full lateral and feedback connections between the plurality of processing layers, at least one of the second subset of the plurality of computational nodes being operatively connected to at least one of the first subset of the plurality of computational nodes; a third subset of the plurality of computational nodes is organized into at least one output channel operatively connected to a data storage device or network, at least one of the third subset of the plurality of computational nodes being operatively connected to at least one of the second subset of the plurality of computational nodes comprising at least one re-entrant feedback channel, wherein the feedforward input channel receives data from the external data source, transforms the data to a digital format, and distributes the data in the digital format to at least one of the second subset of the plurality of computational nodes, wherein the data in the digital format is processed by the plurality of processing layers using the local update processes of the nodes comprising the plurality of processing layers, using the full lateral and feedback connections within the processing layers and using re-entrant feedback from the re-entrant feedback channel such that the data in the digital format is combined and minimalized, and wherein the combined data is output by the at least one output channel to the data storage device or network.
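Claims 1-24 describe accelerating learning by implanting simulated objects of interest (for example, defects) into known-good images. The following Python sketch is an illustration of that idea only, not the patented method: a toy geometry-and-intensity model (a circular Gaussian blob, loosely in the spirit of the claimed image circle model) is blended into a synthetic good image to produce a simulated learning sample. The helper names, the blending rule, and the alpha parameter are assumptions.

import numpy as np

def gaussian_blob(size=15, peak=200.0, sigma=3.0):
    """Toy geometry/intensity model: a circular defect with Gaussian profile."""
    y, x = np.mgrid[:size, :size] - size // 2
    return peak * np.exp(-(x**2 + y**2) / (2 * sigma**2))

def implant_defect(good_image, defect_patch, top, left, alpha=0.8):
    """Blend a defect texture into a normal image at (top, left)."""
    out = good_image.astype(float).copy()
    h, w = defect_patch.shape
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (1 - alpha) * region + alpha * defect_patch
    return out.clip(0, 255).astype(np.uint8)

if __name__ == "__main__":
    good = np.full((64, 64), 128, dtype=np.uint8)   # hypothetical known-good image
    sample = implant_defect(good, gaussian_blob(), top=20, left=30)
    print("simulated learning sample, max intensity:", sample.max())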
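Claim 27's minimalization step selects, from an inventory of output values, the one that minimizes the difference between the combined inputs and a success criterion, then raises the selection weight when subsequent combinations converge toward the criterion and lowers it when they diverge. A loose numerical sketch of that reweighting rule, with an assumed learning rate eta and an assumed convergence test, might look like this:

import numpy as np

def minimalization_step(weights, combinations, success_criterion, eta=0.1):
    """Select the output value closest to the success criterion and
    reweight it according to convergence of the combined inputs."""
    errors = np.abs(combinations - success_criterion)
    choice = int(np.argmin(errors))              # selection step
    converging = errors[choice] < errors.mean()  # assumed convergence test
    weights[choice] *= (1 + eta) if converging else (1 - eta)
    return choice, weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    weights = np.ones(4)                         # one weight per output value
    for step in range(5):
        combos = rng.normal(0.5, 0.2, size=4)    # stand-in combined inputs
        choice, weights = minimalization_step(weights, combos, 0.5)
        print(f"step {step}: selected output {choice}, weights {np.round(weights, 3)}")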