An affordable, highly trustworthy, survivable, available, and operationally efficient distributed supercomputing infrastructure for processing, sharing, and protecting both structured and unstructured information. A primary objective of the SHADOWS infrastructure is to establish a highly survivable, essentially maintenance-free shared platform for extremely high-performance computing (i.e., supercomputing), where “high performance” is defined both in terms of total throughput and in terms of very low latency (although not every problem or customer necessarily requires very low latency), while achieving unprecedented levels of affordability. At its simplest, the idea is to use distributed “teams” of nodes in a self-healing network as the basis for managing and coordinating both the work to be accomplished and the resources available to do the work. The SHADOWS concept of “teams” is responsible for its ability to “self-heal” and “adapt” its distributed resources in an “organic” manner. Furthermore, the “teams” themselves are at the heart of decision-making, processing, and storage in the SHADOWS infrastructure. Everything that is important is handled under the auspices and stewardship of a team.
Representative Claims
1. A dynamically reconfigurable computing infrastructure comprising: a distributed network; and at least one team of geographically distributed computing nodes, the team of computing nodes comprising at least n nodes configured to use coding theory to achieve Byzantine agreement, wherein agreement among at least k of the n nodes is sufficient to tolerate f unknown faulty or malicious nodes and c known crashes or unresponsive nodes when (c+2f)<(n−k); and wherein a computing node comprises:
(a) a memory;
(b) at least one processor;
(c) a first component configured to perform computing and data processing functions which further comprise secure communications and access controls;
(d) a second component configured to recognize and differentiate between computing nodes by their cryptographic identities and the absence of anomalous behaviors, and to determine whether at least one of an object, a subject, and an interaction is authorized or unauthorized;
(e) a third component configured to establish trust between computing nodes and collaborate with computing nodes to establish teams of nodes;
(f) a fourth component configured to mutually associate nodes contained within the teams of nodes, wherein each such team of nodes is accountable for a portion of the aggregate responsibilities of the computing infrastructure; and
(g) a fifth component configured to timely publish selected aggregated resource information to the teams of nodes.

2. The computing infrastructure of claim 1 further comprising a sixth component configured to determine readiness or relative workload capacities of a computing node based on the computing node's local statistics and the aggregated resource information, the computing node's local statistics comprising at least one of its operational profile and survivability factors.

3. The computing infrastructure of claim 2 further comprising a seventh component configured to dynamically adapt a node to system-wide load balancing needs by volunteering for responsibilities or delegating responsibilities to nodes with a readiness sufficient to meet service level agreements and operational goals.

4. The computing infrastructure of claim 1 wherein a site comprising one or more computing nodes further comprises: at least one self-contained power subsystem that enables off-grid operation for extended periods using at least one of stored, renewable, and externally supplied non-grid energy sources; and a thermal energy transfer system configured to direct energy from a heat source using direct immersion and directed-flow conduction into a low-boiling-point phase-change working fluid, which fluid flows into, through, and out of the site in a closed loop.

5. The computing infrastructure of claim 1, wherein the teams of computing nodes are dynamically selected and are distributed geographically.

6. The computing infrastructure of claim 1 wherein a computing node further comprises a combination of interconnected and interacting processing devices, the devices further comprising: a set of slave devices; a set of master devices configured to control the physical and virtual environments seen by the set of slave devices; a set of memory processing devices controlled by the set of master devices configured to implement and accelerate selected memory-processing functions; and a set of communications devices controlled by the set of master devices configured to implement selected communications functions.

7. The computing infrastructure of claim 6 wherein the set of master devices further controls at least one of power utilization, efficiency, capacity, latency, and throughput of the computing nodes by optimizing a combination of individual variables comprising at least one of device enablement, device bypass, device redundancy, device programming, device modes, device utilizations, device load profiles, device connectivities, device voltages, device operating frequencies, device junction temperatures, working fluid temperatures, working fluid flow rates, working fluid paths, and working fluid pressures.

8. The computing infrastructure of claim 4 further comprising: an eighth component configured to dynamically and automatically self-modify a computing node's collective operational profile and associated readiness so as to economize operational resources and optimize their collective survivability while maintaining necessary trust relationships, such that if a computing node is no longer trusted in one or more operational roles, it is effectively removed from those roles.

9. The computing infrastructure of claim 8 further comprising: a ninth component configured to automatically select at least one opportunistic resource, by automatically shifting resource allocation among computing nodes.

10. The computing infrastructure of claim 9 wherein at least one grouping of computing nodes comprises one or more field-replaceable units.

11. The computing infrastructure of claim 10, further comprising: a high-speed inter-device communications subsystem comprising a plurality of communications mechanisms capable of efficiently and diversely interconnecting the primary components of a computing node to each other and to corresponding devices in other local computing nodes.

12. The computing infrastructure of claim 4 wherein the thermal energy transfer system further comprises routing complementary phases of the phase-change working fluid through a thermal pumping mechanism such that the primarily thermodynamic energy conversion effects of vapor-to-liquid phase-change combine with secondary Venturi effects to create a pressure-based motive force capable of propelling or assisting in propelling the working fluid toward downstream destinations.

13. The computing infrastructure of claim 4 wherein the thermal energy transfer system further comprises routing of at least a segregated portion of the phase-change working fluid through at least one of a ground loop and underground fluid reservoir.

14. The computing infrastructure of claim 4 wherein the thermal energy transfer system further comprises indirect routing of the phase-change working fluid from one or more heat sources through the primary circuit of an internal heat exchanger whose secondary circuit handles the flow of an external working fluid into, through, and out of the site to an external heat rejection system further comprising at least one of a heat exchanger for downstream heat uses, chilled water system, cooling tower, dry cooler, and ground-coupled heat exchanger.

15. The computing infrastructure of claim 4 wherein the phase-change working fluid is an organic dielectric fluid with a boiling point between 20° C. and 40° C., such as 1-methoxy-heptafluoropropane (C3F7OCH3).

16. The computing infrastructure of claim 1, further comprising: an energy transfer subsystem configured to capture thermal energy within the plurality of subsystems, such that the captured energy is used for internal power generation.

17. The computing infrastructure of claim 1 further comprising an associative memory that provides associative access to its content.

18. The computing infrastructure of claim 17 wherein the associative memory further comprises at least one of content compression, persistent content retention, and non-persistent content retention.

19. The computing infrastructure of claim 1 further comprising: a tenth component configured to selectively perform deterministic results memoization in order to avoid reprocessing inputs to deterministic processes whose results have been previously determined.

20. The computing infrastructure of claim 17 wherein the memory is configured for mandatory access control.
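The fault-tolerance bound in claim 1, (c+2f)<(n−k), can be evaluated directly for a candidate team configuration. The sketch below is illustrative only; the function name and example parameters are not from the patent.

```python
def tolerates(n: int, k: int, f: int, c: int) -> bool:
    """Claim 1 condition: agreement among at least k of n nodes
    tolerates f unknown faulty or malicious nodes and c known
    crashed/unresponsive nodes when (c + 2f) < (n - k)."""
    return (c + 2 * f) < (n - k)

# A 10-node team needing agreement among only 3 nodes can tolerate
# 2 Byzantine faults plus 2 known crashes: 2 + 2*2 = 6 < 10 - 3 = 7.
print(tolerates(10, 3, 2, 2))  # True

# Raising the agreement threshold to 5 violates the bound: 6 < 5 fails.
print(tolerates(10, 5, 2, 2))  # False
```

Note how the bound charges each unknown (Byzantine) fault twice as much as a known crash, reflecting that a malicious node can actively mislead agreement rather than merely fall silent.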
Patents citing this patent (3)
Arney, David V.; Bermeo, Dennis G.; Snyder, Glenn D.; Simonds, Hale B.; Berens, Peter S., Elliptical conical antenna apparatus and methods.