
A general-purpose organic gel computer that learns by itself


Published 6 December 2023 © 2023 The Author(s). Published by IOP Publishing Ltd
Citation: Pathik Sahoo et al 2023 Neuromorph. Comput. Eng. 3 044007. DOI: 10.1088/2634-4386/ad0fec


Abstract

To build energy-minimized superstructures, self-assembling molecules explore astronomical numbers of options, colliding at ∼10⁹ molecules s⁻¹. Thus far, no computer has exploited this fully to optimize choices and execute advanced computational theories purely by synthesizing supramolecules. To realize it, we first remotely rewrote the problem in a language that supramolecular synthesis comprehends. Then, an all-chemical neural network synthesizes one helical nanowire for each periodic event. These nanowires self-assemble into gel fibers that map intricate relations between periodic events in any data type, and the output is read instantly from an optical hologram. Problem by problem, the number of self-assembling layers, i.e. the depth of the neural network, is optimized to chemically simulate theories that discover invariants for learning. Consequently, synthesis alone solves classification and feature-learning problems instantly with single-shot training. The reusable gel initiates general-purpose computing that chemically invents suitable models for problem-specific unsupervised learning. Keeping computing time and power fixed irrespective of complexity, the gel promises a toxic-hardware-free world.

One-sentence summary: fractally coupled deep learning networks revisit Rosenblatt's 1950s theorem on deep learning networks.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Main text

Deep learning computers are revolutionizing human civilization by accurately optimizing user-conceived solution paths, finding the shortest route to the expected solution through extensive training [1]. Their hallmarks are switches and circuits; their demand is ever-increasing speed and resources, at the cost of nature and enormous toxic waste. The next revolution would bring computers that synthesize new deep networks and invent learning protocols in a single shot, or without training at all. Their hallmarks would be a new data structure and software-free, circuit-free, fully analog, reusable hardware that adapts to a changing environment [2]. Their demand would be fixed computing speed and resources irrespective of complexity, at the cost of the user's control. Realizing all of this, we present an organic nested deep learning network, ON2.

Thus far, analog computers have emulated either processor parts or mathematical operations [3, 4]. In contrast, the gel-based general-purpose analog computer ON2 executes an entire self-learning theory step by step. Precisely, we had to invent a deep learning network that operates in a chemical beaker and converges a mathematical simulation using the astronomical optimization power of supramolecular synthesis. It is impossible to read rapidly changing big data and instantly invent a suitable model. But when self-assembling molecules are encoded with computational choices, they collide to explore ∼10⁹ options s⁻¹, so optimization happens almost instantly, irrespective of complexity. This ability has remained unused because we did not know how to reversibly encode and decode computational choices into reactant molecules. Inspired by the clock assemblies of proteins and neurons [5], we invented a language to encode supramolecules [6]. We found no need to read the whole data bit by bit, only its periodic events, or loops, as clocks, such that the whole data could largely be regenerated by assembling discrete clocks suitably in 3D space. A clock is a variable; the 3D clock assembly CA maps all possible relations between all parameters (movie S1); it is a theoretical model invented in a chemical beaker. It represents geometric manifolds whose corners are resonance frequencies [7]. The new data structure (${\mathbb{R}^3}:\left( {x,y,z} \right) \to \left( {{\phi _1},{\phi _2},{\phi _3}} \right)$) enables encoding a problem as a geometric shape into a chemical reaction using the surface potential of molecular assembly.
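
To make the clock encoding concrete, here is a minimal sketch (in Python, not part of the paper's chemistry) of how the periodic events of a one-dimensional signal could be extracted and written as discrete clocks (frequency, phase, amplitude), the kind of triplets the paper assembles into a 3D clock assembly. The function name detect_clocks and the ranking choices are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): represent the periodic
# events ("clocks") of a 1-D signal as (frequency, phase, amplitude) triplets.
import numpy as np

def detect_clocks(signal, sample_rate, n_clocks=3):
    """Return the n_clocks strongest periodic components of `signal`."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    magnitudes = np.abs(spectrum)
    magnitudes[0] = 0.0                        # ignore the DC component
    top = np.argsort(magnitudes)[-n_clocks:][::-1]
    return [
        {"frequency": float(freqs[i]),              # clock rate (Hz)
         "phase": float(np.angle(spectrum[i])),     # relative phase phi
         "amplitude": float(magnitudes[i])}         # weight of the clock
        for i in top
    ]

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 1000, endpoint=False)
    # Two superposed periodic events, e.g. a leg stride and a body bounce.
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t + 0.7)
    for clock in detect_clocks(x, sample_rate=1000):
        print(clock)
```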

Furthermore, extensively training a learning protocol is one of the greatest problems of automation. Learning requires finding invariants. An invariant is a geometric or mathematical correlation between similar events or objects that remain unchanged within limiting variations. We found no need to train output. Instead, let supramolecular synthesis learn invariants to regenerate only one input using its internal clocks. While doing so, the spontaneous self-assembly takes control, finds the number of nodes and hidden layers required to frame deep neural network, maps the network of invariants for fully unsupervised learning. For faintly-related problems, invariants construct solution paths wirelessly with no training. Answers find the questioner, just opposite to what current computers do [1]. With fully remote operation via microwave input and instant optical read-out, no need to add chemicals or post-analysis of chemical structures (figure 1(A)).

Figure 1.

Figure 1. Organic all-chemical nested deep learning network ON2. (A) The three-phase gel computer explained in three columns. Microwave input reaches the 3D printer through four pre-processing steps (left column) (movie S1); four-layered organic synthesis, the clock assembly CA, and the 4D, 5D and 6D datasets constitute ON2 (middle column); optical read-out follows four post-processing steps (right column). The input cheetah image, converted to X–Y–F (F = frequency of pixel RGB), is fed to the gel precursor solution via an antenna array. Pixels of a leg edge form a loop. Below, the 3D clock assembly ${A_{ijk}}$ holds six primary body parts. Each clock in the 3D clock assembly CA (${\mathbb{R}^3}:\left( {x,y,z} \right) \to \left( {{\phi _1},{\phi _2},{\phi _3}} \right)$) is a positive-definite tensor ${A_{ijk}} = U{e^f}$, $f = \ln\left( {{P^T}} \right)$, where $P$ is the deformation of the input $A_{ijk}^{\prime}$ from the memorized ${A_{ijk}}$. For ON2 (middle), helical nanowire synthesis selects the number of node classes (rows) and the number of hidden layers (columns) that ${A_{ijk}}$ requires to attain a convergent structure. Below ON2 (middle column), four neurons are shown in sequence. Dot products of multinion tensors of weight functions, or invariants, form the 4D (${S_V}({\emptyset _1})$), 5D (${S_P}({\emptyset _2})$) and 6D (${S_L}({\emptyset _3})$) datasets. The weight function of a neuron is itself a network. Post-processing (right column) sorts the angular momentum of photons to find $f(t)$, or the relative phase of the clocks. Since gel clocks are fixed, resonant oscillations of the tree deliver input-like output using the transformation function $f(t)$: $x = \cos t$; $y = \sin t\,\sin^m(t/2)$; $m = 1\!-\!7$. (B) ON2 shown as nested spheres or clocks; each layer of the primary deep learning network holds a new deep learning network. Three coupled deep neural networks run together: $\phi, AF, W, B$ are the 3D clock assembly, activation function, weight and bias, respectively, of a deep neural network; $i, b, S, N, C$ denote invariant, beating, seed, nested loop and condensed loop, respectively. Here $s$ is the layer, or depth, of the deep learning network. $\emptyset _i^{s + 1} = AF( {( {W_i^{s + 1}*\emptyset _b^s} ) + B_i^s} )$ synthesizes invariants as materials; $\emptyset _b^{s + 1} = AF( {( {W_b^{s + 1}*\emptyset _S^s} ) + B_b^s} )$ delivers 4D, 5D and 6D; $\emptyset _S^{s + 1} = AF( {( {W_S^{s + 1}*\emptyset _N^s} ) + B_C^s} )$ delivers CA; $*$ is the dot product. (C) The organic gel stores the structures of the 4D, 5D and 6D invariants as a tree. A red line denotes a spontaneously chosen active classification route or a user-defined route. Reproduced with permission from DepositPhotos. © bronsonlil90 (Nicholas Flowers).
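
The read-out transformation function quoted in the caption, $x = \cos t$, $y = \sin t\,\sin^m(t/2)$ for $m = 1\!-\!7$, can be evaluated directly; the short sketch below simply tabulates the family of curves (plotting is left out, and the sampling density is an arbitrary choice).

```python
# Sketch only: evaluate the caption's read-out transformation
# x = cos(t), y = sin(t) * sin(t/2)**m for m = 1..7.
import numpy as np

def transformation_curve(m, n_points=721):
    t = np.linspace(0.0, 2 * np.pi, n_points)
    x = np.cos(t)
    y = np.sin(t) * np.sin(t / 2.0) ** m
    return x, y

for m in range(1, 8):
    x, y = transformation_curve(m)
    print(f"m={m}: x-range [{x.min():.2f}, {x.max():.2f}], "
          f"y-range [{y.min():.2f}, {y.max():.2f}]")
```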


For better unsupervised learning, layers are often engineered in deep networks [8]. Gel computing is a push–pull loop running between two clock structures, $C{A_f}$ made of field and $C{A_m}$ made of matter; together they make the network's neuron [5]. The two CAs try to match each other by minimizing the resonance-peak difference, or bias $B$, synthesizing at least four layers of hierarchical architectures of field and matter in a chemical beaker. Across the four layers, the field structure simplifies from a 3D clock assembly to a single clock, while matter grows from a single molecule to a supramolecule. Each layer ends by optimizing the field–matter dual structure, and the network $\emptyset _N^s$ sets the optimized pairs as input to the next layer. A 3D clock assembly is a complex tensor; when the nanowires of two layers resonantly couple, the dot product of the two tensors gives the weight $W$, which is also an invariant. The coupled resonance band runs a second neural net, $\emptyset _i^s$, that synthesizes a nanowire cluster for the invariant. Four metastable states, CA, 4D, 5D and 6D, run a third convergence loop, $\emptyset _b^s$, that configures the hidden layers of the neural network, as shown in figure 1(B). The three nested deep networks $\emptyset _N^s - \emptyset _b^s - \emptyset _i^s$, assembled within and above one another, continuously convolute each other's clock assembly, or neuron node, while optimizing the weight tree or invariant tree [9] (see theory in the figure captions). A common activation function $AF$ defines a neuron's state; we replace complex tensors with higher-dimensional multinions $A_{ijk \ldots s}^s$ [10].
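
A toy numerical sketch of the three coupled recurrences of figure 1(B) is given below; the weights, biases and tanh activation are random, software stand-ins for quantities the paper realizes chemically, so the sketch only illustrates how the three nested networks pass states to one another layer by layer.

```python
# Toy numerical sketch of the coupled recurrences phi_S, phi_b, phi_i of
# figure 1(B), updated through a shared activation AF. Weights and biases are
# random stand-ins; the paper realizes them chemically.
import numpy as np

rng = np.random.default_rng(0)
AF = np.tanh                      # shared activation function
dim, depth = 8, 4                 # node count and number of hidden layers

def layer_params():
    return rng.normal(scale=0.5, size=(dim, dim)), rng.normal(scale=0.1, size=dim)

phi_N = rng.normal(size=dim)      # 3-D clock assembly (input layer)
phi_S = rng.normal(size=dim)      # seed / condensed-loop state
phi_b = rng.normal(size=dim)      # beating state
phi_i = rng.normal(size=dim)      # invariant state

for s in range(depth):
    W_S, B_C = layer_params()
    W_b, B_b = layer_params()
    W_i, B_i = layer_params()
    phi_S = AF(W_S @ phi_N + B_C)     # phi_S^{s+1} = AF(W_S * phi_N + B_C)
    phi_b = AF(W_b @ phi_S + B_b)     # phi_b^{s+1} = AF(W_b * phi_S + B_b)
    phi_i = AF(W_i @ phi_b + B_i)     # phi_i^{s+1} = AF(W_i * phi_b + B_i)
    phi_N = phi_S                     # optimized pairs feed the next layer

print("invariant-layer state after", depth, "layers:", np.round(phi_i, 3))
```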

Thus, three deep learning networks governed by a common Hamiltonian $H$ synthesize distinct fiber clusters that emit multiple resonance frequencies with correlated phases as clock assemblies forming points, lines, planes and 3D shapes (figures 1(C) and 2). Together, these constitute the invariant tree. Invariants are the four kinds of geometric shapes, or clock assemblies, that resonantly match the unknown input at a time, creating a tree made of beat frequencies, the beat tree or classification tree. The gel then places internally stored fixed clocks on the tree edges in 3D space such that real-world-like events are regenerated, including further analysis. The network transforms (1) into (2).

Equation (1)

Equation (2)

Figure 2.

Figure 2. Live holographic visuals of supramolecular synthesis as the nested deep learning network ON2. (A) Tunable infra-red (IR) optical signals erase information, i.e. a selected clock assembly CA, by melting nanowires of particular dimensions. SA1 and SA2 are the beat signals produced by the He–Ne laser light reflected and transmitted by the nanowires against the electromagnetic field pattern in the chemical beaker, sensed by Fabry–Pérot photo-detectors acting as spectrum analyzers (SA). (B) The gelator used is (S)-phenyl-tetradecanoylamino-acetic acid methyl ester in hexane solution. A sliced rectangular frame of the cheetah, 12 rows by 22 columns, is fed 12 pixels (one column) at a time using 12 Yagi antennas arranged all around the GT (figure S1). As the cheetah runs, the helices add new pitches as gelator molecules fill the cage of fields. (C) One-to-one correspondence between the pixel intensity of the input data, the resonance bands of the superstructures and the optical vortices when no information is fed; this is the background data. (D) When electromagnetic loops condense to create the first seed supramolecule, the orthogonality between the SA1 and SA2 signals is checked by an oscilloscope (Os). $N$ intertwined loops made of iso-frequency paths in the chemical beaker shift the periodicity ${x_i}$ due to dipole–dipole interactions between helical nanowires, building $N$ vibrating modes, $\emptyset _N^s = N\exp [ - a\sum_{i = 1}^N x_i^2 + 2b\sum_{i,j}^N x_i x_j + \ldots ]$, written as $A_{ijk \ldots s}^s$, a dodecanion tensor. When nanowire-made loops condense, they hold the relative orientation of the loops; the resonant oscillations governing the conformal transition hold the relative phase ${\theta _i}$ between a pair of loops ${K_i}$ and ${K_j}$, building a differential clock with period ${t_i}$, and the $N$ vibrating modes transform to $\emptyset _S^s = \sum_i^N (K_i - K_j)/2\,[\cos^2\theta_i - t_i^2\sin^2\theta_i]^{1/2}$, written as $A_{ijk \ldots s}^{s^{\prime}}$, a dodecanion tensor. The SA1 and SA2 phases make the transformation function $f(t)$; the phase plot is shown to the right. (E) Three rows show the live hologram: as a monochromatic plane-polarized 633.5 nm laser (1 mW, He–Ne) is shone (figure S3), a semiconductor camera captures the live hologram (movie S2). Four levels share elements of periodicity shifts in the hologram following $W_S^s = \alpha (x_1 x_2 + x_3 x_4) + \beta (x_1 x_3 + x_2 x_4) + \gamma (x_1 x_4 + x_2 x_3)$, where $\alpha, \beta, \gamma$ are coupling coefficients. During the four condensations, energy is emitted as ${\Psi _n} = \sum_i^n g_i$, following a phase space with 12 singularity domains in which $n$ holes are open, following $B_C^s = L\hbar g_i = L\hbar (3\sum \cos x_i + 4\prod \cos x_i)$, where $L = l_1 + il_2 + \ldots + sl_{12}$, $l$ is orbital angular momentum, $x_i = \cos\theta _i + e^{il_i\phi _i}\sin\theta _i$, ${\phi _i} = \sum_{s = 1}^n \frac{\partial x_i}{\partial K_i}A_{ijk \ldots s}^s$ is the azimuthal angular momentum, and ${\theta _i}$ is the relative phase between the resonant oscillations of a pair of loops. ${\Psi _n}$ is seen as an optical vortex assembly, a rotating photon condensate. The Hamiltonian driving the system is $H = \emptyset _N^s + \emptyset _S^s + W_S^s + B_C^s$. Equivalent CAs and SEM images of the corresponding structures are shown.


Our computer's operation in equation (2) is a version of the classic Turing machine (TM) presented in equation (1), which entails demonstrating that the machine (1) encodes data, (2) memorizes data, (3) encodes an instruction, (4) executes an instruction, and (5) reads the output. Table T1 displays these five steps for each distinct problem; a single nanowire is a TM and a nanowire assembly is a universal Turing machine, UTM (see the appendix for details).

A review of computational complexity: our computing protocol does not necessitate processing the entire dataset pixel by pixel. Rather, we only require access to recurring events, or clocks; as the clocks run, the 3D clock assembly becomes an automaton (movie S1). While for a conventional computer complexity means replacing 1001 with a longer bit string such as 110001101000001, the gel processor can generate a cube with triangles at its corners and continue to add new structures at the corners of geometric shapes as the resolution or information content increases; we patented this as the geometric musical language, GML [6]. Complexity for the gel computer means many layers of geometric shapes grown within and above one another. The concept is orthogonal to how existing computers increase complexity; GML's advantage is noted below.

For example, if a cube has triangles embedded in all of its 8 corners, the geometric structure processes $8^3$ compositions of variables consuming 8 × 3 = 24 clocks. One nanowire writes 12 clocks (the dodecahedron, the geometric shape of light [11]), so we need 2 nanowires. Adding more complexity, if each corner of a triangle embeds a pentagon, there are ${8^{{3^5}}}\sim {10^7}$ (7 962 624) variations using 24 × 5 = 120 clocks, so we need 10 nanowires. If each pentagon holds a pentagon, we get ${8^{{3^{{5^5}}}}}\sim3 \times {10^{34}}$ variations for 120 × 5 = 600 clocks, so we need 50 nanowires. In general, ${N^{{r^p}}}$ (variations to optimise) $\to N \times r \times p$ (clocks, hence nanowires, needed): $p$ nanowires are used first, the resultant structures integrate $r$ elements, and finally these integrate into $N$ elements. By counting loops or spheres in table T1, one can observe that the gel computer identifies one of 10⁷–10³⁴ possible choices in a chemical beaker [11] within a four-layer deep learning network for almost all the problems. Consequently, irrespective of complexity, movie S1 shows that the pre-processing time (∼2 ms), processing time (∼10–15 min) and post-processing time (∼2 ms) are constant for all problems noted in table T1.
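
The clock and nanowire counts quoted above follow a simple rule (each embedded shape multiplies the clock count by its corner count, and one nanowire writes 12 clocks); the sketch below reproduces the 24/2, 120/10 and 600/50 figures. It is a counting aid only, not the authors' software.

```python
# Sketch of the resource-counting rule in the text: each new layer of embedded
# shapes multiplies the clock count by its corner count, and one helical
# nanowire writes 12 clocks, so nanowires needed = ceil(clocks / 12).
import math

CLOCKS_PER_NANOWIRE = 12  # "One nanowire writes 12 clocks"

def clocks_and_nanowires(corner_counts):
    """corner_counts, e.g. [8, 3] = cube corners each holding a triangle."""
    clocks = math.prod(corner_counts)
    return clocks, math.ceil(clocks / CLOCKS_PER_NANOWIRE)

for shapes in ([8, 3], [8, 3, 5], [8, 3, 5, 5]):
    clocks, wires = clocks_and_nanowires(shapes)
    print(f"shapes {shapes}: {clocks} clocks -> {wires} nanowires")
# shapes [8, 3]:       24 clocks -> 2 nanowires
# shapes [8, 3, 5]:   120 clocks -> 10 nanowires
# shapes [8, 3, 5, 5]: 600 clocks -> 50 nanowires
```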

For digital processing, 'computing complexity' is related to 'lines of code'; programs in C++ or Python are made of loops (e.g. for–next loops) and their complexity varies as ${O^n}$ (online text E). However, the within-and-above growth of gel processing logarithmically reduces the resource demand: for a complexity increase of ${O^n}$, only $n$ resources are required. We have therefore taken an intractable open-challenge problem (confirming whether or not water has been poured into a glass; figures S15 and S16) as a benchmark.

Recent 3D printers use a 3D electromagnetic field distribution in a cavity to print an entire object at once [12]. We built a similar 3D printer in which static or dynamic input data are fed, pixel by pixel, as electromagnetic frequencies of 10–15 nW power through an antenna array into the chemical beaker (figure 2(A); movie S2). The cavity geometry and the frequencies are tuned so that interference from reflections off the cavity walls builds intertwined iso-frequency loops resembling the periodic events, or loops, in the input data (figure 2(B)). It is a large structure. Along each loop path, identical helical nanowires couple through dipole–dipole interactions and form weak bonds. Finally, all the weakly bonded intertwined loops condense into a gel superstructure or seed supramolecule (online text A). A helical nanowire, and every seed made from it, emits phase-correlated electromagnetic signals that can be written as a 3D clock assembly; this is our data structure. Self-assembling gelator molecules absorb it as an energy packet and use it to optimize the lengths, pitches and diameters of the helical nanowires and seed structures (figure S1), so that when we shine monochromatic light to read the data, the time periods and phase gaps of the recurring events in the unknown data are written as two angular momenta and a polarity on the rotating photons, or optical vortices. For complex data with many clocks, the vortices condense into an integrated hologram; by sorting the rings [13], the input video or static data can be retrieved and analyzed (figures S2 and S3). A cheetah video was fed and retrieved (figures 1(A), S4 and S5; movies S3, S4; online text B) to demonstrate all kinds of invariances, position, rotation, size, count and environment, as part of general-purpose computing (movie S6; online text C). Thermal energy of ∼13 Cal used to melt the organic gel drives the computation, but it needs 10–15 nW of continuous electromagnetic signal carrying the input data.
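
As an illustration of this pre-processing step, the sketch below builds a hypothetical X-Y-F conversion table: each pixel's RGB intensity is mapped onto a carrier frequency and one 12-pixel column is dispatched to the 12-antenna array per step, as described for the cheetah frame. The frequency band limits are placeholders, not the authors' calibration.

```python
# Illustrative X-Y-F conversion sketch (frequency band limits are placeholders,
# not the authors' calibration): one image column (12 pixels) is dispatched to
# the 12-antenna array in a single step.
import numpy as np

F_MIN, F_MAX = 1.0e9, 10.0e9          # assumed antenna band, Hz (placeholder)

def pixel_to_frequency(rgb):
    """Map an (R, G, B) triple in [0, 255] to a frequency in [F_MIN, F_MAX]."""
    intensity = float(np.mean(rgb)) / 255.0
    return F_MIN + intensity * (F_MAX - F_MIN)

def column_schedule(frame):
    """Yield, column by column, the 12 frequencies sent to the 12 antennas."""
    rows, cols, _ = frame.shape            # expect 12 rows x 22 columns x RGB
    for x in range(cols):
        yield [pixel_to_frequency(frame[y, x]) for y in range(rows)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, size=(12, 22, 3))   # stand-in cheetah frame
    first_column = next(iter(column_schedule(frame)))
    print([f"{f/1e9:.2f} GHz" for f in first_column])
```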

We have been developing a programmable fractal synthesis for a decade, in which the morphology, speed and reaction kinetics of self-assembly are tuned remotely from a single molecular precursor up to the visible scale [14]. We run a similar supramolecular synthesis so that the gelator solution can build clocks ranging roughly from nanoseconds to hours. Twelve orders of magnitude of temporal range are encoded as loops across six orders of magnitude of spatial variation simply by tuning the basic geometry of the nanowires and seeds (figure 2(C)). The spatio-temporal density of clocks, i.e. the resolution and time range, determines the computing strength of a gel, not the number of fibers. Thus, the concepts of scalability, resources and speed become redundant.

Periodic events hold the intelligence in random instances [15], but no effort had been made to organize clocks into an integrated time structure [16] for advanced computing. The gel emulates intricate relations between periodic events in space and time far from physical reality. The coupling factor ($\alpha, \beta, \gamma$) between periodic events, or loops, determines the separation between clocks in the 3D clock assembly, not their actual physical separation. Fixed coupling offers a protein-folding-like transfer function [17] delivering input-like output (figure 2(D)). Fractal growth shrinks the real world's space and time by a power law for gel processing; the power indices ($P$) are determined by the coupling strength of the periodic events or clocks.

Irrespective of the actual space and time, by choosing the pixel-to-antenna frequency conversion table, the limiting times of the input events are densely packed into the fastest time domain and smallest spatial scale of the gel, so that beating-driven self-assembly fills the slower time domains. By reducing the beating clocks to one, self-assembly converges the computation.
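
A hedged sketch of this packing step is shown below: input event periods are log-rescaled into an assumed fastest clock decade of the gel, leaving the slower decades free for beating-driven self-assembly. The numerical ranges are assumptions for illustration only.

```python
# Hedged sketch of packing an arbitrary input time range into the gel's fastest
# clock domain (the numerical ranges below are assumptions for illustration).
import numpy as np

GEL_FASTEST, GEL_SLOWEST = 1e-9, 3.6e3   # assumed clock periods: ns to hours

def pack_periods(input_periods):
    """Log-rescale input event periods into the gel's fastest decade."""
    p = np.asarray(input_periods, dtype=float)
    lo, hi = p.min(), p.max()
    # Normalize on a log axis, then compress into one decade above GEL_FASTEST,
    # leaving slower gel decades free for beating-driven self-assembly.
    norm = (np.log10(p) - np.log10(lo)) / max(np.log10(hi) - np.log10(lo), 1e-12)
    return GEL_FASTEST * 10.0 ** norm

if __name__ == "__main__":
    # Stride (~0.3 s) and full gallop cycle (~1 s) of a running cheetah.
    print(pack_periods([0.3, 1.0]))
```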

Resonating nanowires in the seed generated by the user-encoded 3D clock assembly try to synchronize. Closely spaced clocks of nearly equal frequency generate beating signals and form a 3D clock assembly internally, in time domains about 10³ times slower than the external input. Two variants of 3D clock structure therefore coexist: one from the external input and thenceforth one from internal beating (figures 1(A) and 2(D)). Once the beating 3D clock assembly becomes the target, the surface potentials of the seed hold the choices. They reorient and collide in astronomically many ways to condense into an optimized supramolecule, growing and shrinking like a feed-forward neural network. Thus, beating assists supramolecular growth by filtering differential clocks from the inner layer emitting a complex wave [18]. Therein, seed structures in a plane synchronize at resonance and vibrate at a single beat frequency (figure 2(D)). Several such planes form a 3D clock assembly with only a few clocks; this is a new target that triggers self-assembly. The resultant seed structure's clocks arrange along discrete lines in space. For each line, we get one effective beat frequency, and a new 3D clock assembly forms the surface of a topology. When it drives self-assembly, the final structure does not beat. Three transitions filter and memorize points, lines, planes and 3D shapes made of clocks. These abstract geometric manifolds (figures 1(C) and 2(E)) of resonance frequencies are the invariants, since fractal growth is equivalent to orthogonal transformations of two tensors representing the resonant clocks of the gel fibers, for all problems (movies S7–S12; figures S6–S16; table T1). ON2 is a type of geometric deep network [19], where extracting differential clocks in the orthogonal space is like extracting a common sphere between overlapping clocks. The number of self-assembly layers required to complete the three transitions depends on the composition of symmetries in the geometry of the input clock assembly; thus, the number of steps needed to reduce the symmetries is the number of hidden layers in its neural network.
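
The beating arithmetic is simple to check: two clocks of nearly equal frequency beat at their difference frequency, which is orders of magnitude slower. The sketch below uses illustrative frequencies to show the ~10³ slow-down mentioned above.

```python
# Sketch: nearly equal clock frequencies beat at their difference frequency,
# producing a much slower internal clock (values are illustrative).
import itertools

clocks_hz = [1.000e6, 1.001e6, 1.003e6]   # closely spaced resonance clocks

for f1, f2 in itertools.combinations(clocks_hz, 2):
    beat = abs(f1 - f2)                   # beat frequency of the pair
    slowdown = f1 / beat                  # how much slower the beat clock runs
    print(f"{f1:.4e} Hz & {f2:.4e} Hz -> beat {beat:.1e} Hz "
          f"(~{slowdown:.0f}x slower)")
```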

Since multiple layers of nanowires grow one above another, the interlayer beating adds azimuthal, or orthogonal, angular momentum in addition to the orbital angular momentum (figure 2(C)). In figure 3, we performed mathematical operations on the gel using microwave input and optical read-out. By diluting the gel solution, the effective separation between nanowires can be increased, giving holograms of just a few nanowires; we thus map the step-by-step synthesis of the fractal growth (figure 2(E)). Since nanowires build a superstructure that acts as the elementary unit of the next layer, we get hierarchical structures; the hierarchical assembly and the corresponding holograms are shown in figures 3(A) and (B). Normally we do not see the differential clocks, the holographic parts exclusive to the invariants. However, by applying suitable ac signals using an external antenna, we can amplify the electromagnetic beating signals at the fractal boundaries of different layers, as shown in figure 3(C). We then orthogonally project two holograms in pairs; 4D and 5D show only the superposed invariants after projection. A similar orthogonal superposition of the 5D and 6D holograms diminishes the holographic part due to the 3D clock assembly; only the invariant parts survive as a static structure in figure 3(D).

Figure 3.

Figure 3. Experimental evidence for the mathematical proof of invariants for the cheetah. (A) Elementary mathematical operations carried out by feeding the cheetah video through the gel precursors as the structures grow from a singlet helical nanowire to a doublet, to a pair of doublets, namely a multiplate (right to left). Four such multiplates build a mesh. At most five meshes could be investigated by adjusting the solvent-precursor ratio, taking a sample from the solution and carrying out SEM (first row, right to left). (B) Below each SEM image, the corresponding optical vortex assembly $\mathbb{Z}$ is shown. (C) In the third row, the wireless antenna is switched ON to amplify a few bands by pumping additional electromagnetic signals, which are the orthogonally transformed vortices ${S_V}({\emptyset _1})$, ${S_P}({\emptyset _2})$ and ${S_L}({\emptyset _3})$. (D) After the orthogonal transformation, by using mirrors and feeding two new electromagnetic signals to the gel, the dot products of ${S_V}({\emptyset _1})$ & ${S_P}({\emptyset _2})$, and of ${S_P}({\emptyset _2})$ & ${S_L}({\emptyset _3})$, are derived. The frequencies are tuned to amplify ${S_V} + {S_P}$ and ${S_L} + {S_P}$; we find a significant disappearance of all clocks other than the invariant elements of the invariant tree. To the left, an image of the cheetah's invariant tree is shown. Two 3D clock assemblies are compared: one memorized (${A_{ijk}}$) and one from the unknown input ($A_{ijk}^{\prime}$) resonating with the memory. The deformation $\eta = (A_{ijk}^{\prime} - {A_{ijk}})/{A_{ijk}}$ of the 3D clock assembly for the train and test datasets is taken as the differential signal $\partial \eta /\partial \emptyset $ along three orthogonal axes ${\emptyset _1}$, ${\emptyset _2}$, ${\emptyset _3}$, in general ${\emptyset _S}$. The plot is ${\mathbb{R}^3}:\left( {x,y,z} \right) \to \left( {{\phi _1},{\phi _2},{\phi _3}} \right)$. The invariant condition: the product of the partial derivatives of $S$ with respect to $\eta $, $\left( {\frac{{\partial {S_{{\phi _1}}}}}{{\partial \eta }}} \right):\left( {\frac{{\partial {S_{{\phi _2}}}}}{{\partial \eta }}} \right)$, vanishes when ${\phi _1} \ne {\phi _2}$ ($A:B = tr\left( {A{B^T}} \right)$, where T denotes the transpose and $tr$ the trace).


The use of antennas to read invariants live prompted us to isolate the key invariant parts of the optical hologram when the cheetah video was fed to the gel (see deconvolution 5). Thus far, explainable AIs and deep learning protocols select classifiers from a given database. Figures 2(D) and 1(C) show that gel superstructures resonantly activate geometrically similar peaks, thus creating new invariants and classifiers to analyze the unknown problem.

Figure 4(A) shows a chart of invariants for the cheetah, where, from the 3D clock assemblies, higher-level invariants were found as new, slower clocks in a network of clock assemblies (following figure 1(C); table T2). Once the gel had been trained by a single-shot cheetah video, we fed videos of different four-legged animals to the gel (movie S6). Since the invariant parts of the holograms are generated by the beating, or interference, of the boundary oscillations of two sub-superstructures of the gel, they can synthesize new invariants or clock assemblies using the memorized ones, as outlined in figure 4(B). Each invariant forms an isolated cluster with a distinct resonance signature, so the clusters connect wirelessly by electromagnetic resonance; no circuit is needed. A new animal fed to the single-shot-trained gel tries to activate its own invariant triplet. The differences between the two trees, expressed as beating, activate the matching invariants, which form a new invariant tree (figures S9 and S13). On the derived tree edges, matching clocks stored inside are placed to create a new 3D clock assembly, the output (figures 1(C) and 4(C)). Thus, the gel composes a problem-specific invariant network and adds new clocks to analyze the problem without training.
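
The following sketch illustrates this resonance-based invariant matching in software terms: each memorized invariant carries a resonance frequency, and a test clock activates the closest memorized invariant within a beating tolerance, producing the edges of a new tree. The invariant names, frequencies and tolerance are hypothetical.

```python
# Hedged sketch of resonance-based invariant matching. Names and values are
# illustrative only; the gel performs this step physically, not in software.
MEMORIZED = {"leg-loop": 2.4e6, "spine-arc": 1.1e6, "tail-swing": 0.6e6}
TOLERANCE = 0.05                      # 5 % relative detuning still resonates

def match_invariants(test_clocks_hz):
    tree_edges = []
    for f_test in test_clocks_hz:
        name, f_mem = min(MEMORIZED.items(), key=lambda kv: abs(kv[1] - f_test))
        if abs(f_mem - f_test) / f_mem <= TOLERANCE:
            tree_edges.append((name, f_test, abs(f_mem - f_test)))  # beat term
    return tree_edges

# Clocks extracted from an animal the gel was never trained on.
print(match_invariants([2.35e6, 1.14e6, 0.9e6]))
```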

Figure 4.

Figure 4. Invariant-tree-driven synthesis of new classifiers as a dedicated analysis protocol. (A) Four columns show the four phases of classification used to analyze events without training. The first column shows single-shot learning of running-cheetah dynamics by a gel. The second column shows the gel spontaneously determining classifiers in its vortex assembly. The third column shows different classifiers being combined to synthesize new sets of classifiers. The fourth column shows how the synthesized classifiers are used to analyze events for which the gel was not trained (less-than-one-shot, or LO-shot, learning; movie S7). The gel identifies the classifiers spontaneously and sorts the test sets by solving the clique problem: train and test structures are matched. The classification efficiency score is a combination of three parameters (online tables T2–T8): the ratio of maximum overlapping area between the input and the output, the ratio of the difference in clocks used relative to the number of clocks used, and the weighted average angular difference between the different geometric planes. The 3D geometric-shape mismatch is estimated in the detection score. (B) Live growth in the gel superstructure: the synthesis of combinatorial classes adopts distinct structural symmetries that couple by electromagnetic resonance of frequency triplets (${\upsilon _1}$, ${\upsilon _2}$, ${\upsilon _3}$). (C) Invariant matching is shown by superposing the train and test 3D clock assemblies. It starts from ${S_V}$, transcends to ${S_P}$, and finally reaches ${S_L}$; then the nested loops run. Differential signals $\partial \eta /\partial \emptyset $ are taken along three orthogonal axes ${\emptyset _1}$, ${\emptyset _2}$, ${\emptyset _3}$, in general ${\emptyset _S}$.
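
Since the caption names the three ingredients of the classification efficiency score but not their exact weighting, the sketch below combines them with equal weights purely for illustration.

```python
# Illustrative combination of the three score ingredients named in the caption.
# The exact weighting used by the authors is not specified; equal weights are
# an assumption here.
def classification_score(overlap_ratio, clock_diff_ratio, mean_angle_diff_deg):
    """
    overlap_ratio       : max overlapping area of input vs output, in [0, 1]
    clock_diff_ratio    : |clocks differing| / clocks used, in [0, 1]
    mean_angle_diff_deg : weighted mean angular mismatch between planes, 0-180
    """
    angle_penalty = mean_angle_diff_deg / 180.0
    # Higher overlap is good; clock and angular mismatch are penalties.
    return (overlap_ratio + (1.0 - clock_diff_ratio) + (1.0 - angle_penalty)) / 3.0

# Example: 85 % overlap, 10 % of clocks differ, 12 degrees mean plane mismatch.
print(f"{classification_score(0.85, 0.10, 12.0):.3f}")
```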


Thus far, automatons have failed to discover generic invariants because the variables governing complex events in nature are interconnected in a tree-like dynamic network [20]. A deep learning tool has to discover that dynamic invariant network grown within and above itself. An algorithm would have to explore the combinations, whereas the power of clocks to self-assemble optimizes these fractally connected astronomical variations. In the chemical beaker, geometric shapes made of resonance peaks are stored. They find their own way to arrange, emulating the composition of complex shapes in a problem by resonance; this is the inverse of computing. By that, the gel solves an intractable clique problem [21]. Fractal growth initiates a fractal resonance chain, but clique matching is assisted by the differential clocks. In the 1960s, deformation was filtered along three orthogonal axes to find invariants [22]; the organic gel does the same here (convolution for belief, 9), but with a difference. It finds invariants of the differential clocks along one axis for the 4D database; its orthogonal 5D database is then made of 4D invariants. Thus, clique error in 4D is minimized in 5D.
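
The invariance test sketched in figure 3, a deformation $\eta = (A^{\prime} - A)/A$ and a vanishing Frobenius product $A:B = tr(AB^T)$ between response gradients along different axes, can be written out numerically as below; the gradient matrices are random stand-ins, so the sketch only shows the bookkeeping, not the gel's physics.

```python
# Numerical sketch of the invariance test in figure 3: deformation
# eta = (A_input - A_memorized) / A_memorized, and two response gradients are
# "invariant-compatible" when their Frobenius inner product tr(A B^T) vanishes.
# The gradient matrices here are random stand-ins for dS_phi/d_eta.
import numpy as np

def deformation(a_memorized, a_input):
    return (a_input - a_memorized) / a_memorized

def frobenius(a, b):
    return float(np.trace(a @ b.T))       # A : B = tr(A B^T)

rng = np.random.default_rng(2)
A_mem = rng.uniform(0.5, 1.5, size=(3, 3))               # memorized assembly
A_in = A_mem * (1.0 + 0.05 * rng.normal(size=(3, 3)))    # slightly deformed input
eta = deformation(A_mem, A_in)
print("max |eta| =", float(np.abs(eta).max()))

dS_phi1 = rng.normal(size=(3, 3))         # stand-in gradient along phi_1
dS_phi2 = rng.normal(size=(3, 3))         # stand-in gradient along phi_2
print("tr(dS1 dS2^T) =", round(frobenius(dS_phi1, dS_phi2), 3),
      "(should approach 0 for orthogonal, invariant axes)")
```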

To prove that extracting three orthogonal invariants from the spatial assembly of clocks constitutes general-purpose computing, nine open-challenge problems in AI were selected. The gel detected invariant trees for nine different static and dynamic inputs, for problems ranging from genetic data for diabetes, coronavirus and swarm intelligence to the complex composition of classical songs, the open classification challenge of pouring water, and detecting the face of a Japanese lady (figures S6–S16; movies S7–S12). In all cases, after a single training, the gel classified unknown events naturally, or the user could choose an exclusive classifier. Tables outlining the high classification scores are shown online (tables T2–T8). The gel naturally labels objects, classes and invariants using the distinct time ranges of its neurons, and stores them for decades without refreshing.

The same gel was melted and reused for all the problems. The computation times for all problems were essentially identical, ∼10 min. To enhance the optical resolution, we use well-established methods to break the diffraction barrier [23, 24]; see supporting online video 2 of our recent report [9] for the evanescent-wave-induced amplification of a nanowire-generated 3D vortex structure. One could solve multiple problems in one gel at a time, in a single shot, so multiple labeling or co-synthesizing varied data types is feasible. Thus far, parallelism has meant sequential hardware arranged in parallel; here, it is fractal deep learning: layers of neural nets grow within and above one another, not side by side.

Acknowledgments

The authors acknowledge the Asian Office of Aerospace R&D (AOARD), a part of the United States Air Force (USAF), for Grant No. FA2386-16-1-0003 (2016–2019) on the electromagnetic resonance-based communication and intelligence of biomaterials.

Data availability statement

All data that support the findings of this study are included within the article (and any supplementary files).


Author contributions

A B conceptualized the research; P Sa did the experiments and data analysis; P S, K S and S G assisted in background data development; P Sa analyzed the results and plotted the data; A B and P Sa wrote the paper; R P S, R B, J P H and T N reviewed the optical vortex studies.

Conflict of interest

The authors declare no competing interests.

Appendix: Turing completeness analysis from a computer science perspective

In the six classes of problems summarized in table T1 online, the input video or static dataset is first converted into a 3D clock assembly written as an electromagnetic spectrum. Although it looks different from bits, the input 3D clock assembly is the first TM (see the tape in figure 2(E)), and the clock/vortex parameters define the cell states of a Turing tape that is processed by a nanowire. Like a TM, a nanowire can read and write data to an arbitrarily long sequence of clocks, or tape, by changing its geometric parameters (figure 2(C)), which act as its memory. The output can be read from the refracted and transmitted optical vortices of a helical nanowire (one clock = one vortex = one variable [11]).

The output of a nanowire TM is a 3D assembly of optical vortices sent as input to the other members of the nanowire assembly [11]. As multiple nanowire TMs reorient into a 3D arrangement following Hasse's law, the necessary Turing-tape manipulations are made via essential state transitions to derive an invariant for an accurate output. The nanowire assembly can perform a sequence of instructions a certain number of times, or until a certain condition is met, satisfying the criteria of a UTM. The loop counter for the UTM is determined by the ratio of coupled vortices, or the number of phase singularity points, in the optical vortex assembly. Its grammar is listed in figure S13, and a musical problem like that of figure S6 is a perfect example, because without a high-precision counter there is no music; all one would get is noise.

Moreover, a nanowire assembly can process long stretches of vortex assemblies generated by single nanowires or smaller nanowire assemblies without the need for external input, much like a UTM that computes following Hasse's law without external control. Thus, the nanowire assembly is a TM that runs the TMs made of single nanowires, similar to ribosomes running mRNA, satisfying the criterion of a UTM [25, 26]. The running of part of an algorithm is shown in figures S15 and S16, where a benchmark open challenge was solved. The gel ran multiple TMs, made of isolated clusters, inside a global TM, yet addressed all the specific features of the local issues.

Furthermore, all the problems in table T1 online have two phases: solving one part of a problem, and using the derived 3D nanowire assembly, or processing circuit, to solve a faintly associated but new problem for which it was not trained (figures S9, S10, S11 and S12). Thus, the nanowire assembly is a Turing-complete generic algorithm processor and can compute any computable function, as listed in figure S13.

Finally, to be a true UTM, the nanowire assembly must perform Boolean algorithms. While reading the vortex assembly generated by a single nanowire TM, the nanowire assembly measures the polarization of all clocks, the density and sum of the phase singularity points, and the 3D coordinates of the vortices, and adjusts its own configuration to include the suitable symmetries. The polarization of the vortices, i.e. the clockwise (1) and anticlockwise (0) rotation of an optical vortex, is essentially used here to encode Boolean codes: instead of (0, 1), it is a rotating circle or polarized optical vortex. The phase singularity points on a vector vortex beam, the perimeter of the ring of light, count the number of loops to run when executing a program, so the UTM has a counter for an instructional or conditional halt (see the falcon attacking birds, figure S12). The polarization of the vortices leads to constructive and destructive interference, which ensures logic-gate-like Boolean operation and lets the system execute a conditional stop, run, branch out, or wait for a value to arrive (figure S13).
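
A software caricature of this Boolean read-out is given below: clockwise rotation encodes 1, anticlockwise encodes 0, same-sense superposition is taken as constructive and opposite-sense as destructive, and the phase-singularity count serves as the loop counter. Reading 'constructive = 1, destructive = 0' makes the pair behave like an XNOR gate, which is an illustrative assumption rather than a claim from the paper.

```python
# Hedged sketch of the Boolean read-out described in the appendix. The gate
# assignment (constructive = 1, destructive = 0, i.e. XNOR-like behavior) is an
# illustrative assumption, not a claim from the paper.
CW, CCW = 1, 0

def vortex_interference(a, b):
    """Return 1 for constructive (same rotation sense), 0 for destructive."""
    return 1 if a == b else 0

def loop_counter(singularity_points):
    """The number of phase-singularity points sets how many times to loop."""
    return len(singularity_points)

for a in (CCW, CW):
    for b in (CCW, CW):
        print(f"{a} ? {b} -> {vortex_interference(a, b)}")

print("iterations:", loop_counter([(0.1, 0.2), (0.4, 0.4), (0.7, 0.1)]))
```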

Previous reports have described how to write the data formats of various existing computers into a 3D clock assembly (online text D; chapter 4 of [27]). Consequently, any algorithm can be written using a 3D clock assembly, and its distinctive features are processed as a separate engine in the nanowire assembly, or UTM, in the chemical beaker. The reader should note, however, that a true UTM cannot exist in principle; all so-called UTMs satisfy the criterion only partially.


MovieS1Preprocessing (11.5 MB MP4)

MovieS2ComputerSetUp (13.1 MB MP4)

MovieS3Cheetah7steps (4.5 MB MP4)

MovieS4CheetahAmplified (1.7 MB AVI)

MovieS5Invariant4D5D6D (10.9 MB MP4)

MovieS6Cheetah1train13test (32.3 MB AVI)

MovieS7NineproblemSum (4.7 MB AVI)

MovieS8Dog1train8test (4.3 MB AVI)

MovieS9LadyFace1train8test (0.5 MB AVI)

MovieS10YamanSongs (18.6 MB MP4)

MovieS11WorkerMult (2.1 MB AVI)

MovieS12Starling (2.1 MB AVI)

OnlineTextFigureTable (2.5 MB PDF)