A general-purpose organic gel computer that learns by itself

To build energy-minimized superstructures, self-assembling molecules explore astronomical numbers of options, colliding ∼10⁹ times per second. Thus far, no computer has fully exploited this ability to optimize choices and execute advanced computational theories purely by synthesizing supramolecules. To realize this, we first remotely rewrote the problem in a language that supramolecular synthesis comprehends. An all-chemical neural network then synthesizes one helical nanowire for each periodic event. These nanowires self-assemble into gel fibers that map intricate relations between periodic events in any data type, and the output is read instantly from an optical hologram. For each problem, the number of self-assembling layers, or neural network depth, is optimized to chemically simulate theories that discover invariants for learning. Synthesis alone then solves classification and feature-learning problems instantly with single-shot training. The reusable gel begins general-purpose computing that chemically invents suitable models for problem-specific unsupervised learning. Keeping computing time and power fixed irrespective of complexity, the gel promises a toxic-hardware-free world. One-sentence summary: fractally coupled deep learning networks revisit Rosenblatt's 1950s theorem on deep learning networks.


Main text
Deep learning computers are revolutionizing human civilization by accurately optimizing user-conceived solution paths, figuring out the shortest path to the expected solution through extensive training [1]. Their hallmarks are switches and circuits. Their demands are ever-increasing speed and resources, compromising nature with enormous toxic waste. The next revolution would bring computers that synthesize new deep networks and invent learning protocols in a single shot, or without training. Their hallmarks would be a new data structure and software-free, circuit-free, fully analog, reusable hardware adaptive to a changing environment [2]. Their demands would be fixed computing speed and resources irrespective of complexity, at the cost of the user's control. Realizing all of this, we present an organic nested deep learning network, ON².
Thus far, analog computers have emulated either processor parts or mathematical operations [3, 4]. In contrast, the gel-based general-purpose analog computer ON² executes an entire self-learning theory step by step. Precisely, we had to invent a deep learning network that operates in a chemical beaker, converging a mathematical simulation using the astronomical optimization power of supramolecular synthesis. It is impossible to read rapidly changing big data and instantly invent a suitable model. But when self-assembling molecules are encoded with computational choices, their collisions explore ∼10⁹ options per second. Optimization happens almost instantly, irrespective of complexity. This ability has remained unused because we did not know how to encode and decode computational choices to reactant molecules reversibly. Inspired by the clock assemblies of proteins and neurons [5], we invented a language to encode supramolecules [6]. We found no need to read the whole data bit by bit, but only periodic events or loops as clocks, such that the whole data could largely be regenerated by assembling discrete clocks suitably in 3D space. A clock is a variable; a 3D clock assembly, CA, maps all possible relations between all parameters (movie S1); it is a theoretical model invented in a chemical beaker. It represents geometric manifolds whose corners are resonance frequencies [7]. The new data structure (ℝ³: (x, y, z) → (ϕ₁, ϕ₂, ϕ₃)) enables encoding a problem as a geometric shape into a chemical reaction using the surface potential of the molecular assembly.
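As a software analogy only, the clock-based data structure can be sketched as a minimal Python class. The names `Clock`, `ClockAssembly` and the nearest-clock read-out are hypothetical illustrations; the actual encoding is chemical, via surface potentials.

```python
from dataclasses import dataclass, field

@dataclass
class Clock:
    """One periodic event (loop) extracted from the input data."""
    period: float    # seconds
    phase: float     # radians
    position: tuple  # (x, y, z) location in the assembly

@dataclass
class ClockAssembly:
    """3D clock assembly: R^3 -> (phi1, phi2, phi3).

    Maps each spatial point of the input to three phase angles,
    so relations between periodic events become geometry."""
    clocks: list = field(default_factory=list)

    def add(self, clock: Clock):
        self.clocks.append(clock)

    def phases_at(self, x, y, z):
        # Hypothetical read-out: phases of the three nearest clocks.
        nearest = sorted(self.clocks,
                         key=lambda c: sum((a - b) ** 2 for a, b in
                                           zip(c.position, (x, y, z))))[:3]
        return tuple(c.phase for c in nearest)

ca = ClockAssembly()
ca.add(Clock(period=1.0, phase=0.0, position=(0, 0, 0)))
ca.add(Clock(period=0.5, phase=1.57, position=(1, 0, 0)))
ca.add(Clock(period=0.25, phase=3.14, position=(0, 1, 0)))
print(ca.phases_at(0, 0, 0))  # three phase angles of the nearest clocks
```

The point of the sketch is only that a problem becomes a geometric arrangement of phases, not a bit string.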
Furthermore, the extensive training a learning protocol requires is one of the greatest problems of automation. Learning requires finding invariants. An invariant is a geometric or mathematical correlation between similar events or objects that remains unchanged within limiting variations. We found no need to train the output. Instead, we let supramolecular synthesis learn invariants to regenerate only one input using its internal clocks. While doing so, spontaneous self-assembly takes control, finds the number of nodes and hidden layers required to frame the deep neural network, and maps the network of invariants for fully unsupervised learning. For faintly related problems, invariants construct solution paths wirelessly, with no training. Answers find the questioner, just the opposite of what current computers do [1]. Operation is fully remote, via microwave input and instant optical read-out; there is no need to add chemicals or to post-analyze chemical structures (figure 1(A)).
For better unsupervised learning, layers are often engineered in deep networks [8]. Gel computing is a push-pull loop that runs between two clock structures, CA_f made of field and CA_m made of matter; together they make the network's neuron [5]. Both CAs try to match each other by minimizing the resonance-peak difference, or bias B, synthesizing at least four layers of hierarchical architectures of field and matter in a chemical beaker. Across the four layers, the field structure simplifies from a 3D clock assembly to a single clock while matter grows from a single molecule to a supramolecule. Each layer ends by optimizing field-matter dual structures, and the network ∅ₛᴺ sets the optimized pairs as input to the next layer. A 3D clock assembly is a complex tensor; when nanowires of two layers resonantly couple, the coupling is a dot product of two tensors, which gives the weight W, itself an invariant. The coupled resonance band runs a second neural network, ∅ₛⁱ, that synthesizes a nanowire cluster for each invariant. Four metastable states, CA, 4D, 5D and 6D, run a third convergence loop, ∅ₛᵇ, that configures the hidden layers of the neural network, as shown in figure 1(B). Three nested deep networks, ∅ₛᴺ-∅ₛᵇ-∅ₛⁱ, assembled within and above, continuously convolute each other's clock assembly or neuron node while optimizing the weight tree or invariant tree [9] (see theory in the figure captions). A common activation function, AF, defines a neuron's state; we replace complex tensors with higher-dimensional multinions A^s_{ijk...s} [10]. Thus, three deep learning networks governed by a common Hamiltonian H synthesize distinct fiber clusters that emit multiple resonance frequencies with correlated phases, as clock assemblies forming point, line, plane and 3D shapes (figures 1(C) and 2). Together, these form the invariant tree. Invariants are four kinds of geometric shapes, or clock assemblies, that resonantly match the unknown input at a time, creating a tree made of beat frequencies: the beat tree or classification tree. Then the gel puts internally stored fixed clocks on
the tree-edges in a 3D space such that real-world-like events are regenerated, including further analysis. The network transforms (1) into (2):

Output = activation_function(dot_product(weights, inputs) + bias)    (1)

Output = activation_function(dot_product(invariant_tree, fixed_clocks) + bias_tree)    (2)

Our computer's operation in equation (2) is a version of the classic Turing machine, TM, presented in equation (1), which entails demonstrating that the machine (1) encodes data, (2) memorizes data, (3) encodes an instruction, (4) executes an instruction, and (5) reads the output. Table T1 displays the five steps for each distinct problem; a single nanowire is a TM and a nanowire assembly is a universal Turing machine, UTM (see Appendix for details).
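The step from (1) to (2) can be illustrated numerically. The vectors below are hypothetical placeholders; the only point is that the scalar bias becomes a bias tree, so one neuron emits one value per tree edge rather than a single scalar.

```python
import numpy as np

def activation(x):
    return np.tanh(x)  # any bounded activation works for the sketch

# (1) Classic neuron: output = AF(weights . inputs + bias)
weights = np.array([0.5, -0.2, 0.8])
inputs  = np.array([1.0, 2.0, 0.5])
bias    = 0.1
out_classic = activation(np.dot(weights, inputs) + bias)

# (2) Gel version: weights -> invariant tree, inputs -> internally
# stored fixed clocks, scalar bias -> bias tree (one bias per edge).
invariant_tree = np.array([0.5, -0.2, 0.8])  # resonance invariants
fixed_clocks   = np.array([1.0, 2.0, 0.5])   # stored clocks
bias_tree      = np.array([0.1, 0.0, -0.1])  # per-edge bias
out_gel = activation(np.dot(invariant_tree, fixed_clocks) + bias_tree)

print(out_classic)  # a scalar
print(out_gel)      # a vector: one value per tree edge
```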
A review of computational complexity: our computing protocol does not necessitate processing the entire dataset pixel by pixel. Rather, we only require access to recurring events or clocks; as the clocks run, the 3D clock assembly becomes an automaton (movie S1). For a conventional computer, increased complexity means, say, using a longer bit string 110001101000001 instead of 1001; the gel processor instead generates a cube with triangles in its corners and continues to add new structures in the corners of geometric shapes as the resolution or volume of information content increases. We patented this as the Geometric Musical Language, GML [6]. Complexity for the gel computer means many layers of geometric shapes grown within and above. The concept is orthogonal to the existing computer's notion of increasing complexity; GML's advantage is noted below.
The organic gel stores structures of 4D, 5D and 6D invariants as a tree. A red line denotes a spontaneously chosen active classification route or a user-defined route. Reproduced with permission from DepositPhotos. © bronsonlil90 (Nicholas Flowers).
For example, if a cube has triangles embedded in all of its 8 corners, the geometric structure processes 8³ compositions of variables consuming 8 × 3 = 24 clocks. One nanowire writes 12 clocks (a dodecahedron, the geometric shape of light [11]), so we need 2 nanowires. Adding more complexity, if each corner of a triangle embeds a pentagon, we get N₈,₃,₅ ∼ 10⁷ (7,962,624) variations using 24 × 5 = 120 clocks, and we need 10 nanowires. If each pentagon carries a further pentagon, we get N₈,₃,₅,₅ ∼ 3 × 10³⁴ variations for 120 × 5 = 600 clocks, and we need 50 nanowires. In general, N_{r,p} variations to optimise require N × r × p clocks and correspondingly many nanowires; p nanowires are used first, the resultant structures integrate r elements, and finally they integrate into N elements. By counting loops or spheres in table T1, one can observe that the gel computer identifies one of 10⁷-10³⁴ possible choices in a chemical beaker [11] in a four-layer deep learning network for almost all problems. Consequently, irrespective of complexity, movie S1 shows that the pre-processing time (∼2 ms), processing time (∼10-15 min) and post-processing time (∼2 ms) are constant for all problems noted in table T1.
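The clock and nanowire counts above reduce to simple arithmetic, since one nanowire writes 12 clocks. A minimal sketch (the variation counts themselves are quoted from the text, not derived here):

```python
# Each nanowire writes 12 clocks (dodecahedron, the geometric
# shape of light).
CLOCKS_PER_NANOWIRE = 12

def nanowires_needed(total_clocks):
    # Round up: a partially used nanowire still counts.
    return -(-total_clocks // CLOCKS_PER_NANOWIRE)

# Cube with a triangle in each of its 8 corners: 8 x 3 = 24 clocks.
print(nanowires_needed(8 * 3))    # 2
# A pentagon in each triangle corner: 24 x 5 = 120 clocks.
print(nanowires_needed(24 * 5))   # 10
# A pentagon on each pentagon: 120 x 5 = 600 clocks.
print(nanowires_needed(120 * 5))  # 50
```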
For digital processing, 'computing complexity' is related to 'lines of code'; programs in C++ or Python are made of loops (e.g. for-next loops) and their complexity varies as Oₙ (online text E). However, the within-and-above growth of gel processing logarithmically reduces the resource demand, whereas a conventional computer requires n resources for a complexity of Oₙ. We therefore took an intractable open challenge problem (confirming whether or not water is dropped into a glass; figures S15 and S16) as a benchmark. Recent 3D printers use the 3D electromagnetic field distribution in a cavity to print an entire object at once [12]. We built a similar 3D printer in which we feed input static or dynamic data pixel by pixel, as electromagnetic frequencies of 10-15 nW power, through an antenna array into the chemical beaker (figure 2(A); movie S2). The cavity geometry and the frequencies are tuned so that interference from reflections off the cavity walls builds intertwined iso-frequency loops resembling the periodic events or loops in the input data (figure 2(B)). It is a large structure. Along the loop path, identical helical nanowires couple through dipole-dipole interactions and form weak bonds. Finally, all weakly bonded intertwined loops condense into a gel superstructure, or seed supramolecule (online text A). A helical nanowire, and every seed made from it, emits phase-correlated electromagnetic signals that can be written as a 3D clock assembly. This is our data structure. Self-assembling gelator molecules absorb it as an energy packet and use it to optimize the length, pitch and diameter of the helical nanowires and seed structures (figure S1), so that when we shine monochromatic light to read the data, the time periods and phase gaps of recurring events in the unknown data are written as two angular momenta and a polarity on the rotating photons, or optical vortices. For complex data with many clocks, the vortices condense into an
integrated hologram; by sorting its rings [13], the input video or static data can be retrieved and analyzed (figures S2 and S3). A cheetah video is fed and retrieved (figures 1(A), S4 and S5; movies S3, S4; online text B) to demonstrate all kinds of invariance (movie S5): position, rotation, size, count and environment, as part of general-purpose computing (movie S6; online text C). Thermal energy of ∼13 Cal used to melt the organic gel drives the computation, but it needs 10-15 nW continuous em signals carrying the input data.
We have been developing a programmable fractal synthesis for a decade, in which the morphology, speed and reaction kinetics of self-assembly are tuned remotely, from a single molecular precursor to the visible scale [14]. We run a similar supramolecular synthesis so that the gelator solution can build clocks ranging roughly from nanoseconds to hours. Twelve orders of magnitude of temporal range (10¹²) are encoded as loops across six orders of spatial variation (10⁶) simply by tuning the basic geometry of the nanowires and seeds (figure 2(C)). The spatio-temporal density of clocks, i.e. the resolution and time range, determines the computing strength of a gel, not the number of fibers. Thus, the concepts of scalability, resources and speed become redundant.
Periodic events hold intelligence in random instances [15], but no effort had been made to organize clocks into an integrated time structure [16] for advanced computing. The gel emulates intricate relations of periodic events in space and time far from physical reality. The coupling factor (α, β, γ) between periodic events or loops determines the physical separation between clocks in the 3D clock assembly, not their actual separation. Fixed coupling offers a protein-folding-like transfer function [17] delivering input-like output (figure 2(D)). Fractal growth shrinks the real world's space and time by a power law for gel processing. The power indices (P) are determined by the coupling strength of the periodic events or clocks.
Irrespective of the actual space and time, by choosing a pixel-to-antenna-frequency conversion table, the limiting times of the input events are densely packed into the fastest time domain and smallest spatial scale of the gel, so that beating-driven self-assembly fills the slower time domains. By reducing the beating clocks to one, self-assembly converges the computation.
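A minimal sketch of such a pixel-to-antenna-frequency conversion table; the microwave band and the linear mapping are illustrative assumptions, not the tuned table used in the experiments:

```python
def pixel_to_frequency(intensity, f_min=1.0e9, f_max=18.0e9):
    """Map an 8-bit pixel intensity onto an assumed microwave band.

    A linear map is assumed here; the real table is tuned so the
    fastest input events land in the gel's fastest time domain."""
    if not 0 <= intensity <= 255:
        raise ValueError("8-bit intensity expected")
    return f_min + (f_max - f_min) * intensity / 255

# A 12-pixel column fed at a time (one pixel per antenna).
column = [0, 32, 64, 96, 128, 160, 192, 224, 255, 17, 45, 210]
freqs = [pixel_to_frequency(p) for p in column]
print(freqs[0], freqs[8])  # band edges for intensities 0 and 255
```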
Resonating nanowires in the seed generated by the user-encoded 3D clock assembly try to synchronize. Closely spaced, nearly similar-frequency clocks generate beating signals and form a 3D clock assembly internally, in time domains 10³ orders slower than the external input. So, two variants of 3D clock structures coexist: one from the external input and, thenceforth, one from internal beating (figures 1(A) and 2(D)). Once the beating 3D clock assembly becomes a target, the surface potentials of the seeds hold the choices. They reorient and collide in astronomically many ways to condense into the optimized supramolecule, growing and shrinking like a feed-forward neural network. Thus, beating assists supramolecular growth by filtering differential clocks from the inner layer emitting a complex wave [18]. Therein, seed structures in a plane synchronize at resonance and vibrate at a single beat frequency (figure 2(D)). Several such planes form a 3D clock assembly with a few clocks. This is a new target that triggers self-assembly. The resultant seed structure's clocks arrange along discrete lines in space. For each line, we get one effective beat frequency, and a new 3D clock assembly forms the surface of a topology. When it drives self-assembly, the final structure does not beat. These three transitions filter and memorize point, line, plane and 3D shapes made of clocks. These abstract geometric manifolds (figures 1(C) and 2(E)) of resonance frequencies are invariants, since fractal growth is equivalent to orthogonal transformations of two tensors representing the resonant clocks of the gel fibers, for all problems (movies S7-S12; figures S6-S16; table T1). ON² is a type of geometric deep network [19], where extracting differential clocks in the orthogonal space is like extracting a common sphere between overlapping clocks. The number of self-assembly layers required to complete the three transitions depends on the composition of symmetries in the geometry of the input clock assembly. Thus, the number of steps needed to reduce the symmetries is the number of hidden layers in its neural network.
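The convergence just described, where closely spaced clocks repeatedly merge into one effective beat until a single clock remains, can be caricatured as iterative coarse-graining. The frequencies and merge thresholds below are hypothetical; the loop count stands in for the number of hidden layers.

```python
def one_beat_layer(freqs, threshold=0.1):
    """Merge closely spaced frequencies into one effective clock.

    Each self-assembly layer replaces a group of nearly equal
    clocks with their mean; the small difference between them is
    the beat that drives the next, slower layer."""
    freqs = sorted(freqs)
    merged, group = [], [freqs[0]]
    for f in freqs[1:]:
        if f - group[-1] <= threshold:
            group.append(f)  # same beating cluster
        else:
            merged.append(sum(group) / len(group))
            group = [f]
    merged.append(sum(group) / len(group))
    return merged

clocks = [1.00, 1.05, 1.08, 2.00, 2.04, 5.00]
layers = 0
while len(clocks) > 1:
    # Each layer operates in a slower time domain, so the
    # merge threshold widens (assumed factor of 10 per layer).
    clocks = one_beat_layer(clocks, threshold=0.1 * 10 ** layers)
    layers += 1
print(layers, clocks)  # layer count and the single surviving clock
```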
Since multiple layers of nanowires grow one above another, the interlayer beating adds azimuthal or orthogonal angular momentum in addition to orbital angular momentum (figure 2(C)). In figure 3 we performed mathematical operations on the gel using microwave input and optical read-out. By diluting the gel solution, the effective separation between nanowires can be increased, deriving holograms for a few nanowires; we thereby map the step-by-step synthesis of fractal growth (figure 2(E)). Since the nanowires build a superstructure that acts as an elementary unit for the next layer, we get hierarchical structures. The hierarchical assembly and the corresponding holograms are shown in figures 3(A) and (B). Normally we do not see the differential clocks, or the holographic parts exclusive to invariants. However, by applying suitable ac signals with an external antenna, we can amplify the electromagnetic beating signals at the fractal boundaries of different layers, as shown in figure 3(C). Then we orthogonally project two holograms in pairs; 4D and 5D show only superposed invariants post projection. A similar orthogonal superposition of the 5D and 6D holograms diminishes the holographic part due to the 3D clock assembly. Only the invariant parts survive as a static structure in figure 3(D).
The use of antennas to read invariants live prompted us to isolate key invariant parts of the optical hologram when the cheetah video was fed to the gel (see deconvolution 5). Thus far, explainable AIs and deep learning protocols select classifiers from a given database. Figures 2(D) and 1(C) show that gel superstructures resonantly activate geometrically similar peaks, thus creating new invariants and classifiers to analyze the unknown problem.
Figure 4(A) shows a chart of invariants for the cheetah, where, starting from the 3D clock assemblies, higher-level invariants were found as new, slower clocks in a network of clock assemblies (following figure 1(C); table T2). Once we had a gel trained by a single-shot cheetah video, we fed videos of different four-legged animals to the gel (movie S6). Since the invariant parts of the holograms are generated by beating, or interference of the boundary oscillations of two sub-superstructures of the gel, they can synthesize new invariants or clock assemblies using the memorized ones, as outlined in figure 4(B). Each invariant forms an isolated cluster with a distinct resonance signature; hence they become wirelessly connected by electromagnetic resonance. No circuit is needed. A new animal fed to a single-shot-trained gel tries to activate its own invariant triplet. The differences between the two trees, expressed in beating, activate matching invariants, which form a new invariant tree (figures S9 and S13). On the derived tree edges, matching clocks stored inside are placed to create a new 3D clock assembly, or output (figures 1(C) and 4(C)). Thus, the gel composes a problem-specific invariant network and adds new clocks to analyze the problem without training.
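The resonant activation of a stored invariant triplet by a new input can be sketched as nearest-resonance scoring. The class names and frequency triplets (ν1, ν2, ν3) below are hypothetical placeholders, not measured resonances.

```python
# Stored invariant triplets (nu1, nu2, nu3), one per learned class.
stored = {
    "cheetah": (2.1, 5.3, 9.7),
    "horse":   (1.8, 4.9, 8.8),
    "dog":     (2.4, 6.1, 10.2),
}

def resonance_mismatch(a, b):
    # Sum of relative detunings; zero means perfect resonance.
    return sum(abs(x - y) / y for x, y in zip(a, b))

def classify(triplet):
    """Activate the stored invariant closest to resonance."""
    return min(stored, key=lambda k: resonance_mismatch(triplet, stored[k]))

# A new four-legged animal's triplet activates the matching invariant.
print(classify((2.0, 5.2, 9.5)))
```

No search loop over the data is needed at classification time; only the detuning against each stored triplet matters, which is the software caricature of "answers finding the questioner" by resonance.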
Thus far, automatons have failed to discover generic invariants because the variables governing complex events in nature are interconnected in a tree-like dynamic network [20]. A deep learning tool has to discover that dynamic invariant network, grown within and above. Where an algorithm has to explore the combinations, the power of clock self-assembly optimizes these fractally connected astronomical variations. In the chemical beaker, geometric shapes made of resonance peaks are stored. They find their way to arrange themselves, emulating the composition of complex shapes in a problem by resonance; it is the inverse of computing. By that, the gel solves an intractable clique problem [21]. Fractal growth initiates a fractal resonance chain, but clique matching is assisted by the differential clocks. In the 1960s, deformation along three axes was filtered along three orthogonal axes to find invariants [22]; the organic gel does that here (convolution for belief, 9), but with a difference. It finds invariants of differential clocks along one axis for the 4D database. Then its orthogonal 5D database is made of 4D invariants. Thus, clique error in 4D is minimized in 5D.
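The clique matching step can be illustrated with a brute-force sketch in which nodes are resonance peaks and edges connect peaks whose clocks couple. The graph below is hypothetical; the gel performs this search by resonance in one step, whereas enumeration is what makes the problem intractable for conventional machines as graphs grow.

```python
from itertools import combinations

def largest_clique(nodes, edges):
    """Brute-force maximum clique; fine for small sketch graphs.

    A clique is a group of peaks that are all pairwise coupled,
    i.e. a mutually resonant group."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(nodes), 0, -1):
        for group in combinations(nodes, size):
            if all(frozenset(p) in edge_set
                   for p in combinations(group, 2)):
                return set(group)
    return set()

# Resonance peaks that couple pairwise form the matched clique.
nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]
print(largest_clique(nodes, edges))  # {'a', 'b', 'c'}
```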
To prove that extracting three orthogonal invariants from the spatial assembly of clocks is general-purpose computing, nine open challenge problems in AI were selected. The gel detected invariant trees for nine different static and dynamic inputs, for problems ranging from genetic data for diabetes, coronavirus and swarm intelligence to the complex composition of classical songs, the open classification challenge of pouring water, and detecting the face of a Japanese lady (figures S6-S16; movies S7-S12). In all cases, after a single training, the gel classified unknown events naturally, or the user could choose an exclusive classifier. Tables outlining the high classification scores are shown online (tables T2-T8). The gel naturally labels objects, classes and invariants using the distinct time ranges of its neurons, and stores them for decades without refreshing.
The same gel was melted and reused for all problems. Computation times for all problems appeared identical, ∼10 min. To enhance the optical resolution, we used well-established methods to break the diffraction barrier [23, 24]. See supporting online Video 2 of our recent report [9] for the evanescent-wave-induced amplification of a nanowire-generated 3D vortex structure. One could solve multiple problems in one gel at a time, in a single shot; multiple labeling and co-synthesizing varied data types are therefore feasible. Thus far, parallelism has meant sequential hardware arranged in parallel; here it is fractal deep learning, where layers of neural nets grow within and above, not side by side.

Figure 1. Organic all-chemical nested deep learning network ON². (A) The three-phase gel computer explained in three columns. Microwave input to the 3D printer by four pre-processing steps (left column) (movie S1); four-layered organic synthesis, with clock assembly CA, 4D, 5D and 6D constituting ON² (middle); and optical read-out by four post-processing steps (right column). The input cheetah image, converted to X-Y-F (F = frequency-pixel RGB), is fed to the gel precursor solution via an antenna array. Pixels of a leg edge form a loop. Below, the 3D clock assembly A_ijk holds six primary body parts. Each clock in the 3D clock assembly CA (ℝ³: (x, y, z) → (ϕ₁, ϕ₂, ϕ₃)) is a positive-definite tensor A_ijk = U e^f, f = ln(P/T); P is the deformation of the input A′_ijk from the memorized A_ijk. For ON² (middle), helical nanowire synthesis selects the number of node classes (rows) and the number of hidden layers (columns) as A_ijk requires to attain a convergent structure. Below ON² (middle column), four neurons are shown in sequence. Dot products of the multinion tensors of the weight functions, or invariants, form the 4D (SV(∅1)), 5D (SP(∅2)) and 6D (SL(∅3)) datasets. The weight function of a neuron is a network too. Post-processing (right column) sorts the angular momentum of photons to find f(t), the relative phase of the clocks. Since the gel clocks are fixed, resonant oscillations of the tree deliver input-like output using the transformation function f(t): x = cos t; y = sin t · sinᵐ(t/2); m = 1-7. (B) ON² shown using nested spheres or clocks; each layer of the primary deep learning network holds a new deep learning network. Three coupled deep neural networks run together; ϕ, AF, W, B are the 3D clock assembly, activation function, weight and bias, respectively, in a deep neural network; i, b, S, N, C denote invariant, beating, seed, nested loop and condensed loop, respectively. Here, s is the layer, or depth, of the deep learning network. ∅^{s+1}

Figure 2. Live holographic visuals of supramolecular synthesis as the nested deep learning network ON². (A) Infra-red (IR) tunable optical signals erase information, or a selective clock assembly CA, by melting nanowires of particular dimensions. Spectrum analyzers SA1 and SA2 record beat signals produced by He-Ne laser light reflected and transmitted by nanowires against the electromagnetic field pattern in the chemical beaker, sensed by Fabry-Perot photo-detectors acting as SAs. (B) The gelator used is (S)-phenyl-tetradecanoylamino-acetic acid methyl ester in hexane solution; sliced rectangular frames of the cheetah are fed in 12 rows and 22 columns, 12 pixels of a column at a time, using 12 Yagi antennas arranged all around the GT (figure S1). As the cheetah runs, the helices add new pitches as the gelator fills the cage of fields. (C) One-to-one correspondence between the pixel intensity of the input data, the resonance bands of the superstructures, and the optical vortices when no information is fed. This is the background data. (D) When em loops condense to create the first seed supramolecule, the orthogonality between the SA1 and SA2 signals is checked by an oscilloscope, Os. N intertwined loops made of iso-frequency paths in the chemical beaker shift the periodicity xᵢ through dipole-dipole interactions between helical nanowires, building N vibrating modes, ∅ₛᴺ = N exp[−a Σᵢ,ⱼ xᵢxⱼ + ...], written as A^s_{ijk...s}, a dodecanion tensor. When the nanowire-made loops condense, they hold the relative orientation of the loops; the resonant oscillations governing the conformal transition hold the relative phase θᵢ between a pair of loops Kᵢ and Kⱼ, building a differential clock with period tᵢ. The SA1 and SA2 phases make the transformation function f(t); the phase plot is shown to the right. (E) Three rows show the live hologram: as a monochromatic plane-polarized 633.5 nm laser (1 mW, He-Ne) is shined (figure S3), a semiconductor camera captures the live hologram (movie S2).

Figure 3. Experimental evidence for the mathematical proof of invariants for the cheetah. (A) Elementary mathematical operations carried out by feeding the cheetah video through gel precursors as the structures grow from a singlet helical nanowire to a doublet, to a pair of doublets, namely a multiplate (right to left). Four such multiplates build a mesh. At most five meshes could be investigated by adjusting the solvent-precursor ratio, taking a sample from the solution and carrying out SEM (first row, right to left). (B) Below each SEM image, the corresponding optical vortex assembly Z is shown. (C) In the third row, the wireless antenna is switched ON to amplify a few bands by pumping additional electromagnetic signals, which are the orthogonally transformed vortices SV(∅1), SP(∅2) and SL(∅3). (D) After the orthogonal transformation, by using mirrors and feeding two new electromagnetic signals to the gel, the dot products of SV(∅1) & SP(∅2), and of SP(∅2) & SL(∅3), are derived. Frequencies are tuned to amplify SV + SP and SL + SP; we find a significant disappearance of clocks other than the invariant elements of the invariant tree. To the left, an image of the cheetah's invariant tree is shown. Two 3D clock assemblies are compared: one memorized (A_ijk) and the other the unknown input (A′_ijk) resonating with the memory. The deformation η (η = (A′_ijk − A_ijk)/A_ijk) in the 3D clock assembly for the train and test datasets is taken as the differential signal ∂η/∂∅ along three orthogonal axes ∅1, ∅2, ∅3 (in general, ∅S). The plot is ℝ³: (x, y, z) → (ϕ₁, ϕ₂, ϕ₃). The invariant condition: the partial derivatives of S with respect to η, (∂S_ϕ₁/∂η) : (∂S_ϕ₂/∂η), vanish when ϕ₁ ≠ ϕ₂ (A : B = tr(AB^T)); T denotes transpose, tr trace.

Figure 4. Invariant-tree-driven synthesis of new classifiers as a dedicated analysis protocol. (A) Four columns show four phases of classification to analyze events without training. The first column shows single-shot learning of running-cheetah dynamics by a gel. The second column shows the gel's spontaneous determination of classifiers in its vortex assembly. The third column shows the combination of different classifiers to synthesize new sets of classifiers. The fourth column shows how the synthesized classifiers are used to analyze events for which the gel was not trained (less-than-one-shot, or LO-shot, learning; movie S7). The gel identifies the classifiers spontaneously and sorts the test sets by solving the clique problem: train and test structures are matched. The classification efficiency score combines three parameters (online tables T2-T8): the ratio of the maximum overlapping area between input and output, the ratio of differences in the clocks used relative to the number of clocks used, and the weighted average angular differences between different geometric planes. The 3D geometric shape mismatch is estimated in the detection score. (B) Live growth in the gel superstructure; the synthesis of combinatorial classes adopts distinct structural symmetries that couple by electromagnetic resonance of frequency triplets (ν1, ν2, ν3). (C) Invariant matching is shown by superposing the train and test 3D clock assemblies. It starts from SL, transcends to SP, and finally reaches SL. Then the nested loops run. Differential ∂η/∂∅ signals along three orthogonal axes ∅1, ∅2, ∅3 (in general, ∅S).