SKA Tensor Net Explorer
Visualize the Tensor Net per layer. The zero-crossing marks the transition from unstructured to structured knowledge accumulation.
Definitions
| Quantity | Definition |
|---|---|
| Tensor Net | Σ (D − ∇z H) · ΔZ |
| ∇z H | −(1/ln2) · z · D(1−D) |
| Zero-crossing | Point where the Tensor Net crosses zero, marking the phase transition from unstructured to structured knowledge accumulation |
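The two formulas above can be sketched directly in code. This is a minimal illustration, assuming D is the sigmoid of the knowledge z and ΔZ is the per-step knowledge change; the function names are hypothetical, not part of the SKA framework itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def entropy_gradient(z):
    """∇z H = −(1/ln 2) · z · D(1−D), with D = sigmoid(z) (assumed)."""
    D = sigmoid(z)
    return -(1.0 / np.log(2.0)) * z * D * (1.0 - D)

def tensor_net(z, delta_z):
    """Tensor Net = Σ (D − ∇z H) · ΔZ, summed over the layer's elements."""
    D = sigmoid(z)
    return float(np.sum((D - entropy_gradient(z)) * delta_z))
```

For example, at z = 0 the entropy gradient vanishes and D = 0.5, so the Tensor Net reduces to 0.5 · ΣΔZ.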
Reference Paper
Abstract
This paper aims to extend the Structured Knowledge Accumulation (SKA) framework recently proposed by Mahi. We introduce two core concepts: the Tensor Net function and the characteristic time property of neural learning. First, we reinterpret the learning rate as a time step in a continuous system. This transforms neural learning from discrete optimization into continuous-time evolution. We show that learning dynamics remain consistent when the product of learning rate and iteration steps stays constant. This reveals a time-invariant behavior and identifies an intrinsic timescale of the network. Second, we define the Tensor Net function as a measure that captures the relationship between decision probabilities, entropy gradients, and knowledge change. Additionally, we define its zero-crossing as the equilibrium state between decision probabilities and entropy gradients. We show that the convergence of entropy and knowledge flow provides a natural stopping condition, replacing arbitrary thresholds with an information-theoretic criterion. We also establish that SKA dynamics satisfy a variational principle based on the Euler-Lagrange equation. These findings extend SKA into a continuous and self-organizing learning model. The framework links computational learning with physical systems that evolve by natural laws. By understanding learning as a time-based process, we open new directions for building efficient, robust, and biologically inspired AI systems.
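The abstract's time-invariance claim — that dynamics stay consistent when the product of learning rate and iteration count is fixed — can be illustrated with a toy continuous-time system. This sketch is only an analogy using simple Euler integration of dz/dt = −z, not the SKA dynamics themselves; the two runs share the same total time η·N and so land near the same state.

```python
import math

def euler_decay(z0, eta, n_steps):
    """Euler-integrate dz/dt = -z with time step eta for n_steps steps.

    Interprets the learning rate eta as a time step, so total elapsed
    time is eta * n_steps (the invariant quantity from the abstract).
    """
    z = z0
    for _ in range(n_steps):
        z -= eta * z
    return z

# Two discretizations of the SAME total time eta * n_steps = 5.0:
coarse = euler_decay(1.0, eta=0.1, n_steps=50)
fine = euler_decay(1.0, eta=0.05, n_steps=100)
```

Both runs approximate e⁻⁵; halving the step while doubling the count changes only the discretization error, not the trajectory being traced.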
SKA Explorer Suite
About this App
The Tensor Net captures the balance between decision probabilities D and entropy gradients ∇z H, weighted by the knowledge change ΔZ at each step. When positive, the network is accumulating knowledge in the direction of the entropy gradient. The zero-crossing — marked by dotted vertical lines — signals the onset of structured knowledge accumulation.
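The dotted-line markers described above correspond to sign changes in the per-step Tensor Net series. A minimal way to locate them, assuming the series is an array of per-step Tensor Net values (the function name is illustrative):

```python
import numpy as np

def zero_crossings(series):
    """Return indices i where the series changes sign between step i and i+1."""
    s = np.asarray(series, dtype=float)
    return np.where(np.sign(s[:-1]) * np.sign(s[1:]) < 0)[0]
```

The first index returned marks the onset of structured knowledge accumulation, where the Tensor Net passes from negative to positive.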
Important Note
The layered SKA Neural Network presented here is a discrete approximation (a “shadow”) of the underlying continuous Riemannian Neural Field (RNF).
It is provided for educational purposes only to illustrate the core mechanism of local entropy reduction through decision shifts ΔD.
The true SKA dynamics and all its deeper properties live in the continuous RNF. The layered discretization is useful for teaching and rapid experimentation, but it is not the complete theory.