ChatGPT, with its constant updates, may seem like the most high-profile project since the advent of modern AI. But there is another class of models that differs from everything else on the market, called liquid. Let's try to figure out what liquid neural networks are, what their advantages are, and where they can be used.
A group of researchers from MIT presented the liquid neural network (LNN), a model built on top of the recurrent neural network. Its architecture differs from that of conventional networks, which lets it process data more effectively:
When new features and values appear in the data, the network can change its structure, adding neurons and connections in each layer. The model also handles synapses differently. In the classical approach, the strength of a signal is fixed by a weight value; in an LNN it is a dynamic, probability-like process in which the response to an input can be disproportionate to its size. This "fluid" responsiveness is also part of why the technology got its name.
The prototype the developers cite in their paper, however odd it may sound, is a millimeter-long worm, the nematode C. elegans, whose primitive nervous system of just over 300 neurons nevertheless handles fairly complex tasks: exploring its environment, searching for food.
The structure of a liquid network simulates the transmission of impulses over time, which makes it possible to predict how the system will behave in the future. This is already a departure from the traditional picture of AI, in which the model's state is captured at a fixed moment. Put simply, such a network does not have to be retrained from scratch: it simply takes in new information and recalculates its internal parameters to match the changed picture.
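To make the idea more concrete, here is a minimal sketch of a liquid-style neuron, loosely following the liquid time-constant formulation: the input gates both the effective time constant and the drive on the state, so the response is not a fixed weighted sum, and the state can be rolled forward to anticipate future behavior. All names and values here (ltc_step, tau, A, the weight matrices) are illustrative, not the authors' exact model.

```python
import numpy as np

def ltc_step(x, u, W_in, W_rec, b, tau, A, dt=0.01):
    """One Euler step of a simplified liquid time-constant cell.

    x   : current hidden state (n_neurons,)
    u   : current input vector (n_inputs,)
    tau : base time constants per neuron (n_neurons,)
    A   : per-neuron equilibrium targets the dynamics are driven toward
    """
    # The nonlinearity depends on the input and the state, and it modulates
    # both the effective time constant and the pull toward A, so the reaction
    # to an input is not simply proportional to a stored weight.
    f = np.tanh(W_in @ u + W_rec @ x + b)
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Roll the dynamics forward to anticipate the state several steps ahead.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 8
W_in  = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
b     = np.zeros(n_hidden)
tau   = np.ones(n_hidden)
A     = rng.normal(size=n_hidden)

x = np.zeros(n_hidden)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])  # toy input signal
    x = ltc_step(x, u, W_in, W_rec, b, tau, A)
print("hidden state after 100 steps:", np.round(x, 3))
```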
The design keeps the ability of recurrent neurons to hold previous data in memory while analyzing new inputs, but it adds a number of distinctive features:
dynamic architecture – the structure can change, improving interpretability and the accuracy of data processing;
adaptability – the network adjusts itself to changing data;
continuous learning – it takes new information into account and keeps learning, redistributing weights and internal states on its own (see the sketch after this list).
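As a rough illustration of continuous learning, the sketch below keeps a recurrent state running and adjusts a readout weight vector with one small gradient step per incoming sample, so the model adapts as data arrives instead of being frozen after an offline training phase. The simple state update, the learning rate, and the toy target are assumptions made for this example, not part of the published LNN recipe.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden = 8
W_out = np.zeros(n_hidden)   # readout weights, adapted online
lr = 0.01                    # learning rate (illustrative value)

def hidden_state(x, u):
    # Stand-in recurrent update; in practice this would be the liquid cell
    # from the earlier sketch.
    return 0.9 * x + 0.1 * np.tanh(u)

x = np.zeros(n_hidden)
for t in range(1000):
    u = rng.normal(size=n_hidden)   # incoming data point from the stream
    target = u.sum()                # toy supervision signal
    x = hidden_state(x, u)
    pred = W_out @ x
    # Online gradient step: learning continues on every new sample.
    W_out -= lr * (pred - target) * x
print("readout error on last sample:", round(abs(pred - target), 3))
```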
Because the nodes are smaller and richly interconnected, the decision-making mechanism is more transparent to users and developers. For the same reason, the model as a whole is much smaller than conventional architectures, which makes it easier to deploy and scale in an enterprise setting and further increases its commercial appeal.
Additional advantages include resistance to extraneous noise and interference at the input, which should ultimately improve the accuracy of information processing.
Liquid neural networks are a natural fit for tasks built on continuous, time-dependent data streams: time-series forecasting, robotics and autonomous-vehicle control, and processing of sensor or medical signals.
For all its advantages, this dynamic AI is not perfect, and it has its own practical problems. The main one is the vanishing gradient: the error signal shrinks as it is propagated back through time. Another is a complex setup that demands serious investment of money and time; in some cases tuning calls for iterative methods followed by testing, and poorly chosen parameters can badly hurt system performance.
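The vanishing-gradient effect is easy to reproduce on a toy recurrent model: backpropagating through many tanh steps multiplies the Jacobian by a contractive factor at every step, so the gradient norm collapses. The weight scale and step count below are chosen purely to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
W = rng.normal(scale=0.1, size=(n, n))   # deliberately small recurrent weights

x = rng.normal(size=n)
grad = np.eye(n)   # Jacobian of the current state w.r.t. the initial state
for t in range(50):
    x = np.tanh(W @ x)
    # Backprop through one step multiplies by diag(1 - tanh^2) @ W,
    # so the accumulated Jacobian norm shrinks step after step.
    grad = (np.diag(1 - x**2) @ W) @ grad
    if t % 10 == 0:
        print(f"step {t:2d}  gradient norm = {np.linalg.norm(grad):.2e}")
```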
On the other hand, the initial adaptation to a specific task is a one-time effort: after that, the network manages its internal parameters on its own. Another problem is the lack of literature, guidelines describing the principles of the architecture, and its known limitations.
Artificial intelligence technology keeps improving, and the LNN is clear proof of that. From the simplest perceptron with its sequential chain of neurons, AI is gradually evolving into powerful, flexible models capable of solving a wide range of tasks.