The almanac about a new method of information processing
What's actually wrong with the concept of AI
Biology and artificial intelligence
Cognitive science: a beginning without an end
Holism and brain studies
Theory of Active Perception
Why perception is necessary for modeling human-like thinking
What's actually wrong with the concept of AI
Evolution of ideas underlying AI: Brief Description
Biology does not understand how the brain works
Why AI does need biology after all
How far artificial neurons are from the real ones
Creating something really similar to how the brain works
Cognitive science: a beginning without an end
Cognitive science has never produced anything practical
Consciousness is not amenable to science
No one knows what consciousness is, yet everyone keeps talking about it
A sudden idea — the quantum nature of consciousness
Orchestrated objective reduction: what it is and what for
Another theory of consciousness: the integrated information theory
Global workspace theory
Conscious and unconscious thinking. Questions to an academic
Questions for Theories of Consciousness
Ultimate ways to study consciousness without cutting into the brain
Albert Einstein suspected something
Why psychoanalysis, without scientific methods, has progressed further than science
Insights from intuition and deep observation are not exhausted and are as good as AI
There is no computation in the brain as we all know it. What kind is there?
Why it’s unreasonable to use the word Learning in relation to AI
There is a different calculability: what Hilbert and Gödel discovered
Why the brain should be studied as a whole
TAPe models the mechanisms of perception
Language is a complete system, and that is how it should be studied
The principles by which the Language of Thought functions
The isomorphism of Chinese characters and TAPe
T-Bit: a unit of information 1,000 times more efficient
Evolution of ideas underlying AI: Brief Description
01. Negative feedback. The central idea is that complex computations are compositions of simple ones: to solve a complex task, you break it down into many smaller tasks and solve those.

02. Consequently, by the same principle, human reasoning can be broken down into simpler parts: starting from some initial statements, the solution to a problem can be deduced by applying certain logic and rules.

03. The dominance of negative feedback lasted for several decades. In the 1980s, the concept of AI learning was introduced. What is learning in terms of a computer or engineering system? It is a change in the relationships between elements, and that change should increase the probability of a correct answer.

04. The next step was to represent knowledge about the world as a set of concepts and the relations between them, i.e., as semantic networks, or knowledge graphs. Such a network of semantic rules can help model reasoning in a particular subject area (see the knowledge-graph sketch after this list). This did not work properly until the backpropagation method started to be used for AI training.

05. Backpropagation introduces the concept of weight for each connection in a multilayer neural network built from so-called neurons; a weight expresses the relationship between neurons. Using the method, you can calculate the contribution of each neuron to the network's error and adjust the weights so that the error decreases. The calculations run from the final output back to the initial input data, hence the "back" in the method's name (see the backpropagation sketch after this list).

06. Today, this is the most widespread method. It has given rise to approaches such as Supervised Learning (SL). In machine learning, SL means that the model is given both the initial data and the result it should produce from that data (see the supervised-learning sketch after this list).

07. SL is believed to allow creating, if not a world model for AI, then at least a part of it (a game, a language, etc.) and using the AI model trained on this "part of the world" to solve other tasks as well.

08. The creators of systems such as DeepMind's Gato, built on this approach, have postulated that they created something close to AGI. In reality, they are as far from it as any other technology.

09. Technically, humankind has done a tremendous amount of work and created brilliant, ingenious engineering solutions which, nevertheless, are still as far from intelligence as a calculator, no matter what their creators may say. Ultimately, all AI models work with huge sets of facts and data, and so far it all boils down to cramming even more facts and data into the AI and using even more resources and energy to get... what?
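To make the "concepts and relations" idea in item 04 concrete, here is a minimal sketch of a semantic network held as a set of (subject, relation, object) triples, with a simple inheritance rule on top. The facts, relation names, and helper functions are invented for illustration; they do not come from any particular knowledge-graph system.

# A semantic network (knowledge graph) stored as a set of
# (subject, relation, object) triples. The facts and relation
# names below are illustrative assumptions, not a real ontology.
facts = {
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
    ("canary", "can", "sing"),
}

def objects(subject, relation):
    """Return everything linked to `subject` by `relation`."""
    return {o for s, r, o in facts if s == subject and r == relation}

def can(subject, ability, seen=None):
    """Simple rule-based reasoning: a concept has an ability if it is
    stated directly or inherited through the is_a hierarchy."""
    seen = seen or set()
    if ability in objects(subject, "can"):
        return True
    return any(can(parent, ability, seen | {subject})
               for parent in objects(subject, "is_a") - seen)

print(can("canary", "fly"))   # True, inherited from "bird"
print(can("canary", "bark"))  # False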
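The weight and error-contribution idea in item 05 can be shown in a minimal sketch: a tiny two-layer network where the error signal is pushed from the output back toward the input, giving each weight's contribution to the error, which is then reduced step by step. The network size, learning rate, and toy XOR data are arbitrary assumptions made only for this example.

import numpy as np

rng = np.random.default_rng(0)

# Toy task (XOR): inputs and the answers the network should give.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights: the adjustable relationships between "neurons" of two layers.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: from the input data to the network's answer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the error from the output back toward
    # the input, obtaining each weight's contribution to the error.
    err = out - y
    grad_out = err * out * (1 - out)          # error signal at the output layer
    grad_h = (grad_out @ W2.T) * h * (1 - h)  # error signal pushed back to the hidden layer

    # Adjust the weights in the direction that reduces the error.
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]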
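The supervised-learning setup in item 06, reduced to its essentials: a training set in which every input comes with the result the model is expected to produce, and a model that is fit on those pairs and then queried on new inputs. The 1-nearest-neighbour rule used here is just one of the simplest possible SL models, and the fruit data are made up.

# Supervised learning in miniature: the model sees both the inputs
# and the correct answers (labels) during training.
train_inputs = [(150, 7.0), (170, 7.5), (140, 6.5), (130, 6.0)]  # e.g. weight (g), size (cm)
train_labels = ["apple", "apple", "orange", "orange"]            # the expected results

def predict(x):
    """1-nearest-neighbour: answer with the label of the most similar training example."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    nearest = min(range(len(train_inputs)), key=lambda i: dist(train_inputs[i], x))
    return train_labels[nearest]

print(predict((160, 7.2)))  # "apple": closest to the apple examples
print(predict((135, 6.2)))  # "orange"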