AI follows the theoretical framework of modern science
- Sergio Focardi

In this post I discuss how AI, even the current generation of Generative AI, follows the framework of modern science.
The framework of modern science developed over a period of several centuries. It is useful to summarize the scientific framework in the following points.
First, scientific explanation follows the Deductive-Nomological (DN) principle of Carl Hempel and Paul Oppenheim. The DN principle (Hempel and Oppenheim, Studies in the logic of explanation, 1948) states that scientific explanation consists in finding fundamental laws and logically inferring the behavior of any system from those basic laws plus initial and boundary conditions. Basic laws are fundamental empirical hypotheses.
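In schematic form, a DN explanation deduces the explanandum from general laws together with statements of initial and boundary conditions. The rendering below is the standard textbook form of Hempel and Oppenheim's schema, not a formula taken from this post.

```latex
% Deductive-Nomological (DN) schema in its standard form (requires amsmath for \text)
\[
  \underbrace{L_1, \dots, L_k}_{\text{general laws}}
  ,\quad
  \underbrace{C_1, \dots, C_r}_{\text{initial and boundary conditions}}
  \;\vdash\;
  \underbrace{E}_{\text{explanandum: the behavior to be explained}}
\]
```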
Many scientists, including the Nobel laureate Philip Anderson, believe that scientific explanation is hierarchical, in the sense that when we move to large complex systems we may find emergent laws that cannot be inferred from the basic laws but require specific principles (Anderson, More is different, 1972). However, even accepting the possibility of complex emergent behavior, the DN principle remains valid for a large part of science.
Second. Scientific explanation is abstract and purely structural. The ontological commitment of modern science is minimal, as science does not claim to describe any qualitative aspect of reality. Science describes structures and relationships between observations. From a physical point of view, the difference between a hamburger and the table on which we eat it is only a difference in the structure of molecules, atoms, and other elementary particles.
Third. We cannot look at modern science as the mathematical explanation of given data. Pure data do not exist: data depend on theory itself. This is clear when we consider that the data of modern physics are the results of complex observations made with instruments whose behavior is itself described by theory. When we perform important experiments, we test the entire theory.
Fourth. Empirical tests are global. Willard Van Orman Quine forcefully stated what is known as the Duhem-Quine thesis: any empirical test is a global test of the entire theory. Individual statements can always be made true through adjustments elsewhere in the theory.
Fifth. In 1962 Thomas Kuhn argued that science makes progress through discontinuous conceptual jumps (Kuhn, The structure of scientific revolutions, 1962). When new observations cannot be explained by an existing theory, scientists first try to make local adjustments, until a radically new theory is proposed. The new theory implements a paradigm shift with respect to the past. Often, new theories are “incommensurable” with the old ones, as they are based on a totally new conceptual and descriptive framework.
Let’s look at AI. Does it fit in the above framework? First, let’s observe that AI is a technology based on a corpus of scientific laws. Therefore, let’s look at the science behind AI.
Learning is a mathematical property of given structures and processes such as neural networks: it rests on functional approximation schemes, which are mathematical properties, and learning schemes such as backpropagation are likewise mathematical processes. The empirical part of learning is the discovery that specific tasks, such as pattern recognition, can be cast in the framework of learning a functional approximation.
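To make the "learning as functional approximation" point concrete, here is a minimal sketch in plain NumPy of a one-hidden-layer network trained with backpropagation to approximate sin(x). The architecture, hyperparameters, and variable names are illustrative choices, not a prescription.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: samples of the target function we want to approximate.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# A single hidden layer: 1 input -> 32 tanh units -> 1 output.
W1 = rng.normal(0.0, 0.5, (1, 32))
b1 = np.zeros((1, 32))
W2 = rng.normal(0.0, 0.5, (32, 1))
b2 = np.zeros((1, 1))
lr = 0.05  # learning rate (illustrative choice)

for step in range(10_000):
    # Forward pass: the network's current approximation of sin(x).
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y
    loss = np.mean(err ** 2)

    # Backpropagation: gradients of the mean squared error w.r.t. the weights.
    n = x.shape[0]
    dy = 2.0 * err / n
    dW2 = h.T @ dy
    db2 = dy.sum(axis=0, keepdims=True)
    dh = (dy @ W2.T) * (1.0 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ dh
    db1 = dh.sum(axis=0, keepdims=True)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final mean squared error: {loss:.4f}")
```

Nothing here is specific to sine curves: the same mathematical machinery, scaled up, is what lets networks approximate the input-output mappings behind tasks such as pattern recognition.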