Science and Artificial Intelligence: Another Great Escape?

  • Writer: Sergio Focardi
  • 6 days ago
  • 3 min read

It is fair to say that technology is greatly indebted to science. In fact, most technological achievements depend on basic scientific knowledge that was acquired without any specific application in mind. Modern electronics, for example, owes its existence to the development of quantum mechanics.

In a number of cases, technological needs have driven important scientific discoveries. In general, however, technology exploits pre-existing basic scientific knowledge. Most technological devices in use today, from airplanes to home appliances, from trains to medical equipment, are enabled by scientific discoveries that were made without any specific technological objective. In any case, behind technology there is solid scientific knowledge.

Thus far, Artificial Intelligence is no exception. Learning, in the sense of artificial intelligence, is a mathematical-logical process: in essence, learning is function approximation. Expert systems were based on logic and on the construction of semantic nets, another logical formalism. The ability of Generative AI to process natural language ultimately rests on the Statistical Semantics Hypothesis, a modern instance of the structuralist approach to linguistics pioneered by Ferdinand de Saussure.
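The claim that "learning is function approximation" can be made concrete with a minimal sketch. The example below, purely illustrative and not any particular AI system, fits a small polynomial to samples of an unknown target function by gradient descent on a squared-error loss; all function names and parameters are my own assumptions.

```python
import math
import random

def fit_polynomial(xs, ys, degree=3, lr=0.1, steps=3000):
    """Fit polynomial coefficients to (x, y) samples by gradient descent
    on the mean squared error -- learning as function approximation."""
    coeffs = [0.0] * (degree + 1)
    n = len(xs)
    for _ in range(steps):
        grads = [0.0] * (degree + 1)
        for x, y in zip(xs, ys):
            pred = sum(c * x**k for k, c in enumerate(coeffs))
            err = pred - y
            for k in range(degree + 1):
                grads[k] += 2 * err * x**k / n
        coeffs = [c - lr * g for c, g in zip(coeffs, grads)]
    return coeffs

def predict(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

# The "unknown" target is sin(pi*x); the learner only sees samples of it.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(50)]
ys = [math.sin(math.pi * x) for x in xs]
coeffs = fit_polynomial(xs, ys)
```

The fitted cubic tracks the sine wave on the sampled interval without "knowing" anything about trigonometry, which is the sense in which statistical learning approximates functions rather than understanding them.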

Still, AI is often presented as an attempt to imitate human thinking. Let’s be clear: there is no mind behind an AI application. Learning does not imply understanding in the human sense. In fact, our knowledge of human mental processes is very limited. We have no idea how conscious thought is generated in living beings, nor any real idea of how mental processes work.

For example, we have no idea whether mental contents are needed for discovery and innovation. AI might implement some discovery and problem-solving using ideas that go back to Allen Newell and Herbert Simon, who explained innovation as the random generation of scenarios followed by optimization. However, this process cannot explain the ‘paradigm shifts’, in the sense of Thomas Kuhn, that are the basis of scientific progress.
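The generate-and-test idea attributed above to Newell and Simon can be sketched in a few lines: propose random candidate scenarios, score each one, and keep the best. This is a toy illustration under my own assumptions, not their actual programs; note that it can only optimize within the representation it is given, which is exactly why, as argued above, it cannot produce a paradigm shift.

```python
import random

def generate_and_test(score, generate, n_candidates=1000, seed=42):
    """Random generation of scenarios plus optimization:
    draw candidates at random, score each, return the best."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_candidates):
        candidate = generate(rng)       # random generation of a scenario
        s = score(candidate)            # evaluation
        if s > best_score:              # keep the best so far
            best, best_score = candidate, s
    return best, best_score

# Toy problem: find the x in [0, 10] that maximizes a fixed objective.
objective = lambda x: -(x - 3.0) ** 2   # peak at x = 3
best_x, best_val = generate_and_test(objective, lambda rng: rng.uniform(0, 10))
```

With enough random candidates the search lands near the peak, but the objective and the space of scenarios are fixed in advance; the procedure never invents a new conceptual framework.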

Let’s remark explicitly that science studies structures, not ontologies. Following the Copenhagen interpretation, physics is a set of mathematical laws that connect and predict observations. Science is agnostic about the ontology of reality. Philosophers have put forward different ideas about reality, but there is no consensus on the true nature of the world.

Ultimately, humans construct reality starting from perceptions. The initial ontologies of primitive humans were populated by a multitude of entities. Progressively, science and philosophy have reduced their number. The famous physicist Arthur Stanley Eddington remarked that science has made progress by taking out of nature what humans had artificially put into it; in other words, by reducing ontologies to a minimum.

Now comes AGI, Artificial General Intelligence, whose objective is to produce artificial systems that exhibit human-level intelligence. Of course, it is impossible to make positive or negative forecasts about the future of technology or science. However, I would like to make a few comments on the distance between AGI and the current state of science.

It should be realized that AGI is a deep dive into ontology, an uncharted territory for science.

Simply put, AGI requires a level of science, and perhaps philosophy, that does not exist today. Science deals only with structures, while human thought is something real. The real problem is the qualitative uniqueness of human consciousness. Each human individual is unique in a qualitative sense. While I am writing this post I have sensations, perceptions, and emotions that are only mine. If I am in pain, the pain is mine. But this defies explanation: how is it possible that, in the infinity of space and time, this unique thing that is me started to exist right now?

To arrive at a true AGI there are two possibilities.

First possibility: the behavior produced by ALL human cognitive capabilities can be simulated without assuming any mental state, only physical structures. Currently, AI simulates a subset of human cognitive capabilities without assuming any true mental state. Can this be generalized to all human cognition, including will and sentiment? Can mental states, which are real things and not structures, have formal representations?

Second possibility: we can manufacture mental states, so that we can manufacture minds. At present we manufacture mental states only through sexual reproduction. We know no other way to manufacture minds, and we do not know how sexual reproduction manufactures them.

I do not want to give a verdict on AGI, which would be an instance of infinite conceptual hubris. However, I would like to remind the reader that we are very distant from either of the above possibilities. We can build a humanoid robot equipped with sensors. Sensors read signals: electric, magnetic, chemical, or other physical signals. But a true AGI should construct an image of the world from those sensors, and we have no knowledge of such processes.

Perhaps it would be wiser to improve our scientific understanding before trying to reproduce processes of which we have no knowledge.
