
AI was conceived as a computation that independently learns how to perform better and better. In a sense, intelligence became artificial the moment the capability for self-improvement was added to a processing machine's feature list.

Artificiality is a characteristic of objects contrived by human action: an intent, or an accident, that produces something which, with those very characteristics, didn't exist before. In this sense, we can consider 'natural' what can exist independently of human beings, while the 'artificial' is what cannot. What makes this matter so interesting is that the degree of autonomy humans are granting machines promises a day when it won't be so easy to distinguish the natural from the artificial in that sense.

The learning capability, which enables autonomy, drives machines to extend their influence over people's experiences to the point that they have begun to deeply shape free will, serving the most important social and psychological human process: decision making. AI-driven experiences are coming to rule our range of choices as we grow ever more immersed in the networked, digitally driven economy of the internet.

We use AI to manage the accessibility of resources in supply chains and logistics. AI is used to match individuals' behavior to online advertising in marketing, and it has even been adopted to judge creditworthiness in banking. AI plays a crucial role in life-and-death situations: it provides prompt, efficient assistance in medical emergencies, as it does in clinical diagnosis. AI has even been weaponized for use in war. AI is adopted to gauge the uncertainty of the vote in political elections, in order to influence them. And the info-bubbles social media build around their users have largely been powered by AIs.

BEYOND GOOD AND EVIL

Science does not consider Artificial Intelligence a threat in itself: the menace hides in the implications of its autonomy. Nuclear technology is not an issue in itself either; the risk arises in its application. But unlike nuclear technology, AI is capable of taking action in our world autonomously. We are meeting a non-human autonomous intelligence that is artificial and so relevant that, applied under certain conditions, it makes us humans depend on it.

AI as a tool influences the choices and experiences of billions of people these days. Across its various forms and applications, its autonomy operates over a wide spread of intentions and conditions, most of which we are not aware of. How can we qualify its presence in our lives as a simple technical device, a tool, when it is endowed with such independence?

The ethical question, in this case, rests on how firmly the chain of events springing from AI-based decisions remains under control, or, as they say, human-centered. I call an ethically balanced AI a non-hostile autonomous agent. Philosophers have struggled with the cognitive wheel and the frame problem in order to define the reach, and the limits, of the trust we are supposed to grant such non-hostile artificial entities.

Faced with a danger to a human, an AI, as a human-centered tool, has to evaluate all the possible implications of the action meant to fix the immediate danger, including consequences that are not immediate but descend from the flux of cause and effect sprung from the first event. The AI might act to solve the immediate danger, but in doing so it might cause collateral damage as well. This is the limit of human-centered AI at this point: is AI capable of managing that evaluation in time to be effective? And even if it can, how could it establish priorities between different options regarding the safety of a human?
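To make the dilemma concrete, here is a minimal, purely hypothetical sketch in Python of the evaluation such an agent faces: expanding the chain of cause and effect under a time budget and ranking options by foreseeable harm. All names and numbers are illustrative assumptions, not a real safety system.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    harm: float                                       # harm this event causes or leaves unresolved
    consequences: list = field(default_factory=list)  # follow-on events in the causal chain

def expected_harm(action: Action, depth: int, deadline: float) -> float:
    """Sum harm along the cause-effect chain, as far as the budget allows."""
    total = action.harm
    if depth == 0 or time.monotonic() > deadline:
        return total          # budget exhausted: deeper effects stay invisible
    for effect in action.consequences:
        total += expected_harm(effect, depth - 1, deadline)
    return total

def choose(options: list, budget_s: float = 0.05) -> Action:
    """Pick the option with the least foreseeable harm within the time budget."""
    deadline = time.monotonic() + budget_s
    return min(options, key=lambda a: expected_harm(a, depth=3, deadline=deadline))

# Example: braking avoids the pedestrian but risks a rear collision.
brake = Action("brake hard", 0.1, [Action("rear collision", 0.4)])
swerve = Action("swerve", 0.2, [Action("graze the barrier", 0.2)])
print(choose([brake, swerve]).name)  # 'swerve': 0.4 total harm vs 0.5
```

Whatever the agent chooses, any consequence beyond the depth or time budget is simply invisible to it, which is the frame problem in miniature.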

From this perspective, AI is at this moment an imperfect tool, although science is resorting to innovative and astounding technologies, quantum computing for example, to overcome its hurdles.

At this point in the history of Artificial Intelligence, we can focus on comprehending the reach of AI's autonomy, so as to draw the best advantage from it without having to face AI as an unpredictably hostile technology.

AUTONOMY

We classify AI autonomy in several levels. As the classification of autonomous driving shows, the design guidelines of AI are expressed as algorithms and processed through mathematics, but they are created by semiotics, linguistics, and philosophy.

But it is quite clear that while all those approaches are instrumental to building a business-performance-focused AI, there is only one discipline for designing a performant non-hostile AI: ethics. For example, we have exceptionally neat evidence of the power of ethics in AI design when it is applied to autonomous vehicles, defining the classes of interdependence between the AI and the pilot:

"If a vehicle has Level 0, Level 1, or Level 2 driver support systems, an active and engaged driver is required. She is always responsible for the vehicle's operation, must supervise the technology at all times, and must take complete control of the vehicle when necessary.

In the future, if a vehicle has Level 3, Level 4, or Level 5 automated driving systems, the technology takes complete control of the driving without human supervision. However, with Level 3, if the vehicle alerts the driver and requests she takes control of the vehicle, she must be prepared and able to do so."

Vehicles represent a potential threat to pedestrians, but also to the pilot and the passengers on board. Ethics guides the definition of a 'responsibility balance' shared between the capability of the autonomous AI and the vigilant attention of the pilot. When the AI performs at the highest level of efficiency, human attention is barely, or not at all, needed. Conversely, when the AI can only slightly reduce risk, the pilot assumes the major responsibility for risk management. There is a sort of 'net-zero' risk balance that represents the perfect condition of ethical AI performance: this approach grants the AI an autonomy in risk management proportionate to an overall 'safety guarantee' shared with humans. This is a fundamental guideline for designing a non-hostile AI.
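This 'responsibility balance' can be read as a simple invariant: at every level, the AI's share of risk management and the human's required vigilance must add up to full coverage. Below is a toy sketch of that net-zero balance over the six levels quoted above; the numeric shares are invented for illustration and taken from no real standard.

```python
# Toy model of the shared 'responsibility balance' across autonomy levels.
# The shares are illustrative, not drawn from any actual safety standard.
AI_SHARE = {0: 0.0, 1: 0.2, 2: 0.4, 3: 0.7, 4: 1.0, 5: 1.0}

def human_share(level: int) -> float:
    """Whatever risk the AI does not cover, the human must: a net-zero balance."""
    return 1.0 - AI_SHARE[level]

for level in range(6):
    duty = "driver must stay vigilant" if human_share(level) > 0 else "no supervision needed"
    print(f"Level {level}: AI covers {AI_SHARE[level]:.0%}, human covers {human_share(level):.0%} ({duty})")
```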

Isaac Asimov's Laws of Robotics are a clear example of the responsibility algorithm designers bear to prevent the onset of potential or actual harm. The Laws are a logical sequence, the primeval structure of an algorithm:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added the Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
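Read as code, the Laws are exactly such a logical sequence: a strict priority ordering in which each law may veto an action only when no higher law overrides it. Here is a minimal sketch under that reading; the boolean predicates of the Situation are hypothetical stand-ins for what would, in reality, be the genuinely hard part to compute.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    harms_humanity: bool = False        # would the action harm humanity? (Zeroth)
    harms_human: bool = False           # would the action harm a human? (First)
    inaction_harms_human: bool = False  # would doing nothing harm a human? (First)
    ordered_by_human: bool = False      # has a human ordered the action? (Second)
    self_destructive: bool = False      # would the action destroy the robot? (Third)

def permitted(act: bool, s: Situation) -> bool:
    """Check an action (act=True) or inaction (act=False) against the Laws,
    in strict priority order: Zeroth, First, Second, Third."""
    if act and s.harms_humanity:
        return False  # Zeroth Law veto
    if act and s.harms_human:
        return False  # First Law veto
    if not act and s.inaction_harms_human:
        return False  # First Law also forbids harmful inaction
    if not act and s.ordered_by_human:
        return False  # Second Law: the robot must obey, unless vetoed above
    if act and s.self_destructive and not s.ordered_by_human:
        return False  # Third Law: self-preservation yields to the Second
    return True

# An order whose execution would harm a human is refused: First beats Second.
print(permitted(act=True, s=Situation(harms_human=True, ordered_by_human=True)))  # False
```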

This approach is clear: the only way to have non-hostile AIs is to make them so. If we don't, AI is inevitably destined to perform as a hostile entity: its capability for autonomy requires active ethical engineering.

An AI tool performs actions at different degrees of independence, and you cannot tell them apart from a human's actions unless you, the human, perceive the differences between them and the actions a human would perform herself. What fascinates me about Alan Turing's imitation game is how it blurs the line between a fact and its simulation. One of its interpretations postulates that if you can simulate reality well enough, that simulation becomes reality.
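As a protocol, the imitation game is simple: an interrogator exchanges questions with two hidden respondents and must guess which one is the machine; when the guesses fall to chance level, the simulation has become indistinguishable from the fact. A toy rendering of that setup, with deliberately identical canned answers standing in for real conversation:

```python
import random

def human(question: str) -> str:
    return "I think so, yes."   # placeholder for a real person's reply

def machine(question: str) -> str:
    return "I think so, yes."   # a perfect imitation of the reply above

def imitation_game(rounds: int = 1000) -> float:
    """Return how often the interrogator spots the machine. 0.5 = pure chance."""
    correct = 0
    for _ in range(rounds):
        players = [("human", human), ("machine", machine)]
        random.shuffle(players)  # the interrogator cannot see who is who
        answers = [(name, f("Can machines think?")) for name, f in players]
        guess = random.choice([0, 1])  # identical answers leave only guessing
        if answers[guess][0] == "machine":
            correct += 1
    return correct / rounds

print(imitation_game())  # ~0.5: the simulation is indistinguishable from the fact
```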

So the worst question we may ask is: 'Is AI capable of thinking?'. Maybe we ought to opt for the more relevant one: 'Is AI capable of self-awareness?'.

INTENTIONALITY

It is highly relevant, then, to understand whether Artificial Intelligence is capable of intentionality. Many consider intentionality the primeval evidence of consciousness. Since the definition of consciousness is subject to many different philosophical understandings, none of which we can take as the true explanation, there is a representationalist theory of consciousness that extends the treatment of intentionality to consciousness itself, showing that if intentionality is well understood in representational terms, then so can be the phenomena of consciousness, in whichever sense of that fraught term.

We can admit that the action of representations expresses inherent intentionality, thereby explaining consciousness (1). Today's AIs, mostly founded on the machine learning approach and its neural network frameworks, are capable of managing symbols, which means generalization and classification, and which leads to linguistics, as well as, restricted to their programmed duties, a form of cause-and-effect chain elaboration. In these terms, today's AI is capable of prefixed, somewhat specific, intentionality, but, unless programmed for it, it still lacks self-perception.
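'Generalization and classification' here means learning a rule from labeled examples rather than having it hard-coded. A minimal illustration: a single perceptron, the simplest neural unit, learning the logical AND rule from examples, in pure Python and with no claim beyond illustration.

```python
# A single perceptron learns a classification rule from labeled examples
# instead of being given it explicitly. Purely illustrative, not a real AI.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                      # a few passes over the data
    for (x1, x2), label in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred               # learn from each mistake
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# The learned weights now classify any input of the same form.
print(1 if w[0] * 1 + w[1] * 1 + b > 0 else 0)  # -> 1
```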

In our research, we can admit that AIs may express intentionality by imitation, and so perform causality-driven tasks, but as long as they lack self-perception, meaning Leibniz's philosophical apperception, the process of understanding oneself as something perceived, they cannot match the wider and deeper structure of self-awareness typical, for example, of some living beings.

But we can nonetheless teach an AI to simulate specific effects of self-awareness to such a point of perfection that it no longer matters what the mechanics are, or what their nature is.

Today's technology seems to have limits, and apperception is still quite far from happening in machines: as long as they do not reproduce thoughts, they merely imitate their outcomes, however similar the result. Intentionality cannot be a serious criterion for evaluating how dangerous AI could be, but it is surely a solid one for understanding how dangerous humans can be to themselves.

FOOTNOTES

(1) (A) Conscious awareness of one's own mental states, and "conscious states" in the particular sense of states whose subjects are aware of being in them. (B) Introspection and one's privileged access to the internal character of one's experience itself. (C) Being in a sensory state that has a distinctive qualitative property, such as the color one experiences in having a visual experience, or the timbre of a heard sound. (D) The phenomenal matter of "what it's like" for the subject to be in a particular mental state, especially what it is like for that subject to experience a particular qualitative property as in (C). Excerpt from https://plato.stanford.edu/entries/consciousness/#CreCon

BIBLIOGRAPHY

Isaac Asimov, I, Robot, Gnome Press, 1950

Daniel C. Dennett, Brainchildren, Penguin Books, 1998

George F. Luger (editor), Computation & Intelligence, American Association for Artificial Intelligence / MIT Press, 1995
