Artificial Intelligence ethics by design: Intentionality and Autonomy.

COMMON GROUND: Artist Sougwen Chung works in her Brooklyn studio. Celeste Sloman/For The Washington Post

AI was conceived as a computation that independently learns how to perform better and better. In a sense, intelligence became artificial when the capability of self-improvement was added to a processing machine’s feature list.

Artificiality is a characteristic of objects contrived by human action: an intent – or an accident – that produces something which, with those very characteristics, didn’t exist before. In this sense, we can consider ‘natural’ something that can exist independently of human beings, while ‘artificial’ is something that cannot. What is so interesting about this matter is that the degrees of autonomy humans are granting machines promise a day when it won’t be so easy to distinguish the natural from the artificial in that sense.

The learning capability – which triggers autonomy – drives machines to grow their influence on people’s experiences to the point that they have begun to deeply influence free will, steering the most important social and psychological human process: decision making. We face AI-driven experiences coming to rule our range of choices as we grow ever more immersed in the networked, digitally driven economic sphere of the internet.

We use AI to manage the availability of resources in supply chains and logistics. AI is used to match individuals’ behavior to online advertising in marketing, and it has even been adopted to judge reliability in credit management in banking. AI plays a crucial role in life-and-death situations: it is promptly efficient in providing assistance in medical emergencies, as it is in clinical diagnosis. AI has even been weaponized for use in war. AI is adopted to gauge the uncertainty of the vote in political elections, in order to influence them. The info-bubble social media have built around users has been largely powered by AIs.

BEYOND GOOD AND EVIL

Science does not consider artificial intelligence a threat in itself: the menace hides in the implications inherent in its autonomy. Nuclear technology is not an issue in itself either; the risk arises in its application. But unlike nuclear technology, AI is capable of taking action in our world autonomously. We are meeting a non-human autonomous intelligence that is artificial and so relevant that, applied under certain conditions, it makes us humans depend on it.

AI as a tool influences the choices and experiences of billions of people these days. In its various forms and applications, its autonomy spans a wide spread of different intentions and conditions, most of which we are not aware of. How might we qualify its presence in our lives as a simple technical device – a tool – when it is endowed with such independence?

The ethical question, in this case, rests on how firmly the chain of events that springs from AI-based decisions is kept under control – or, as they say, human-centered. I call an ethically balanced AI a non-hostile autonomous agent. Philosophers have struggled with the ‘cognitive wheels’ argument and the frame problem in order to define the reach – and the limits – of the trust we are supposed to grant such non-hostile artificial entities.

Faced with a danger to a human, an AI – as a human-centered tool – has to evaluate all the possible implications of an action meant to fix the danger, whether immediate or descending from the flux of cause and effect sprung from the first event. The AI might act to solve the immediate danger, but in doing so it might cause collateral damage as well. This is the limit of human-centered AI at this point: is AI capable of completing that evaluation in time to be effective? And even if it can, how could AI establish priorities among different options regarding the safety of a human?
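To make the time constraint concrete, here is a toy back-of-the-envelope sketch (the branching factor is an arbitrary assumption, not a measured figure): the number of cause-effect chains a naive evaluator must trace grows exponentially with the depth of the look-ahead, which is why ‘evaluating all the possible implications’ collides with the need to act in time.

```python
# Toy illustration only: the branching factor is an arbitrary assumption.
# It shows why exhaustively tracing every cause-effect chain collides
# with the need to act in time.

BRANCHING = 6  # assumed number of follow-on effects per event

def chains_to_evaluate(depth: int) -> int:
    """Cause-effect chains a naive evaluator must consider at a given depth."""
    return BRANCHING ** depth

for depth in range(1, 9):
    print(f"look-ahead depth {depth}: {chains_to_evaluate(depth):>10,} chains")
```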

From this perspective, AI is at this moment an imperfect tool, even though science is resorting to innovative and astounding technologies – quantum computing, for example – to overcome its hurdles.

At this point in the history of artificial intelligence, we can focus on comprehending the reach of AI’s autonomy, so as to take the best advantage of it without having to face AI as an unpredictably hostile technology.

AUTONOMY

We classify AI autonomy in several levels. As the classification of autonomous driving shows, the design guidelines of AI are expressed as algorithms and processed as mathematics, but they are created by semiotics, linguistics, and philosophy.

But it is quite clear that while all those approaches are instrumental in building a business-performance-focused AI, there is only one discipline for designing a performing non-hostile AI: ethics. For example, we have exceptionally neat evidence of the power of ethics in AI design when it is applied to autonomous vehicles, defining the classes of interdependence between the AI and the driver:

“If a vehicle has Level 0, Level 1, or Level 2 driver support systems, an active and engaged driver is required. She is always responsible for the vehicle’s operation, must supervise the technology at all times, and must take complete control of the vehicle when necessary.

In the future, if a vehicle has Level 3, Level 4, or Level 5 automated driving systems, the technology takes complete control of the driving without human supervision. However, with Level 3, if the vehicle alerts the driver and requests she takes control of the vehicle, she must be prepared and able to do so.”

Vehicles represent a potential threat to pedestrians, but also to the driver and the travelers on board. Ethics guides the definition of a ‘responsibility balance’ shared between the capability of the autonomous AI and the vigilant attention of the driver. When the AI performs at the highest level of efficiency, human attention is less needed, or not needed at all. Conversely, when the AI is only able to reduce risks slightly, the driver assumes the major responsibility for risk management. There is a sort of ‘net-zero’ risk balance that represents the perfect condition of ethical AI performance: this approach assigns the AI autonomy in risk management in proportion to an overall ‘safety guarantee’ shared with humans. This is a fundamental guideline for designing a non-hostile AI.
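As a minimal sketch of this ‘net-zero’ balance – assuming, purely for illustration, SAE-style levels 0 to 5 and hypothetical capability figures – the idea reduces to a complementary split: the more capable the automation, the less vigilance is demanded of the human, while the two shares always sum to full coverage of the risk.

```python
# Purely illustrative: SAE-style levels 0-5 with hypothetical capability
# figures. The 'net-zero' balance means automation share + driver share
# always equals 1.0, so risk management is fully covered at every level.

AUTOMATION_SHARE = {
    0: 0.0,  # no driver support: the human carries all responsibility
    1: 0.2,  # driver assistance
    2: 0.4,  # partial automation: the driver must supervise at all times
    3: 0.7,  # conditional automation: the driver must be ready to take over
    4: 0.9,  # high automation within a defined operating domain
    5: 1.0,  # full automation: no human supervision required
}

def driver_share(level: int) -> float:
    """Share of risk management left to the human driver at a given level."""
    return 1.0 - AUTOMATION_SHARE[level]

for level in sorted(AUTOMATION_SHARE):
    print(f"Level {level}: driver carries {driver_share(level):.0%} of the risk")
```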

Isaac Asimov’s Laws of Robotics are a clear example of the responsibility of algorithm designers to prevent the onset of potential or actual harm. The Laws are a logical sequence – the primeval structure of an algorithm:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added the Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
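Read as an algorithm, the Laws form an ordered chain of priorities in which each earlier Law dominates the later ones. A minimal sketch of that lexicographic structure – with hypothetical boolean flags standing in for judgments (‘does this injure a human?’) that are, in reality, the hard part – could look like this:

```python
from dataclasses import dataclass

# Purely illustrative: the boolean flags are hypothetical stand-ins for
# judgments that a real robot would have to derive from the world itself.

@dataclass
class Action:
    injures_human: bool = False         # First Law (includes harm by inaction)
    disobeys_human_order: bool = False  # Second Law
    endangers_robot: bool = False       # Third Law

def violation_rank(a: Action) -> tuple:
    """Lexicographic severity: a First Law violation outweighs any Second
    Law violation, which in turn outweighs any Third Law violation."""
    return (a.injures_human, a.disobeys_human_order, a.endangers_robot)

def choose(options: list) -> Action:
    """Pick the option that violates the highest-priority Law the least."""
    return min(options, key=violation_rank)

# A robot ordered to harm a human prefers disobedience (a Second Law
# violation) over harm (a First Law violation).
obey = Action(injures_human=True)
refuse = Action(disobeys_human_order=True)
print(choose([obey, refuse]) is refuse)  # True
```

Note that everything difficult hides inside those flags: deciding whether an action ‘injures a human’ is exactly the frame problem discussed above.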

This approach is clear: the only way to have non-hostile AIs is to make them so. If we don’t, AI is inevitably destined to perform as a hostile entity: its capacity for autonomy requires active ethical engineering.

AI as a tool performs actions at different degrees of independence, and you cannot tell them from a human’s actions unless you, the human, perceive the differences between them and the actions a human herself would perform. What fascinates me about Alan Turing’s imitation game is how it blurs the difference between a fact and its simulation. One of its interpretations postulates that if you can simulate reality well enough, that simulation becomes reality.

So the worst question we may ask is: ‘Is AI capable of thinking?’ Maybe we ought to opt for the more relevant one: ‘Is AI capable of self-awareness?’

INTENTIONALITY

It is crucial to understand whether artificial intelligence is capable of intentionality. Many consider intentionality the primeval evidence of consciousness. While the definition of consciousness is subject to many different philosophical understandings – none of which we can consider the true explanation of it – there is a representationalist theory of consciousness which extends the treatment of intentionality to that of consciousness, showing that if intentionality is well understood in representational terms, then so can be the phenomena of consciousness, in whichever sense of that fraught term.

We can admit that the action of representations expresses inherent intentionality, thus explaining consciousness (1). Today’s AIs – mostly founded on the machine learning approach, which relies on neural network frameworks – are capable of managing symbols (which means the ability to generalize and classify, which leads to linguistics) and, restricted to their programmed duties, of a form of cause-effect chain elaboration. In these terms, today’s AI is capable of a prefixed – somehow specific – intentionality, but – unless programmed to – it still lacks self-perception.
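As a minimal, purely illustrative sketch of what ‘generalization and classification’ amount to in practice – a toy perceptron written from scratch, not any particular production system – a machine can learn a symbolic distinction from labelled examples and apply it to inputs it has never seen:

```python
# Toy perceptron, for illustration only: it learns a 'small vs. big'
# rule from labelled examples and generalizes it to unseen values.

def train(xs, ys, epochs=50, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = 1 if w * x + b > 0 else 0
            err = y - pred        # classic perceptron update rule
            w += lr * err * x
            b += lr * err
    return w, b

# Training 'symbols': sizes labelled small (0) or big (1).
xs = [1, 2, 3, 8, 9, 10]
ys = [0, 0, 0, 1, 1, 1]
w, b = train(xs, ys)

classify = lambda x: 1 if w * x + b > 0 else 0
print(classify(7))    # unseen input, classified as big   -> 1
print(classify(2.5))  # unseen input, classified as small -> 0
```

The intentionality here is entirely prefixed: the machine pursues the one classification it was trained for, with no representation of itself doing the classifying.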

In our research, we can admit that AIs may express intentionality by imitation – thus performing causality-driven tasks – but as long as they lack self-perception – meaning Leibniz’s philosophical apperception, the process of understanding oneself as something perceived – they cannot match the wider and deeper structure of self-awareness typical, for example, of some living beings.

But we can nonetheless teach AI to simulate some specific effects of self-awareness to such a point of perfection that it no longer matters what the mechanics are, or what their nature is.

Today’s technology seems to have limits, and apperception is still quite far from happening in machines: as long as they do not reproduce thoughts, they merely imitate their outcomes, however closely. Intentionality cannot be a serious term for evaluating how dangerous AI could be, but it is surely a solid term for understanding how dangerous humans can be to themselves.

FOOTNOTES

(1) (A) Conscious awareness of one’s own mental states, and “conscious states” in the particular sense of states whose subjects are aware of being in them. (B) Introspection and one’s privileged access to the internal character of one’s experience itself. (C) Being in a sensory state that has a distinctive qualitative property, such as the color one experiences in having a visual experience, or the timbre of a heard sound. (D) The phenomenal matter of “what it’s like” for the subject to be in a particular mental state, especially what it is like for that subject to experience a particular qualitative property as in (C). Excerpt from https://plato.stanford.edu/entries/consciousness/#CreCon

BIBLIOGRAPHY

Isaac Asimov, I, Robot, Gnome Press, 1950

Daniel C. Dennett, Brainchildren, Penguin Books, 1998

George F. Luger (editor), Computation & Intelligence, American Association for Artificial Intelligence / MIT Press, 1995

In praise of the unknown unknowns of transcendence beyond the digital world.

I was lying in bed, sleepless, late at night at the Shangri-La Hotel in Singapore, Tower Wing. At that point I felt so frustrated. I had just realized that I could not solve everything with an app.

But let me start from the beginning. It was 2009.

Well, that meant a lot to me. From the advertising I clearly understood that ‘everything was already associated with an app’. The statement implied that apps would offer a different way to experience and solve every possible event occurring in my life. The voice – so promptly – implied that I could have every need solved through an app, even needs whose existence I could not imagine.

Totally engaged by the promise, I could not wait to enjoy this new superpower. Strangely, I was more attracted by the potential solution of unknown needs promised by this app superpower than by the actual satisfaction of known needs. In that sense, I felt invulnerable against perils I could not even imagine, in an omnipotent condition of pre-satisfied needs for a lifetime. I could do everything possible. In a life where every need can be satisfied, there is an existence of no needs: ultimately boring, but exalting at the same time, right?

I was very excited.

But that very night, Donald Rumsfeld – the former United States Secretary of Defense – appeared on my hotel room’s TV with his most famous press conference, giving me a different perspective on my own capability to identify and understand what is possible – and what is not.

In that four-quadrant map of reality, built around what is known and what is unknown – which exposed my poor capability to understand what lies beyond the obvious – I realized that apps were both a superpower and a trap.

Only because the space of unknown unknowns exists can I define – and overcome – the space of bias: the unknown knowns. Only if I can accept the existence of an unattainable space of unknown mystery can I bear the material limitations defined by the known knowns, and only then can I enjoy the most exalting ludic experience of what-if, might-be research: the exploration of alternative occurrences offered by the space of known unknowns.

My all-powerful condition was reduced to null: if there is an app for everything, there is one to explore the unknowns, transforming unknown unknowns into known unknowns and then into known knowns. By destroying mystery, this god-like aura would destroy all the need I have to understand. It would automatically seek and destroy all the motivation to move on, to explore, to research, to meet new unknowns.

Differently from Zen, this materialistic way of being does not grant bliss. It brings you into a state of perpetual, instantly solved struggles, delivering understanding through instant explanations. Needs will come, but they will be solved immediately with the use of material enablers: apps. In the culture of transcendence, needs are part of existence like everything else; in the transcendent state, you are deeply beyond the need to understand. The app-superpowered existence, instead, gives you a tool to overcome the need to understand by avoiding it, not by contemplating it. When you can simply avoid understanding because an app is doing it in your place, you easily become a mind-slave, lashed to the idea that your freedom depends on the capability of choosing between different apps and different websites, through a search engine that could solve every need.

In other words: while intrinsically bonded to the systematic, all-powerful capability of the post-internet environment, you lose the motivation to go beyond your own existence.

Post-pandemic world, I am my shelter

January 2020. In the Chinese city of Wuhan, a new virus is first identified as the cause of a disease ranging from a bad cold to severe acute respiratory syndrome. The virus belongs to a new corona type, for which the human immune system is not prepared. Only two months later, in March 2020, the World Health Organization declares a global pandemic.

The city of Milan, Italy, is the most affected city outside China, as the plague expands fast to the rest of Europe, the Americas, and Africa. While the Chinese government locks down the whole province of Hubei, a few weeks later the Prime Minister of Italy imposes social distancing and preventive quarantine on every citizen. He is the first Western head of government to make use of restrictions on personal freedom, which become even tighter by the middle of March 2020, when Italy introduces a total shutdown inside the national borders, soon followed by other countries: Germany, Spain, and the United States first.

Milan, 22 March 2020

Tribal Principles

We could sum up that life is a quest for competencies. Evolution urges the nature of life: in the art of survival, you have to understand how to weaponize knowledge. You need to understand how competencies are tools of excellence in your environment. Competencies are more than fundamental; they are the actual survival kit.

Competencies

Competencies have social origins. They are not just a matter of individual advancement and research, as we naturally exist only as a byproduct of our group of reference. The size and the reach of the group are not based on contextual physical presence: we live in the age of the internet, and the hyper-distributed network is the social construction we refer to.

A new generation of tools to build values and competencies is needed in this age, when social values that grow in a communication space without physical connection shape how we connect physically. We are under a perpetual storm of memes, in a light-speed media environment that keeps our senses continuously turning. We need continuously new ways of learning and of understanding, so we redefine the rankings of social reputation at the fastest rate. In this climate, the compulsion to belong has grown far stronger than the compulsion to understand. We need to continuously find new ways to shift and enhance our capabilities, while the work we do becomes obsolete faster and faster, resources become available to the masses as quality falls, and geographical borders blur, if not disappear, while political-ideological borders are already broken.

What are our points of reference? Which competencies do we need to hold in order to breathe above the whirlpool?

Instincts

Consider right and wrong. In the era of perpetual stimuli and the dominion of visual language, in the virtual clash of cultures: how can we do right when it is so difficult to know how we could go wrong? It is like being in an ever-spinning tribal ritual where, when everything is impossible to understand, salvation resides in the attitudes that stem from strong loyalty to one’s own tribe or group, as it did for our ancestors.

We form our post-internet tribe first by shaping the projections to which our imagination belongs. We need to grasp the lowest instincts, as they are the strongest point of certainty of our physicality when the immaterial space is dominated by the digital flow, that is, the technological sense. Brands – luxury brands, for example – underwent this tribal revision throughout the 2000s, made of gold-toothed rappers, bad words, displays of violence, and squandered wealth.

It is the age of Tinder’s fast, casual approach to sexual encounters and of Amazon’s hyper-shopping capabilities: the age heralded by Facebook, the super-gossip engine for the masses. No wonder fake and true news collide there: social media are not supposed to spread the truth; they are there to spread consent.

Tribal principles for the internet age

The First Tribal principle is for self-commitment and self-referentiality.

The Second Tribal principle is for consent and measurement of social rankings as a bond.

The Third Tribal principle is for sensual enjoyment and consumption.

The Fourth Tribal principle is for rituality and device fetishism.

The Fifth Tribal principle is the exaltation of pain and denial of death.

The Sixth Tribal principle is the ostentation of goodness and happiness formalism.

The Seventh Tribal principle is respect for the Law as the Rule of the Crowd, and for Family Ties extended to include friends-for-life and temporary connections as well.

New tribal principles demand new tribal competencies. Memes and influencers feed the new ethnic values, as the skills of make-believe have become far more important than real reporting. Photoshop has become the most important survival tool, as our image has become more real than we are.

https://en.wikipedia.org/wiki/Filter_bubble
