A Shared Fate: Key Takeaways from the Risk and Artificial Intelligence Workshop

Centre for Strategic Futures
Dec 3, 2019 · 15 min read


By Leon Kong, Manoj Harjani and Lim Pei Shan

The Centre for Strategic Futures (CSF), the Centre for the Study of Existential Risk (CSER) and the Leverhulme Centre for the Future of Intelligence (CFI) jointly organised the Risk and Artificial Intelligence (AI) Workshop from 5–7 September 2018. The workshop brought together an international group comprising Singapore government officials and thought leaders from academia and industry for in-depth discussions about foresight, strategic risks and existential risks. The workshop also discussed opportunities and risks in AI development, present and future, as well as AI governance and the importance of international cooperation for beneficial and responsible AI.

The Risk and AI workshop was the first event that CSF co-organised with these international partners and was designed to be a meeting of minds across cultures, disciplines and sectors. The diverse profile of the participants resulted in fruitful conversations that generated new insights about risk and AI. The workshop also brought together CSF, CSER and CFI’s networks, facilitating the building of bridges between thinkers from Asia and the West. These new connections have been greatly enriching and have provided us here at the Centre with much food for thought. The following paragraphs summarise the key takeaways from the workshop.

RISK

Three tensions in risk

There are three tensions in addressing risk that need to be articulated from the start. The first tension is that every risk comes with opportunity; these upsides need to be recognised and taken advantage of. For example, ageing is a challenge faced by many countries that also presents significant opportunities for new products and services that could shape social norms and behaviour in the long run. Governments, businesses and third sector organisations may respond to and manage risk well, but may not capitalise on opportunities and potential innovations generated by risk.

The second tension is the need to balance immediate demands against longer-term concerns. Climate change, for example, is a difficult risk to address because it is a long-term issue requiring immediate responses. For such risks, it is common for people and organisations to make short-term decisions that neglect broader concerns which gain importance in the long run.

The third is the tension between strategic and operational risk, where one may be identified and mitigated at the expense of the other.

Diverse perspectives and abstract elements

Risk assessment is often unconventional and depends on the inclusion of diverse and even contrarian perspectives, as well as relevant expertise at various stages of the process. For example, risks have traditionally been dealt with at the governmental or organisational level, but digitalisation has made risk a more widely understood issue across society.

Risk assessment can therefore draw on the methods and findings of other fields, such as behavioural science, to gain greater insight into the management of risk appetite. At the same time, it is difficult to find leaders who can manage a diverse team that brings together a range of perspectives. Leaders should not be too quick to dismiss contrarian views as stupid, sinister or slothful simply because they feel uncomfortable confronting those views.

Abstract elements related to risk — including meanings, relationships, processes and experiences — also merit attention. A participant observed that an individual’s perception of identity, allegiances, commitments and motivations precedes action. However, organisational risk management often neglects such considerations. Paying more attention to the abstract elements of risk could help organisations and governments better understand why individuals often respond to risk in seemingly illogical ways. For example, the take-up rate for life insurance is much higher than for travel insurance, even though risks from taking a flight are greater.

Managing unprecedented or surprising risks

The likelihood-and-impact framework for assessing risks may fail to accurately ascertain the likelihood of unprecedented situations — such as a terrorist attack, which has not occurred in Singapore’s recent history. Nonetheless, it should not be necessary to have experienced an attack in order to predict future ones; existing capabilities such as data and AI may provide insight into unprecedented situations.

How can we put a spotlight on issues of AI risk to start conversations without causing unnecessary panic?

Another way to manage such surprises in risk assessment is over-resourcing: when allocating manpower, budget and attention, risk managers cannot operate on lean optimisation models. National risk portfolios should therefore be treated like investment portfolios, so that projects can be systematically prioritised even when there is no guarantee that the crisis situations calling for these projects will actually materialise.

Co-owning risk, neutrality and the license to operate

The public should not be perceived and treated as the weak link in relation to risk management. Instead, the goal should be to achieve informed decision-making across groups in society to encourage social resilience and co-ownership of risks. Alongside this, introducing the idea of an individual or corporate “risk quotient” similar to existing ideas of intelligence or emotional quotients could highlight the importance of understanding and dealing with risks.

In addition, it is important to be neutral in communicating and educating the public about risk. Advocacy or sensationalism leads to a loss of trust and ineffectual risk communication. One participant lamented the tendency of the media to exaggerate risks, referencing “Skynet” and “Terminator” as caricatures of the future risks of AI. He noted the challenge inherent in generating an appropriate amount of attention for the right issues without causing unnecessary panic.

Another participant referenced the notion of a “license to operate” that is implicitly granted to organisations by the public, which acts as an imperative for organisations to communicate risks to the public. The failure to communicate risks appropriately can undermine public trust in the organisation, removing this implicit license. Organisations therefore have to be in tune with public sentiments and concerns, and work actively to address and discuss them.

Communicating risk to the public

Baruch Fischhoff’s widely cited 1995 paper outlines the following developmental stages in risk communication:

A. All we have to do is get the numbers right

B. All we have to do is tell them the numbers

C. All we have to do is explain what we mean by the numbers

D. All we have to do is show them that they’ve accepted similar risks

E. All we have to do is show them that it’s a good deal for them

F. All we have to do is treat them nice

G. All we have to do is make them partners

H. All of the above [1]

Participants used this framework to illustrate the evolution of risk communication and the challenges involved in getting it right. For example, participants spoke about the “lived experience” of National Service as a means to give everyone a stake in understanding and dealing with the risk of military conflict. One participant said it was important for such issues to “touch the lives of people” in some way, in order for the public to develop a deep understanding and appreciation of managing these risks. Giving the public a personal stake in risk is a practical example of achieving Stage G in Fischhoff’s framework.

Attention to “risk as a feeling”, with an awareness of the affect heuristic, is also a significant component of risk communication.[2] A participant spoke about the “arithmetic of compassion” and how the same statistics can have substantially different effects on the same audience depending on how they are presented. For example, a “10% probability” is not as convincing to an individual as a “one in 10 chance”. A call to donate $300,000 to save one life attracts more donations than a call to donate the same amount to save eight lives. Moving beyond the individual to the group, therefore, tends to weaken the effect of compassion.

There was much discussion about the power of gaming as a simulation and experiential tool to communicate risk. While war games and simulations to deal with crises or worst-case scenarios are more commonplace, games can also effectively communicate more abstract concepts such as complexity to the public. One participant cited how a variation of rock-paper-scissors was used to teach individuals about complexity and the importance of additional variables to enhance password strength.
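The point about “additional variables” and password strength can be made concrete with a simple back-of-envelope calculation. The sketch below is not the game described at the workshop (which was not detailed); it is a minimal illustration, assuming each character is chosen uniformly at random, of how the space an attacker must search grows multiplicatively as character classes and length are added:

# Illustrative sketch (not the workshop's game): how adding character classes
# ("variables") and length multiplies the number of possible passwords.
import math

def search_space(alphabet_size: int, length: int) -> int:
    # Number of possible passwords of the given length over the given alphabet.
    return alphabet_size ** length

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Password entropy in bits, assuming each character is chosen uniformly.
    return length * math.log2(alphabet_size)

scenarios = {
    "8 lowercase letters": (26, 8),
    "8 letters and digits": (36, 8),
    "12 mixed case, digits and symbols": (94, 12),
}
for label, (alphabet, length) in scenarios.items():
    print(f"{label}: ~{entropy_bits(alphabet, length):.1f} bits, "
          f"{search_space(alphabet, length):.2e} combinations")

Each additional symbol class or extra character multiplies the search space rather than adding to it, which is exactly the kind of compounding complexity that a simple game can make tangible.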

Communicating risk internally

While identifying the right risks in an organisation is an important concern, a more fundamental question to ask is whether communicating such risks actually leads to any action and change. A participant noted that the latter involves understanding the mind-sets and assumptions of decision-makers. Beyond presenting the facts about a risk, it is also important to uncover preconceptions and build conversations around identified risks in order to embed new ideas and ways of thinking about the issue.

The need to deal with pushback and defensiveness from internal stakeholders regarding the credibility and urgency of identified risks is a significant challenge. In particular, the notion of “deep regret” was raised by a participant in a discussion about the hindrances and potential sources of failure to identify risks. “Deep regret” refers to being so crippled with anxiety that one is unable to think beyond conventional ways in which the future can develop.

By removing humans from the knowledge-creation process, AI is fundamentally altering the way we experience “knowing” things.

ARTIFICIAL INTELLIGENCE

The future of intelligence

AI already affects every aspect of our lives, from smartphones to healthcare and transportation, and its pervasiveness will only increase in the future. Participants generally believed that this pervasiveness will change fundamental aspects of humanity and thus effect significant changes across society. When speaking about AI’s effect on individuals and society, however, they tended to focus narrowly on present-day AI technologies.

One way that AI is changing humanity is through its role in the production of knowledge. AI removes humans from the knowledge-creation process, fundamentally altering the way we experience “knowing” things. In the past, humans explored various phenomena through the “human experience” of inductive and deductive reasoning. Today, however, the process of seeking knowledge involves a far greater reliance on technology, especially on AI with algorithms that we may not fully understand. Taken to the logical extreme, this threatens to turn humans from participants in the knowledge-creation process into mere spectators of it.

Following from this, the development of AI (and digital technologies more generally) may prevent humans from developing certain intellectual capabilities. Studies have shown that people consume literature on digital screens differently than in print and that reading off a screen may inhibit “type 2” or slow, deliberate, conscious thinking.[3] If digital reading is in fact inferior to analogue reading, Singapore has particular cause for concern, as our online borrowing rates are increasing while physical loan rates are falling.[4]

More or less diversity? More or less equality?

Participants were divided on whether AI and digital technology are eroding diversity in societies. Some said that AI and digital technology allow for greater diversity, given that AI enables a greater personalisation of services. However, another participant observed that on a deeper level, these technologies are based on mostly uniform operating systems and processes, which results in an increasingly ubiquitous digital experience. With the pervasive adoption of such technologies, there could be a real risk of the “homogenisation of thought.”

AI’s erosion of human capabilities and diversity could make society less resilient. If it is true that “type 2” thinking is impaired by reading exclusively digital literature, then one day, when the entire world is made up of “digital natives”, countries may suffer from a structural lack of “type 2” thinking, with many potential unforeseen consequences. A participant observed that diversity in a society allows for adaptation, in the same way genetic diversity in a species promotes evolutionary fitness. Thus, reduced diversity opens up a risk of society being less able to adapt to the future.

Participants were also divided on whether increasing adoption and development of AI would lead to a democratisation of service delivery, or widen and further entrench socioeconomic inequality. One participant observed that in the banking industry, AI and data allow banks to provide services to individuals who would otherwise not have access to financing. In the same vein, another participant said that further development of AI would automate costly processes such as analysing tumour imagery, driving prices down and democratising service delivery in the developing world. However, a third participant thought that AI might result in greater socioeconomic inequality between those who were able to exploit it and those who were not.[5]

As AI removes more sources of human agency, might we lose meaning and purpose in our lives?

Social pressures resulting from resentment against AI adoption and development may lead to a backlash against AI and to social conflict more generally. Potential sources of discontent are varied, and could include the aforementioned widening of inequality due to unequal adoption of AI. A participant suggested that a more interesting source of discontent could arise from an increasing sense of “disenchantment” with life as human functions become ever more driven by technology. He said this could result in a sense of aimlessness and alienation. He observed that many youths joining terrorist groups are in fact seeking a sense of meaning in their lives, which they cannot find in consumeristic secular societies seemingly unable to provide their members with a higher purpose. If AI increasingly removes sources of human agency, it may exacerbate this sense of alienation.

Geo-economics and data protection

Instead of thinking about the socio-economic impact of AI in terms of haves and have-nots, one participant suggested thinking of it in terms of exports and imports. Robust AI markets, such as those in the West and in China, may export their AI to weaker markets, creating a technological dependency of those weaker markets on the stronger ones, leading to a sort of “tech colonialism”. Another participant responded that the ship has sailed on this issue: the US and China are already the chief exporters of AI technology and their dominance will almost certainly persist. The key question is not around which country would drive the development of AI in the future, but around how countries can guard and maintain control over their data, which is critical to the development of AI algorithms.

Conversations about the ethics of AI currently lag far behind its technical development.

The ethical and the technical

Given the wide-ranging social implications of AI, it may be desirable to consciously embed ethical values into AI and technology more generally. This rests on the assumption that the way AI and technology are designed can ameliorate their negative social consequences. This requires tech companies to be more involved in AI ethics conversations.

One AI practitioner, however, observed that attendees at the workshop were not representative of those developing Machine Learning Systems (MLS). MLS developers have technical concerns and do not give much thought to wider questions of philosophy and ethics. As a result, he doubted that MLS developers are consciously designing technology with a view to its wider social ramifications. Thus, the conversation around AI ethics lags far behind the technical development of AI. As a start, he thought it might help if the social concerns and philosophical positions outlined at the workshop could be translated into practical, technical guides for AI developers.[6]

Teaching AI to serve humans

Discussion about the long-term trajectory of AI revolved around Artificial General Intelligence (AGI). In particular, participants were preoccupied with the extent to which AGI would be human-like, and the implications this would have on our treatment of AGI, including broader safety considerations.

For AI to be most useful to humans, it has to be human-centric, understanding human needs and motivations, social and cultural norms, and common sense. Some participants thought that for this to happen, AI needs to learn like humans do, which is not the case at present.[7] AI today learns from large data sets, is unable to learn “on the fly” and finds it difficult to adapt to a situation in real time. While AI today is capable of a range of functions, such as emotion recognition, it is incapable of understanding emotions the way humans do. Additionally, developing AGI without a sense of self is a potentially dangerous approach. AGI requires self-recognition, motivations and values in order to empathise with humans.

One panellist opined that to create AGI with human-like intelligence, we need to start by understanding the human brain before applying that understanding to AI. True human “intelligence” is not found in current data-based intelligence technology, which relies on big data analytics. Big data analytics simply fit data to expected outputs; they do not reveal the underlying principles of intelligence that are important for rectifying the current deficiencies of AI.

When discussing humanising AGI, we need principles to guide the way we relate to AGI in order to prevent it from becoming a social risk. One participant observed that training AGI entities to be benign to humans must involve more than providing them with goals, which would simply turn AGI entities into “savants” that uncompromisingly put all their energy into tasks. He thought it necessary to raise AGI entities much as we raise children, allowing a similar amount of time (about 15 years) for the entity to learn and grow before becoming autonomous. He also noted that autonomous beings would have to be treated as moral agents and could not therefore be treated as “slaves”. This poses a conundrum, however, given that such entities would presumably be created to serve humans.

Domestic governance and self-regulation

Ethical principles vary regionally and are heavily influenced by history and culture. Consequently, different national AI governance frameworks have different focal areas. In some countries, the focus is on human rights. In Singapore, the Infocomm Media Development Authority’s (IMDA) model AI governance framework emphasises economic considerations. The economic value of big data and AI is in the opportunity they provide for more personalised services to consumers. However, if improperly handled, implementing AI and big data could lead to consumer pushback, which may hinder further adoption and development. Therefore, it is important to establish platforms that encourage dialogue among tech providers, users and consumers to determine the eventual form of an AI governance framework.

Even when emphasising economic considerations, different commercial sectors have different priorities and will therefore prioritise the same set of ethical principles differently in ways that best fit their respective business needs. To accommodate this diversity, guidance on AI ethical standards should be kept simple enough to capture the essence of existing risk management and compliance. Additionally, many companies already have internal ethical standards that go beyond what is required by regulations. Instead of developing new ethical standards for AI, regulators could make sure that AI complies with companies’ existing internal ethical standards. Ideally, AI ethics would be largely self-regulatory.

Global governance challenges

The growing rhetoric of a race for national strategic dominance in AI poses significant risks. Such rhetoric may incentivise corner-cutting on safety and governance, and dampen the kind of thoughtful and multi-stakeholder international collaboration required to achieve broadly beneficial AI. Additionally, a “race for technological advantage” could increase the risk of competition in AI causing real conflict, as this may encourage countries to see competitors as threats or even enemies. Emphasising the global benefits of AI and of international cooperation in AI development can help to counteract this rhetoric.

The goal of such cooperation is to establish international AI norms to mitigate these risks. In general, shared understandings of right and wrong conduct, or norms, are established over time by those who participate in shared practices. These norms are principles that represent collective expectations that are both widely accepted and internalised by an international community. Proposing general principles is important, but norms cannot be imposed and will only result from shared interests, deliberation, consensus, the evolution of a common language and the development of a collective sense of responsibility.

Establishing international AI norms requires a shared language, shared risks and ultimately, a shared fate.

Establishing international norms will be challenging, given the plurality of cultures and languages in the world. Some participants pointed out that it is not impossible, as evinced by a range of existing international norms. Others, however, felt sceptical about the notion of successful international cooperation. One participant noted that there is no international agreement on values related to a range of issues, from human to animal rights, and we are unlikely to reach a consensus in the foreseeable future. Yet another participant observed that “East” and “West” often do not see eye to eye, especially when it comes to cyber conflict regulation. He observed that many in the West erroneously believe that governments in the East utilise technology to control their populations. Nonetheless, he said Singapore could and should seek to facilitate the emergence of a distinct viewpoint on AI cooperation amidst the clash in Eastern and Western discourse.

In general, participants agreed that we should look to existing and successful areas of international cooperation in risk management in order to develop solutions for AI risk. For instance, a participant mentioned that there is successful international cooperation on containing pandemics. This involves “creating communities of shared risks and transiting them to communities of shared fate.” Appreciation of risk alone is insufficient to bind a community; an effective response can only be achieved through “staying together,” or creating this community of shared fate.

1. Baruch Fischhoff, “Risk Perception and Communication Unplugged: Twenty Years of Process”, Risk Analysis 15.2 (1995), accessed 17 June 2019, https://www.cmu.edu/epp/people/faculty/research/Fischhoff-RAUnplugged-RA.pdf

2. The affect heuristic is a mental shortcut (heuristic) in which emotions (affect) are used to make decisions quickly.

3. Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011); see also: Maryanne Wolf, “Skim Reading is the New Normal. The Effect on Society is Profound”, The Guardian, 25 Aug 2018, accessed 17 June 2019, https://www.theguardian.com/commentisfree/2018/aug/25/skim-reading-new-normal-maryanne-wolf

4. Annabeth Leow, “Physical Library Loans Fall as E-Books Gain Popularity”, The Straits Times, 11 May 2017, accessed 17 June 2019, https://www.straitstimes.com/singapore/physical-library-loans-fall-as-e-books-gain-popularity

5. See: “Say Hello to the New Work Order”, pp. 45–52

6. Personal Data Protection Commission, “Discussion Paper on Artificial Intelligence (AI) and Personal Data — Fostering Responsible Development and Adoption of AI”, accessed 11 June 2019, https://www.pdpc.gov.sg/Resources/Discussion-Paper-on-AI-and-Personal-Data

7. However, an opposing view states that allowing AI to model human behaviour might enable AI to more easily manipulate humans.

Lim Pei Shan is the Head of the Centre for Strategic Futures.

Leon Kong was Senior Strategist and Manoj Harjani was Lead Strategist at the Centre for Strategic Futures.

The views expressed in this blog are those of the authors and do not reflect the official position of the Centre for Strategic Futures or any agency of the Government of Singapore.
