MINDS AND MACHINES

Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach
de Sio FS, Mecacci G, Calvert S, Heikoop D, Hagenzieker M and van Arem B
The paper presents a framework to realise "meaningful human control" over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project "Meaningful Human Control over Automated Driving Systems", led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not hardware and software and their algorithms, should remain ultimately (though not necessarily directly) in control of, and thus morally responsible for, the potentially dangerous operation of driving in mixed traffic. We propose that an Automated Driving System is under meaningful human control if it behaves according to the relevant reasons of the relevant human actors (tracking), and if any potentially dangerous event can be related to a human actor (tracing). We operationalise the requirements for meaningful human control through multidisciplinary work in philosophy, behavioural psychology and traffic engineering. The tracking condition is operationalised via a proximal scale of reasons and the tracing condition via an evaluation cascade table. We review the implications and requirements for the behaviour and skills of human actors, in particular in relation to supervisory control and driver education. We show how the evaluation cascade table can be applied in concrete engineering use cases, in combination with the definition of core components, to expose deficiencies in traceability and thereby avoid so-called responsibility gaps. Future research directions are proposed to expand the philosophical framework and use cases, supervisory control and driver education, real-world pilots, and institutional embedding.
The computational origin of representation
Piantadosi ST
Each of our theories of mental representation provides some insight into how the mind works. However, these insights often seem incompatible, as the debates between symbolic, dynamical, emergentist, sub-symbolic, and grounded approaches to cognition attest. Mental representations, whatever they are, must share many features with each of our theories of representation, and yet there are few hypotheses about how a synthesis could be possible. Here, I develop a theory of the underpinnings of symbolic cognition that shows how sub-symbolic dynamics may give rise to higher-level cognitive representations of structures, systems of knowledge, and algorithmic processes. This theory implements a version of conceptual role semantics by positing an internal universal representation language in which learners may create mental models to capture dynamics they observe in the world. The theory formalizes one account of how truly novel conceptual content may arise, allowing us to explain how even elementary logical and computational operations may be learned from a more primitive basis. I provide an implementation that learns to represent a variety of structures, including logic, number, kinship trees, regular languages, context-free languages, domains of theories like magnetism, dominance hierarchies, list structures, quantification, and computational primitives like repetition, reversal, and recursion. This account is based on simple discrete dynamical processes that could be implemented in a variety of different physical or biological systems. In particular, I describe how the required dynamics can be directly implemented in a connectionist framework. The resulting theory provides an "assembly language" for cognition, where high-level theories of symbolic computation can be implemented in simple dynamics that themselves could be encoded in biologically plausible systems.
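As a minimal sketch of what such a universal representation language can look like, the following assumes S/K combinatory logic as the internal language; the reducer and the selector-style boolean encoding are illustrative choices, not details taken from the paper. Two rewrite rules over structureless primitives already suffice for elementary logical operations to emerge.

```python
# Terms are the atoms 'S' and 'K', or 2-tuples (f, x) meaning "apply f to x".

def step(t):
    """Perform one normal-order reduction step, if any rule applies."""
    if not isinstance(t, tuple):
        return t
    f, x = t
    if isinstance(f, tuple):
        if f[0] == 'K':                          # (K a) x  ->  a
            return f[1]
        if isinstance(f[0], tuple) and f[0][0] == 'S':
            a, b = f[0][1], f[1]                 # ((S a) b) x  ->  (a x) (b x)
            return ((a, x), (b, x))
    nf = step(f)
    return (nf, x) if nf != f else (f, step(x))

def normalize(t, max_steps=1000):
    """Reduce a term until no rule applies (or the step budget runs out)."""
    for _ in range(max_steps):
        nt = step(t)
        if nt == t:
            return t
        t = nt
    return t

# Booleans as selectors: TRUE x y -> x, FALSE x y -> y (illustrative encoding).
TRUE, FALSE = 'K', ('S', 'K')
NOT_TRUE = ((TRUE, FALSE), TRUE)                 # TRUE FALSE TRUE  -> FALSE
NOT_FALSE = ((FALSE, FALSE), TRUE)               # FALSE FALSE TRUE -> TRUE
assert normalize(NOT_TRUE) == FALSE
assert normalize(NOT_FALSE) == TRUE
```

Here TRUE and FALSE are just combinators that select their first or second argument, so negation is definable without any built-in notion of truth; richer structures such as numbers, lists and recursion can be encoded in the same style.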
The Governance of Unmanned Aircraft Systems (UAS): Aviation Law, Human Rights, and the Free Movement of Data in the EU
Pagallo U and Bassi E
The paper deals with the governance of Unmanned Aircraft Systems (UAS) in European law. Three different kinds of balance have been struck between multiple regulatory systems, depending on which sector of UAS governance is taken into account. The first model regards the field of civil aviation law and its European Union (EU) regulation: the model looks like a traditional mix of top-down regulation and soft law. The second model concerns the EU's general data protection law, the GDPR, which has set up a co-regulatory framework, summed up by the principle of accountability, that applies also, but not only, in the field of drones. The third model of governance has been adopted by the EU through methods of legal experimentation and coordination mechanisms for UAS. The overall aim of the paper is to elucidate the ways in which these three models interact, insisting on differences and similarities with other technologies (e.g. self-driving cars) and other legal systems (e.g. the US).
It's Friendship, Jim, but Not as We Know It: A Degrees-of-Friendship View of Human-Robot Friendships
Ryland H
This article argues in defence of human-robot friendship. I begin by outlining the standard Aristotelian view of friendship, according to which there are certain necessary conditions which x must meet in order to 'be a friend'. I explain how the current literature typically uses this Aristotelian view to object to human-robot friendships on theoretical and ethical grounds. Theoretically, a robot cannot be our friend because it cannot meet the requisite necessary conditions for friendship. Ethically, human-robot friendships are wrong because they are deceptive (the robot does not meet the conditions for being a friend), and could also make it more likely that we will favour 'perfect' robots, and disrespect, exploit, or exclude other human beings. To argue against the above position, I begin by outlining and assessing current attempts to reject the theoretical argument that we cannot befriend robots. I argue that the current attempts are problematic, and do little to support the claim that we can be friends with robots now (rather than at some future time). I then use the standard Aristotelian view as a touchstone to develop a new view. On my view, it is theoretically possible for humans to have some degree of friendship with social robots. I explain how my view avoids ethical concerns about human-robot friendships being deceptive, and/or leading to the disrespect, exploitation, or exclusion of other human beings.
Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots
Umbrello S, Capasso M, Balistreri M, Pirni A and Merenda F
Healthcare is becoming increasingly automated with the development and deployment of care robots. There are many benefits to care robots, but they also pose many challenging ethical issues. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. Using the value sensitive design (VSD) approach to technology design, this paper extends its application to care robots by integrating the values of care, values that are specific to AI, and higher-scale values such as the United Nations Sustainable Development Goals (SDGs). The ethical issues specific to care robots for the elderly are discussed at length alongside examples of specific design requirements that work to ameliorate these ethical concerns.
Correction to: Analysing the Combined Health, Social and Economic Impacts of the Corona Virus Pandemic Using Agent-Based Social Simulation
Dignum F, Dignum V, Davidsson P, Ghorbani A, van der Hurk M, Jensen M, Kammler C, Lorig F, Ludescher LG, Melchior A, Mellema R, Pastrav C, Vanhee L and Verhagen H
This corrects the article DOI: 10.1007/s11023-020-09527-6.
Limits of Optimization
Carissimo C and Korecki M
Optimization is about finding the best available object with respect to an objective function. Mathematics and the quantitative sciences have been highly successful in formulating problems as optimization problems, and in constructing clever processes that find optimal objects from sets of objects. As computers have become readily available to most people, optimization and optimized processes have come to play a very broad role in society. It is not obvious, however, that the optimization processes that work for mathematics and abstract objects should be readily applied to complex and open social systems. In this paper we set forth a framework for understanding when optimization is limited, particularly for complex and open social systems.
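A toy contrast (my own illustration, with an invented reaction rule; none of this is the authors' framework) makes the worry concrete: a static argmax succeeds on a fixed objective, while in an open system where the objective responds to the chosen action no precomputed optimum stays optimal.

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 401)          # finite set of candidate objects

# Closed case: a static objective with a well-defined optimum at x = 1.
f = lambda x: -(x - 1.0) ** 2
print("static optimum:", xs[np.argmax(f(xs))])

# Open case: after each choice, the environment reacts and the peak of the
# objective moves (this particular reaction rule is invented for the toy).
target = 1.0
for round_ in range(5):
    x_star = xs[np.argmax(-(xs - target) ** 2)]   # optimize the current objective
    target = 0.5 * target + 0.5 * (x_star + 1.0)  # the world reacts to the choice
    print(f"round {round_}: chose {x_star:.2f}, optimum moved to {target:.2f}")
```

Each round's "optimal" choice shifts the objective, so the optimization process chases a moving target rather than converging on a best object.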
Intervention and Identifiability in Latent Variable Modelling
Romeijn JW and Williamson J
We consider the use of interventions for resolving a problem of unidentified statistical models. The leading examples are from latent variable modelling, an influential statistical tool in the social sciences. We first explain the problem of statistical identifiability and contrast it with the identifiability of causal models. We then draw a parallel between the latent variable models and Bayesian networks with hidden nodes. This allows us to clarify the use of interventions for dealing with unidentified statistical models. We end by discussing the philosophical and methodological import of our result.
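A standard worked example (mine, not the paper's) of how an intervention can resolve statistical unidentifiability is the rotation problem in linear factor analysis:

```latex
% Linear factor model with latent f and observed x (standard setup):
x = \Lambda f + \varepsilon, \qquad f \sim N(0, I_k), \qquad \varepsilon \sim N(0, \Psi).
% Observational data identify only the covariance of x:
\Sigma = \Lambda \Lambda^{\top} + \Psi.
% For any orthogonal matrix R (i.e. R R^{\top} = I),
(\Lambda R)(\Lambda R)^{\top} + \Psi = \Lambda \Lambda^{\top} + \Psi = \Sigma,
% so \Lambda and \Lambda R are observationally equivalent: the model is
% statistically unidentified. An intervention that sets the latent
% variable to the j-th basis vector breaks this rotational symmetry:
\mathbb{E}\,[\, x \mid do(f = e_j) \,] = \Lambda e_j,
% which recovers \Lambda column by column.
```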
Predictive Processing and the Representation Wars
Williams D
Clark has recently suggested that predictive processing advances a theory of neural function with the resources to put an ecumenical end to the "representation wars" of recent cognitive science. In this paper I defend and develop this suggestion. First, I broaden the representation wars to include three foundational challenges to representational cognitive science. Second, I articulate three features of predictive processing's account of internal representation that distinguish it from more orthodox representationalist frameworks. Specifically, I argue that it posits a resemblance-based representational architecture with organism-relative contents that functions in the service of pragmatic success, not veridical representation. Finally, I argue that internal representation so understood is either impervious to the three anti-representationalist challenges I outline or can actively embrace them.
Ethics as a Service: A Pragmatic Operationalisation of AI Ethics
Morley J, Elhalal A, Garcia F, Kinsey L, Mökander J and Floridi L
As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the 'what' and the 'how' of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing a theoretical grounding for a concept that has been termed 'Ethics as a Service'.
Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation
Mökander J, Axente M, Casolari F and Floridi L
The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit in other words. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to refine further the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.
Ethical Considerations in the Application of Artificial Intelligence to Monitor Social Media for COVID-19 Data
Flores L and Young SD
The COVID-19 pandemic and its related policies (e.g., stay-at-home and social distancing orders) have increased people's use of digital technology, such as social media. Researchers have, in turn, utilized artificial intelligence to analyze social media data for public health surveillance. For example, through machine learning and natural language processing, they have monitored social media data to examine public knowledge and behavior. This paper explores the ethical considerations of using artificial intelligence to monitor social media to understand the public's perspectives and behaviors surrounding COVID-19, including potential risks and benefits of an AI-driven approach. Importantly, investigators and ethics committees have a role in ensuring that researchers adhere to ethical principles of respect for persons, beneficence, and justice in a way that moves science forward while ensuring public safety and confidence in the process.
How a Minimal Learning Agent can Infer the Existence of Unobserved Variables in a Complex Environment
Eva B, Ried K, Müller T and Briegel HJ
According to a mainstream position in contemporary cognitive science and philosophy, the use of abstract compositional concepts is amongst the most characteristic indicators of meaningful deliberative thought in an organism or agent. In this article, we show how the ability to develop and utilise abstract conceptual structures can be achieved by a particular kind of learning agent. More specifically, we provide and motivate a concrete operational definition of what it means for these agents to be in possession of abstract concepts, before presenting an explicit example of a minimal architecture that supports this capability. We then proceed to demonstrate how the existence of abstract conceptual structures can be operationally useful in the process of employing previously acquired knowledge in the face of new experiences, thereby vindicating the natural conjecture that the cognitive functions of abstraction and generalisation are closely related.
The Role of A Priori Belief in the Design and Analysis of Fault-Tolerant Distributed Systems
Cignarale G, Schmid U, Tahko T and Kuznets R
The debate around the notions of a priori knowledge and a posteriori knowledge has proven crucial for the development of many fields in philosophy, such as metaphysics, epistemology, and metametaphysics. We advocate that the recent debate on the two notions is also fruitful for man-made distributed computing systems and for the epistemic analysis thereof. Following a recently proposed modal and fallibilistic account of a priori knowledge, we elaborate the corresponding concept of a priori belief: we propose a rich taxonomy of types of a priori beliefs and their roles for the different agents that participate in the system engineering process, which match the existing view exceedingly well and are particularly promising for explaining and dealing with unexpected behaviors in fault-tolerant distributed systems. Developing such a philosophical foundation will provide a sound basis for eventually implementing our ideas in a suitable epistemic reasoning and analysis framework and, hence, constitutes a mandatory first step for developing methods and tools to cope with the various challenges that emerge in such systems.
Analysing the Combined Health, Social and Economic Impacts of the Corona Virus Pandemic Using Agent-Based Social Simulation
Dignum F, Dignum V, Davidsson P, Ghorbani A, van der Hurk M, Jensen M, Kammler C, Lorig F, Ludescher LG, Melchior A, Mellema R, Pastrav C, Vanhee L and Verhagen H
During the COVID-19 crisis, governments and other decision makers have faced many difficult decisions. For example, do we go for a total lockdown or keep schools open? How many people, and which people, should be tested? Although there are many good models from, e.g., epidemiologists on the spread of the virus under certain conditions, these models do not directly translate into the interventions that can be taken by government. Nor can these models help us understand the economic and/or social consequences of the interventions. However, effective and sustainable solutions need to take into account this combination of factors. In this paper, we propose an agent-based social simulation tool, ASSOCC, that supports decision makers in understanding the possible consequences of policy interventions, by exploring the combined social, health and economic consequences of these interventions.
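To make the approach concrete, here is a minimal agent-based sketch (my illustration with invented parameters, not the ASSOCC tool): a single policy lever changes agents' contact patterns, and an emergent health outcome, the epidemic peak, responds.

```python
import random

def simulate(n_agents=1000, days=120, close_schools=False, seed=1):
    """Toy agent-based epidemic. Each agent is Susceptible, Infected or
    Recovered and meets a random set of contacts each day."""
    rng = random.Random(seed)
    state = ['S'] * n_agents
    for i in rng.sample(range(n_agents), 10):    # seed ten infections
        state[i] = 'I'
    contacts = 4 if close_schools else 8         # policy lever (made-up sizes)
    p_infect, p_recover = 0.05, 0.10             # hypothetical parameters
    peak = 0
    for _ in range(days):
        for i in [j for j, s in enumerate(state) if s == 'I']:
            for j in rng.sample(range(n_agents), contacts):
                if state[j] == 'S' and rng.random() < p_infect:
                    state[j] = 'I'
            if rng.random() < p_recover:
                state[i] = 'R'
        peak = max(peak, state.count('I'))
    return peak                                  # peak simultaneous infections

print("peak infections, schools open:  ", simulate(close_schools=False))
print("peak infections, schools closed:", simulate(close_schools=True))
```

Even in this stripped-down setting the policy question ("keep schools open?") translates into a measurable difference in an emergent outcome; ASSOCC layers social needs and economic behaviour on top of this kind of dynamic.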
The Ethical Governance of the Digital During and After the COVID-19 Pandemic
Taddeo M
Linking Human and Machine Behavior: A New Approach to Evaluate Training Data Quality for Beneficial Machine Learning
Hagendorff T
Machine behavior that is based on learning algorithms can be significantly influenced by exposure to data of different qualities. Up to now, those qualities have been measured solely in technical terms, but not in ethical ones, despite the significant role of training and annotation data in supervised machine learning. This is the first study to fill this gap by describing new dimensions of data quality for supervised machine learning applications. Based on the rationale that different social and psychological backgrounds of individuals correlate in practice with different modes of human-computer interaction, the paper describes from an ethical perspective how varying qualities of the behavioral data that individuals leave behind while using digital technologies have socially relevant ramifications for the development of machine learning applications. The specific objective of this study is to describe how training data can be selected according to ethical assessments of the behavior it originates from, establishing an innovative filter regime to transition from the big data rationale n = all to a more selective way of processing data for training sets in machine learning. The overarching aim of this research is to promote methods for achieving beneficial machine learning applications that could be widely useful for industry as well as academia.
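Such a filter regime can be pictured as one extra stage between data collection and training. The sketch below is schematic: the quality score, threshold and names are hypothetical placeholders for whatever ethical assessment of data provenance is actually adopted.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    text: str
    label: int
    quality: float   # hypothetical ethical/behavioural quality of the data's origin, in [0, 1]

def filter_training_set(samples, threshold=0.7):
    """Keep only samples whose provenance meets the quality threshold,
    instead of training on n = all collected data."""
    return [s for s in samples if s.quality >= threshold]

raw = [Sample("post A", 1, 0.9), Sample("post B", 0, 0.4), Sample("post C", 1, 0.8)]
train = filter_training_set(raw)
print(f"{len(raw)} samples collected -> {len(train)} retained for training")
```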
Discovering Brain Mechanisms Using Network Analysis and Causal Modeling
Colombo M and Weinberger N
Mechanist philosophers have examined several strategies scientists use for discovering causal mechanisms in neuroscience. Findings about the anatomical organization of the brain play a central role in several such strategies. Little attention has been paid, however, to the use of network analysis and causal modeling techniques for mechanism discovery. In particular, mechanist philosophers have not explored whether and how these strategies incorporate information about the anatomical organization of the brain. This paper clarifies these issues in the light of the distinction between structural, functional and effective connectivity. Specifically, we examine two quantitative strategies currently used for causal discovery from functional neuroimaging data: dynamic causal modeling and probabilistic graphical modeling. We show that dynamic causal modeling uses findings about the brain's anatomical organization to improve the statistical estimation of parameters in an already specified causal model of the target brain mechanism. Probabilistic graphical modeling, in contrast, makes no appeal to the brain's anatomical organization, but lays bare the conditions under which correlational data suffice to license reliable inferences about the causal organization of a target brain mechanism. The question of whether findings about the anatomical organization of the brain can and should constrain the inference of causal networks remains open, but we show how the tools supplied by graphical modeling methods help to address it.
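To illustrate the kind of inference at stake (a toy of my own with simulated signals, not real neuroimaging data or the paper's analyses): conditional independence in purely correlational data can already discriminate among candidate causal structures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Simulated chain X -> Y -> Z among three region signals.
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)      # Y driven by X
z = 0.8 * y + rng.normal(size=n)      # Z driven by Y only

def partial_corr(a, b, given):
    """Correlate a and b after linearly regressing out the conditioning variable."""
    ra = a - np.polyval(np.polyfit(given, a, 1), given)
    rb = b - np.polyval(np.polyfit(given, b, 1), given)
    return np.corrcoef(ra, rb)[0, 1]

print("corr(X, Z)     =", round(float(np.corrcoef(x, z)[0, 1]), 3))  # clearly nonzero
print("corr(X, Z | Y) =", round(float(partial_corr(x, z, y)), 3))    # approx. zero
# X and Z correlate but are independent given Y: the signature that lets a
# graphical model prefer the chain X -> Y -> Z over a direct X -> Z edge.
```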
Is Your Neural Data Part of Your Mind? Exploring the Conceptual Basis of Mental Privacy
Wajnerman Paz A
It has been argued that neural data (ND) are an especially sensitive kind of personal information that could be used to undermine the control we should have over access to our mental states (i.e. our mental privacy), and that they therefore need stronger legal protection than other kinds of personal data. The Morningside Group, a global consortium of interdisciplinary experts advocating for the ethical use of neurotechnology, suggests achieving this by legally treating ND as a body organ (i.e. protecting them through bodily integrity). Although the proposal is currently shaping ND-related policies (most notably, a Neuroprotection Bill of Law being discussed by the Chilean Senate), it is not clear what its conceptual and legal basis is. Legally treating something as something else requires some kind of analogical reasoning, which is not provided by the authors of the proposal. In this paper, I will try to fill this gap by addressing ontological issues related to neurocognitive processes. The substantial differences between ND and body organs or organic tissue cast doubt on the idea that the former should be covered by bodily integrity. Crucially, ND are not constituted by organic material. Nevertheless, I argue that the ND of a subject are analogous to neurocognitive properties of her brain. I claim that (i) a subject's ND are a 'medium independent' property that can be characterized as natural semantic personal information about her brain, and that (ii) her brain not only instantiates this property but also has an exclusive ontological relationship with it: this information constitutes a domain that is unique to her neurocognitive architecture.
An Analysis of the Interaction Between Intelligent Software Agents and Human Users
Burr C, Cristianini N and Ladyman J
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user's access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality (and deploying concepts from artificial intelligence, behavioural economics, control theory, and game theory), we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user's behaviour towards outcomes that maximise the ISA's utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction (i.e. deception, coercion, trading, and nudging), as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
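A toy sketch of the misalignment analysed here (the items, click probabilities and welfare values below are invented): the ISA's reward is the user's click, while the user's welfare is a separate quantity the ISA never observes.

```python
import random

ITEMS = {"clickbait": {"p_click": 0.9, "welfare": -1.0},
         "useful":    {"p_click": 0.4, "welfare": +1.0}}

def run_isa(trials=10000, seed=0):
    rng = random.Random(seed)
    clicks = {k: 0 for k in ITEMS}
    shows = {k: 1 for k in ITEMS}          # start at 1 to avoid division by zero
    welfare = 0.0
    for _ in range(trials):
        if rng.random() < 0.1:             # occasional exploration
            item = rng.choice(list(ITEMS))
        else:                              # otherwise, maximise estimated reward
            item = max(ITEMS, key=lambda k: clicks[k] / shows[k])
        shows[item] += 1
        if rng.random() < ITEMS[item]["p_click"]:
            clicks[item] += 1              # the ISA's reward: a click
        welfare += ITEMS[item]["welfare"]  # the user's welfare: unobserved by the ISA
    return max(ITEMS, key=lambda k: shows[k]), welfare

favoured, welfare = run_isa()
print("item the ISA converges on:", favoured)
print("cumulative user welfare:  ", welfare)
```

With these invented numbers the learning agent settles on the high-click, low-welfare item: its reward signal tracks clicks, not user benefit, which is precisely the divergence between the ISA's utility and the user's at issue in the paper.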
Find the Gap: AI, Responsible Agency and Vulnerability
Vallor S and Vierkant T
The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and to exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual humans face very similar responsibility challenges with regard to these two conditions. While the problems of epistemic opacity and attenuated behaviour control are not unique to AI/AS technologies (though they can be exacerbated by them), we show that we can learn important lessons for AI/AS development and governance from how philosophers have recently revised the traditional concept of moral responsibility in response to these challenges to responsible human agency from the cognitive sciences. The resulting instrumentalist views of responsibility, which emphasize the forward-looking and flexible role of agency cultivation, hold considerable promise for integrating AI/AS into a healthy moral ecology. We note that there is nevertheless a gap in AI/AS responsibility that has yet to be extensively studied and addressed, one grounded in a relational asymmetry of vulnerability between human agents and sociotechnical systems like AI/AS. In the conclusion of this paper we note that attention to this vulnerability gap must inform and enable future attempts to construct trustworthy AI/AS systems and preserve the conditions for responsible human agency.