I write for Sparrho and work with Curators to translate and disseminate research to the public.
I believe scientific thinking fosters progress and is beneficial to society.
The relationship between AI and humans is the hot topic of the day. Who will rule whom?
AI - here to help or rule us?
You might wonder whether Artificial Intelligence will replace your job. Or you might have deeper worries: would fast-learning AI make humanity extinct?
Ever since Karel Čapek introduced the word 'robot' in his 1920 play R.U.R., we've been dreading a future where the machines take over.
In the 1940s, Isaac Asimov calmed our nerves with his Laws of Robotics, which demand that intelligent machines never do anything that threatens human existence.
But then came The Terminator in the 80s, with the self-aware Skynet hell-bent on wiping us out. The Matrix was subtler: it offered us a trade-off. We serve as batteries for the machines, and in exchange they keep us sedated in a virtual reality.
So here is the current question: will AI ever really become self-aware? Will there be an Ava, like in the film Ex Machina, who kills her creator and escapes to live a human life?
Note of caution: AI can already beat top players in Texas hold’em poker.
Explore our board to learn about the ethical and practical questions that inform the debate around AI.
Read research on how coding human values into AI aims to avoid unintended behaviour, and how human-AI teams can become efficient by developing a 'theory of mind'.
See what AI techniques are being developed to predict and avoid road accidents, assist in education and mass-analyse DNA in forensic labs.
Here at Sparrho, we have an ever-expanding database of the latest published papers, so don’t forget to come back and check the latest science on AI!
Abstract: Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations, the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.
Pub.: 06 Dec '17, Pinned: 08 Dec '17
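For readers who want the gist of the mechanism, here is a minimal sketch of a two-compartment unit: basal input drives the output, while apical feedback drives a local weight update. This is an illustration of the idea only, not the authors' model; the unit size, target value and learning-rule details are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

class SegregatedNeuron:
    """Toy unit with segregated basal (sensory) and apical (feedback) input."""

    def __init__(self, n_inputs, lr=0.1):
        self.w = rng.normal(0, 0.1, n_inputs)  # basal (feedforward) weights
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.rate = np.tanh(self.w @ x)  # somatic output from basal drive
        return self.rate

    def feedback(self, apical):
        # The apical compartment receives higher-order feedback; its mismatch
        # with the neuron's own output acts as a local teaching signal.
        error = apical - self.rate
        self.w += self.lr * error * (1 - self.rate ** 2) * self.x

neuron = SegregatedNeuron(n_inputs=4)
x = rng.normal(size=4)
for _ in range(50):
    neuron.forward(x)
    neuron.feedback(apical=0.5)  # hypothetical target from a higher layer
print(round(float(neuron.forward(x)), 3))  # output has moved toward 0.5
```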
Abstract: Autonomous lifelong development and learning is a fundamental capability of humans, differentiating them from current deep learning systems. However, other branches of artificial intelligence have designed crucial ingredients towards autonomous learning: curiosity and intrinsic motivation, social learning and natural interaction with peers, and embodiment. These mechanisms guide exploration and autonomous choice of goals, and integrating them with deep learning opens stimulating perspectives. Deep learning (DL) approaches have made great advances in artificial intelligence, but are still far away from human learning. As argued convincingly by Lake et al., differences include human capabilities to learn causal models of the world from very little data, leveraging compositional representations and priors like intuitive physics and psychology. However, there are other fundamental differences between current DL systems and human learning, as well as technical ingredients to fill this gap, that are either superficially, or not adequately, discussed by Lake et al. These fundamental mechanisms relate to autonomous development and learning. They are bound to play a central role in artificial intelligence in the future. Current DL systems require engineers to manually specify a task-specific objective function for every new task, and learn through off-line processing of large training databases. On the contrary, humans learn autonomously open-ended repertoires of skills, deciding for themselves which goals to pursue or value, and which skills to explore, driven by intrinsic motivation/curiosity and social learning through natural interaction with peers. Such learning processes are incremental, online, and progressive. Human child development involves a progressive increase of complexity in a curriculum of learning where skills are explored, acquired, and built on each other, through particular ordering and timing. Finally, human learning happens in the physical world, and through bodily and physical experimentation, under severe constraints on energy, time, and computational resources. In the last two decades, the field of Developmental and Cognitive Robotics (Cangelosi and Schlesinger, 2015, Asada et al., 2009), in strong interaction with developmental psychology and neuroscience, has achieved significant advances in computational …
Pub.: 05 Dec '17, Pinned: 08 Dec '17
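One common way the curiosity ingredient is formalised, shown purely as an illustration (the bonus form and constants are assumptions, not taken from this paper), is a count-based novelty bonus added to the extrinsic reward in a Q-learning update:

```python
from collections import defaultdict

BETA, ALPHA, GAMMA = 0.5, 0.1, 0.95
counts = defaultdict(int)   # state visit counts N(s)
Q = defaultdict(float)      # tabular action values

def intrinsic_reward(state):
    # Novelty bonus beta / sqrt(N(s)): shrinks as a state becomes familiar.
    counts[state] += 1
    return BETA / counts[state] ** 0.5

def update(state, action, ext_reward, next_state, actions):
    # Standard Q-learning, but on extrinsic + intrinsic reward, so the agent
    # seeks out rarely visited states even without external payoff.
    r = ext_reward + intrinsic_reward(next_state)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

update("s0", "a", ext_reward=0.0, next_state="s1", actions=["a", "b"])
print(Q[("s0", "a")], counts["s1"])  # the novelty bonus alone drove the update
```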
Abstract: Theory of Mind is the ability to attribute mental states (beliefs, intents, knowledge, perspectives, etc.) to others and recognize that these mental states may differ from one's own. Theory of Mind is critical to effective communication and to teams demonstrating higher collective performance. To effectively leverage the progress in Artificial Intelligence (AI) to make our lives more productive, it is important for humans and AI to work well together in a team. Traditionally, there has been much emphasis on research to make AI more accurate, and (to a lesser extent) on having it better understand human intentions, tendencies, beliefs, and contexts. The latter involves making AI more human-like and having it develop a theory of our minds. In this work, we argue that for human-AI teams to be effective, humans must also develop a theory of AI's mind - get to know its strengths, weaknesses, beliefs, and quirks. We instantiate these ideas within the domain of Visual Question Answering (VQA). We find that using just a few examples (50), lay people can be trained to better predict responses and oncoming failures of a complex VQA model. Surprisingly, we find that having access to the model's internal states - its confidence in its top-k predictions, explicit or implicit attention maps which highlight regions in the image (and words in the question) the model is looking at (and listening to) while answering a question about an image - does not help people better predict its behavior.
Pub.: 03 Apr '17, Pinned: 10 May '17
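To make the "internal states" concrete: the confidences in question are typically the softmax probabilities of a model's top-k answers. A hypothetical sketch (the function, logits and answer labels are all invented):

```python
import numpy as np

def topk_confidences(logits, labels, k=3):
    # Softmax over the answer vocabulary, then the k most confident answers.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    top = np.argsort(p)[::-1][:k]
    return [(labels[i], round(float(p[i]), 3)) for i in top]

logits = np.array([2.0, 0.5, 1.2, -1.0])
print(topk_confidences(logits, ["yes", "no", "red", "two"]))
# highest-probability answers first, with their confidences
```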
Abstract: Although scientists have calculated the significant positive welfare effects of Artificial Intelligence (AI), fear mongering continues to hinder AI development. If regulations in this sector stifle our active imagination, we risk wasting the true potential of AI's dynamic efficiencies. Not only would Schumpeter dislike us for spoiling creative destruction, but the AI thinkers of the future would also rightfully see our efforts as the ‘dark age’ of human advancement. This article provides a brief philosophical introduction to artificial intelligence; categorizes artificial intelligence to shed light on what we have and know now and what we might expect from prospective developments; reflects the thoughts of world-famous thinkers to broaden our horizons; provides information on attempts to regulate artificial intelligence from a legal perspective; and discusses what the legal approach needs to be to ensure the balance between artificial intelligence development and human control over it, and to ensure friendly artificial intelligence.
Pub.: 02 Jun '16, Pinned: 10 May '17
Abstract: This book-length article combines several peer reviewed papers and new material to analyze the issues of ethical artificial intelligence (AI). The behavior of future AI systems can be described by mathematical equations, which are adapted to analyze possible unintended AI behaviors and ways that AI designs can avoid them. This article makes the case for utility-maximizing agents and for avoiding infinite sets in agent definitions. It shows how to avoid agent self-delusion using model-based utility functions and how to avoid agents that corrupt their reward generators (sometimes called "perverse instantiation") using utility functions that evaluate outcomes at one point in time from the perspective of humans at a different point in time. It argues that agents can avoid unintended instrumental actions (sometimes called "basic AI drives" or "instrumental goals") by accurately learning human values. This article defines a self-modeling agent framework and shows how it can avoid problems of resource limits, being predicted by other agents, and inconsistency between the agent's utility function and its definition (one version of this problem is sometimes called "motivated value selection"). This article also discusses how future AI will differ from current AI, the politics of AI, and the ultimate use of AI to help understand the nature of the universe and our place in it.
Pub.: 17 Nov '15, Pinned: 10 May '17
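As a rough illustration of the utility-maximising agents the article argues for, here is a toy expected-utility chooser over a model of outcomes. Everything here is invented (the world model, outcomes and numbers), and the sketch deliberately omits the article's time-indexed evaluation; the one point it does capture is that utility is computed on modelled outcomes rather than raw observations.

```python
def expected_utility(action, world_model, utility):
    # Utility is evaluated on modelled outcomes, not raw observations.
    return sum(p * utility(outcome)
               for outcome, p in world_model(action).items())

def choose(actions, world_model, utility):
    return max(actions, key=lambda a: expected_utility(a, world_model, utility))

# Invented toy world model: action -> distribution over modelled outcomes.
def model(action):
    return {"good": 0.8, "bad": 0.2} if action == "safe" else \
           {"good": 0.5, "bad": 0.5}

utility = {"good": 1.0, "bad": -10.0}.get
print(choose(["safe", "risky"], model, utility))  # -> 'safe'
```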
Abstract: Accident prediction is one of the most critical aspects of road safety, whereby an accident can be predicted before it actually occurs and precautionary measures taken to avoid it. For this purpose, accident prediction models are popular in road safety analysis. Artificial intelligence (AI) is used in many real world applications, especially where outcomes and data are not the same all the time and are influenced by the occurrence of random changes. This paper presents a study on the existing approaches for the detection of unsafe driving patterns of a vehicle used to predict accidents. The literature covered in this paper is from the past 10 years, from 2004 to 2014. AI techniques are surveyed for the detection of unsafe driving style and crash prediction. A number of statistical methods which are used to predict accidents using different vehicle and driving features are also covered in this paper. The approaches studied in this paper are compared in terms of datasets and prediction performance. We also provide a list of datasets and simulators available for the scientific community to conduct research in the subject domain. The paper also identifies some of the critical open questions that need to be addressed for road safety using AI techniques.
Pub.: 01 Oct '16, Pinned: 10 May '17
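To ground the survey's subject matter, here is a minimal sketch of an accident-risk classifier over driving features. The feature names, data and labels are invented for illustration; the systems the survey covers use far richer signals and datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per trip: speed (km/h), lateral jerk (m/s^3), headway (s).
X = np.array([[60, 0.2, 2.5], [120, 1.5, 0.6], [80, 0.4, 1.8],
              [140, 2.1, 0.4], [50, 0.1, 3.0], [110, 1.2, 0.8]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = unsafe driving pattern (made-up labels)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[130, 1.8, 0.5]])[0, 1])  # estimated P(unsafe)
```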
Abstract: From the enraged robots in the 1920 play R.U.R. to the homicidal computer H.A.L. in 2001: A Space Odyssey, science fiction writers have embraced the dark side of artificial intelligence (AI) ever since the concept entered our collective imagination. Sluggish progress in AI research, especially during the “AI winter” of the 1970s and 1980s, made such worries seem far-fetched. But recent breakthroughs in machine learning and vast improvements in computational power have brought a flood of research funding— and fresh concerns about where AI may lead us. One researcher now speaking up is Stuart Russell, a computer scientist at the University of California, Berkeley, who with Peter Norvig, director of research at Google, wrote the premier AI textbook, Artificial Intelligence: A Modern Approach, now in its third edition. Last year, Russell joined the Centre for the Study of Existential Risk at Cambridge University in the United Kingdom as an AI expert focusing on “risks that could lead to human extinction.” Among his chief concerns, which he aired at an April meeting in Geneva, Switzerland, run by the United Nations, is the danger of putting military drones and weaponry under the full control of AI systems. This interview has been edited for clarity and brevity.
Pub.: 18 Jul '15, Pinned: 10 May '17
Abstract: The field of Artificial Intelligence in Education (AIED) has undergone significant developments over the last twenty-five years. As we reflect on our past and shape our future, we ask two main questions: What are our major strengths? And, what new opportunities lie on the horizon? We analyse 47 papers from three years in the history of the Journal of AIED (1994, 2004, and 2014) to identify the foci and typical scenarios that occupy the field of AIED. We use those results to suggest two parallel strands of research that need to take place in order to impact education in the next 25 years: One is an evolutionary process, focusing on current classroom practices, collaborating with teachers, and diversifying technologies and domains. The other is a revolutionary process where we argue for embedding our technologies within students' everyday lives, supporting their cultures, practices, goals, and communities.
Pub.: 01 Jun '16, Pinned: 10 May '17
Abstract: Intelligent user interfaces (IUIs) aim to incorporate intelligent automated capabilities in human-computer interaction, where the net impact is a human-computer interaction that improves performance or usability in critical ways. It also involves designing and implementing an artificial intelligence (AI) component that effectively leverages human skills and capabilities, so that human performance with an application excels. IUIs embody capabilities that have traditionally been associated more strongly with humans than with computers: how to perceive, interpret, learn, use language, reason, plan, and decide.
Pub.: 17 Feb '17, Pinned: 10 May '17
Abstract: Electropherograms are produced in great numbers in forensic DNA laboratories as part of everyday criminal casework. Before the results of these electropherograms can be used they must be scrutinised by analysts to determine what the identified data tells us about the underlying DNA sequences and what is purely an artefact of the DNA profiling process. A technique that lends itself well to such a task of classification in the face of vast amounts of data is the use of artificial neural networks. These networks, inspired by the workings of the human brain, have been increasingly successful in analysing large datasets, performing medical diagnoses, identifying handwriting, playing games, or recognising images. In this work we demonstrate the use of an artificial neural network which we train to 'read' electropherograms and show that it can generalise to unseen profiles.
Pub.: 05 Aug '16, Pinned: 10 May '17
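A toy version of the idea (not the authors' network): a small neural classifier labelling electropherogram peaks as true alleles or artefacts. The peak features and numbers below are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Invented peak features: height (kRFU), width, ratio to neighbouring peak.
X = np.array([[1.20, 1.0, 0.90], [0.09, 0.4, 0.08], [0.80, 0.9, 0.85],
              [0.06, 0.3, 0.05], [1.50, 1.1, 0.95], [0.11, 0.5, 0.10]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = true allele, 0 = artefact (e.g. stutter)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict([[0.10, 0.4, 0.07]]))  # low, narrow peak -> likely artefact
```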
Abstract: As human-machine communication has yet to become prevalent, the rules of interaction between humans and intelligent machines need to be explored. This study aims to investigate a specific question: during human users' initial interactions with artificial intelligence, would they reveal their personality traits and communicative attributes differently than in human-human interactions? A sample of 245 participants was recruited to view twelve conversation transcripts from six targets on a social media platform: half with Microsoft's chatbot Little Ice, and half with human friends. The findings suggested that when the targets interacted with Little Ice, they demonstrated different personality traits and communication attributes from those shown when interacting with humans. Specifically, users tended to be more open, more agreeable, more extroverted, more conscientious and self-disclosing when interacting with humans than with AI. The findings not only echo Mischel's cognitive-affective processing system model but also complement the Computers Are Social Actors Paradigm. Theoretical implications are discussed.
Pub.: 03 Mar '17, Pinned: 10 May '17
Abstract: In this work, we present and analyze reported failures of artificially intelligent systems and extrapolate our analysis to future AIs. We suggest that both the frequency and the seriousness of future AI failures will steadily increase. AI Safety can be improved based on ideas developed by cybersecurity experts. For narrow AIs, safety failures are at the same moderate level of criticality as in cybersecurity; for general AI, however, failures have a fundamentally different impact. A single failure of a superintelligent system may cause a catastrophic event without a chance for recovery. The goal of cybersecurity is to reduce the number of successful attacks on the system; the goal of AI Safety is to make sure zero attacks succeed in bypassing the safety mechanisms. Unfortunately, such a level of performance is unachievable. Every security system will eventually fail; there is no such thing as a 100% secure system.
Pub.: 25 Oct '16, Pinned: 10 May '17
Abstract: We argue that there already exists de facto artificial intelligence policy - a patchwork of policies impacting the field of AI's development in myriad ways. The key question related to AI policy, then, is not whether AI should be governed at all, but how it is currently being governed, and how that governance might become more informed, integrated, effective, and anticipatory. We describe the main components of de facto AI policy and make some recommendations for how AI policy can be improved, drawing on lessons from other scientific and technological domains.
Pub.: 29 Aug '16, Pinned: 10 May '17
Abstract: Many applied settings in empirical economics involve simultaneous estimation of a large number of parameters. In particular, applied economists are often interested in estimating the effects of many-valued treatments (like teacher effects or location effects), treatment effects for many groups, and prediction models with many regressors. In these settings, machine learning methods that combine regularized estimation and data-driven choices of regularization parameters are useful to avoid over-fitting. In this article, we analyze the performance of a class of machine learning estimators that includes ridge, lasso and pretest in contexts that require simultaneous estimation of many parameters. Our analysis aims to provide guidance to applied researchers on (i) the choice between regularized estimators in practice and (ii) data-driven selection of regularization parameters. To address (i), we characterize the risk (mean squared error) of regularized estimators and derive their relative performance as a function of simple features of the data generating process. To address (ii), we show that data-driven choices of regularization parameters, based on Stein's unbiased risk estimate or on cross-validation, yield estimators with risk uniformly close to the risk attained under the optimal (unfeasible) choice of regularization parameters. We use data from recent examples in the empirical economics literature to illustrate the practical applicability of our results.
Pub.: 31 Mar '17, Pinned: 10 May '17
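The setting the paper analyses is easy to reproduce in miniature: many regressors, mostly null effects, and a penalty chosen by cross-validation. A sketch using scikit-learn's ridge and lasso with data-driven regularisation (the data here are simulated, not from the paper):

```python
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV

rng = np.random.default_rng(0)
n, p = 200, 50                                      # many regressors vs n
X = rng.normal(size=(n, p))
beta = np.r_[rng.normal(size=5), np.zeros(p - 5)]   # only 5 true effects
y = X @ beta + rng.normal(size=n)

ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X, y)  # CV-chosen penalty
lasso = LassoCV(cv=5, random_state=0).fit(X, y)           # CV-chosen penalty
print(ridge.alpha_, lasso.alpha_, (lasso.coef_ != 0).sum())
```

The point of the paper's analysis is precisely that such data-driven penalty choices perform nearly as well as the (unfeasible) optimal choice.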
Abstract: Unlike perfect-information games, imperfect-information games cannot be solved by decomposing the game into subgames that are solved independently. Instead, all decisions must consider the strategy of the game as a whole, and more computationally intensive algorithms are used. While it is not possible to solve an imperfect-information game exactly through decomposition, it is possible to approximate solutions, or improve existing strategies, by solving disjoint subgames. This process is referred to as subgame solving. We introduce subgame solving techniques that outperform prior methods both in theory and practice. We also show how to adapt them, and past subgame solving techniques, to respond to opponent actions that are outside the original action abstraction; this significantly outperforms the prior state-of-the-art approach, action translation. Finally, we show that subgame solving can be repeated as the game progresses down the tree, leading to lower exploitability. Subgame solving is a key component of Libratus, the first AI to defeat top humans in heads-up no-limit Texas hold'em poker.
Pub.: 08 May '17, Pinned: 10 May '17
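Libratus itself cannot be reproduced in a snippet, but the regret-matching loop at the heart of counterfactual-regret methods can be shown on a toy game. In self-play on rock-paper-scissors, the average strategy converges towards the equilibrium of playing each action a third of the time; everything below is illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
PAYOFF = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # row player's payoffs

def strategy(regrets):
    # Regret matching: play in proportion to positive accumulated regret.
    pos = np.maximum(regrets, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.ones(3) / 3

regrets, strat_sum = np.zeros(3), np.zeros(3)
for _ in range(10000):
    s = strategy(regrets)
    strat_sum += s
    opp = rng.choice(3, p=s)        # self-play: opponent samples same policy
    utils = PAYOFF[:, opp]          # payoff of each action vs opponent's move
    regrets += utils - utils @ s    # regret for not having played each action
print(np.round(strat_sum / strat_sum.sum(), 3))  # ~[0.333 0.333 0.333]
```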
Abstract: Providing an efficient strategy to navigate safely through unsignaled intersections is a difficult task that requires determining the intent of other drivers. We explore the effectiveness of using Deep Reinforcement Learning to handle intersection problems. Combining several recent advances in Deep RL, we were able to learn policies that surpass the performance of a commonly-used heuristic approach in several metrics, including task completion time and goal success rate. Our analysis, and the solutions learned by the network, point out several shortcomings of current rule-based methods. The fact that Deep RL policies resulted in collisions, although rarely, combined with the limitations of the policy to generalize well to out-of-sample scenarios, suggests a need for further research.
Pub.: 02 May '17, Pinned: 10 May '17
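Below is a drastically simplified, tabular stand-in for the paper's deep RL setup (the state discretisation, rewards and dynamics are all invented): learn whether to wait or go at an unsignalled intersection, given the gap to oncoming traffic.

```python
import random

random.seed(0)
ACTIONS = ["wait", "go"]
Q = {(gap, a): 0.0 for gap in range(6) for a in ACTIONS}
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1

def step(gap, action):
    # 'go' ends the episode: success with a safe gap, collision otherwise;
    # 'wait' costs a little time while the gap drifts randomly.
    if action == "go":
        return (10.0 if gap >= 3 else -100.0), None
    return -1.0, max(0, min(5, gap + random.choice([-1, 0, 1])))

for _ in range(5000):
    gap = random.randrange(6)
    if random.random() < EPS:
        a = random.choice(ACTIONS)               # explore
    else:
        a = max(ACTIONS, key=lambda x: Q[(gap, x)])  # exploit
    r, nxt = step(gap, a)
    target = r if nxt is None else r + GAMMA * max(Q[(nxt, x)] for x in ACTIONS)
    Q[(gap, a)] += ALPHA * (target - Q[(gap, a)])

print([max(ACTIONS, key=lambda x: Q[(g, x)]) for g in range(6)])
# typically: wait while the gap is small, go once it is safe
```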
Abstract: As manufacturing processes become increasingly automated, so should tool condition monitoring (TCM), as it is impractical to have human workers monitor the state of the tools continuously. Tool condition is crucial to ensure the good quality of products: worn tools affect not only the surface quality but also the dimensional accuracy, which means a higher reject rate of the products. Therefore, there is an urgent need to identify tool failures on the fly, before they occur. While various versions of intelligent tool condition monitoring have been proposed, most of them suffer from the cognitive nature of traditional machine learning algorithms: they focus on how to learn, without paying attention to two other crucial issues, what to learn and when to learn. The what-to-learn and when-to-learn components provide self-regulating mechanisms to select the training samples and to determine the time instants at which to train a model. A novel tool condition monitoring approach based on a psychologically plausible concept, namely the metacognitive scaffolding theory, is proposed and built upon a recently published algorithm, the recurrent classifier (rClass). The learning process consists of three phases, what to learn, how to learn, and when to learn, and makes use of a generalized recurrent network structure as a cognitive component. Experimental studies with real-world manufacturing data streams were conducted, where rClass demonstrated the highest accuracy while retaining the lowest complexity compared with its counterparts.
Pub.: 06 May '17, Pinned: 10 May '17
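A toy illustration of the "what to learn / when to learn" gating idea, not rClass itself (the margin test and learning rule are assumptions made for this sketch): an online learner that only trains on samples it handles poorly.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)  # weights of a linear online classifier

def maybe_learn(x, y, margin=0.5, lr=0.1):
    score = w @ x
    if abs(score) > margin and np.sign(score) == y:
        return False          # "what to learn": skip confidently correct samples
    w[:] += lr * y * x        # "when to learn": adapt only on hard samples
    return True

updates = sum(maybe_learn(rng.normal(size=3), rng.choice([-1, 1]))
              for _ in range(200))
print(updates, "of 200 samples triggered training")
```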
Abstract: In the past, several models of consciousness have become popular and have led to the development of models for machine consciousness with varying degrees of success and challenges for simulation and implementation. Moreover, affective computing attributes that involve emotions, behavior and personality have not been the focus of models of consciousness, as they lacked motivation for deployment in software applications and robots. The affective attributes are important factors for the future of machine consciousness with the rise of technologies that can assist humans. Personality and affection can hence give an additional flavor to the computational model of consciousness in humanoid robotics. Recent advances in areas of machine learning with a focus on deep learning can further help in developing aspects of machine consciousness in areas that can better replicate human sensory perceptions such as speech recognition and vision. With such advancements, one encounters further challenges in developing models that can synchronize different aspects of affective computing. In this paper, we review some existing models of consciousness and present an affective computational model that would enable the human touch and feel for robotic systems.
Pub.: 02 Jan '17, Pinned: 10 May '17
Abstract: There is a growing focus on how to design safe artificially intelligent (AI) agents. As systems become more complex, poorly specified goals or control mechanisms may cause AI agents to produce unwanted and harmful outcomes. Thus it is necessary to design AI agents that follow their initial programming intentions as the program grows in complexity. How to specify these initial intentions has also been an obstacle to designing safe AI agents. Finally, there is a need for the AI agent to have redundant safety mechanisms to ensure that any programming errors do not cascade into major problems. Humans are autonomous intelligent agents that have avoided these problems, and the present manuscript argues that by understanding human self-regulation and goal setting, we may be better able to design safe AI agents. Some general principles of human self-regulation are outlined and specific guidance for AI design is given.
Pub.: 05 Jan '17, Pinned: 10 May '17
Abstract: Recent progress in artificial intelligence has enabled the design and implementation of autonomous computing devices, agents, that may interact and learn from each other to achieve certain goals. Sometimes, however, a human operator needs to intervene and interrupt an agent in order to prevent certain dangerous situations. Yet, as part of their learning process, agents may link these interruptions, which impact their reward, to specific states and deliberately avoid them. The situation is particularly challenging in a distributed context because agents might not only learn from their own past interruptions, but also from those of other agents. This paper defines the notion of safe interruptibility as a distributed computing problem, and studies this notion in the two main learning frameworks: joint action learners and independent learners. We give realistic sufficient conditions on the learning algorithm for safe interruptibility in the case of joint action learners, yet show that these conditions are not sufficient for independent learners. We show, however, that if agents can detect interruptions, it is possible to prune the observations to ensure safe interruptibility even for independent learners.
Pub.: 10 Apr '17, Pinned: 10 May '17
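The pruning idea for detectable interruptions reduces to something very simple in a toy Q-learner: experiences flagged as interrupted are never used for updates, so interruptions cannot bias the learned policy. A minimal sketch (the states, rewards and detection flag are all invented):

```python
Q, ALPHA, GAMMA = {}, 0.1, 0.9

def learn(state, action, reward, next_state, actions, interrupted):
    if interrupted:
        return  # prune: interrupted transitions are never learned from
    q = Q.get((state, action), 0.0)
    best = max(Q.get((next_state, a), 0.0) for a in actions)
    Q[(state, action)] = q + ALPHA * (reward + GAMMA * best - q)

# The operator halts the agent: that transition is detected and discarded.
learn("corridor", "forward", -5.0, "halted", ["forward", "back"], interrupted=True)
learn("corridor", "forward", 1.0, "room", ["forward", "back"], interrupted=False)
print(Q)  # only the uninterrupted experience shaped the value estimate
```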