Artificial Intelligence and automation are already disrupting jobs, but mostly repetitive ones
In 10 seconds? Artificial Intelligence is getting very good at making decisions by analysing large datasets. In practice, this means that AI and automation threaten many current jobs, especially those built around repetitive tasks.
Any particular industries where AI could replace humans? Depending on the surveys you read, there are varying levels of “doomsday thinking” about the “Fourth Industrial Revolution”. However, most experts agree that Artificial Intelligence and automation can affect nearly all industries. According to a recent UK survey, AI technologies are already being implemented in one third of industries.
So which jobs are under threat, and how many? PricewaterhouseCoopers estimates that 30-40% of jobs in the developed world will be affected. AI is predicted to replace many jobs, blue collar and white collar alike, such as truck drivers and retail workers, but it will also be able to write high-school essays and translate documents. Additionally, algorithms threaten many financial adviser positions. The Australian financial startup Stockspot boasts that its algorithm “can provide cheaper and better financial advice than people”.
So AI is all bad news? Not necessarily. In healthcare, AI can help overstretched GPs make better decisions (for example, by using deep learning methods to analyse mammograms for breast cancer screening), predict illnesses, and control health management systems. Additionally, robots are being introduced to help care for the elderly and autistic children, and to assist surgeons.
Is there anything we can do to save our jobs from the robots? We can future-proof our careers. Learning to code seems a safe bet, as does moving toward positions that require strategic and critical thinking or involve ‘case by case’ judgement. Doctors, nurses, lawyers and financial planners seem to be safe, but they too will rely on AI in their work.
So, there is still hope for us? Yes, not everyone is pessimistic about the future. Some argue that AI and automation will create new jobs and opportunities for software engineers, programmers, algorithm designers, AI trainers, ethicists and lawyers. They also insist that human analytical teams, providing context and critical judgement in decision making, will need to complement AI. Concerns about security may also slow the penetration of AI, as many IT managers worry about being vulnerable to cyberattacks.
Abstract: This study investigates the challenges and opportunities pertaining to transportation policies that may arise as a result of emerging autonomous vehicle (AV) technologies. AV technologies can decrease the transportation cost and increase accessibility for low-income households and persons with mobility issues. This emerging technology also has far-reaching applications and implications beyond all current expectations. This paper provides a comprehensive review of the relevant literature and explores a broad spectrum of issues from safety to machine ethics. An indispensable part of prospective AV development is communication between cars and infrastructure (connected vehicles). A major knowledge gap exists in AV technology with respect to routing behaviors. Connected-vehicle technology provides a great opportunity to implement an efficient and intelligent routing system. To this end, we propose a conceptual navigation model based on a fleet of AVs that are centrally dispatched over a network seeking system optimization. This study contributes to the literature on two fronts: (i) it attempts to shed light on future opportunities as well as possible hurdles associated with AV technology; and (ii) it conceptualizes a navigation model for the AV which leads to highly efficient traffic circulations.
Pub.: 29 Aug '16, Pinned: 03 Jul '17
Abstract: Although scientists have calculated the significant positive welfare effects of Artificial Intelligence (AI), fear mongering continues to hinder AI development. If regulations in this sector stifle our active imagination, we risk wasting the true potential of AI's dynamic efficiencies. Not only would Schumpeter dislike us for spoiling creative destruction, but the AI thinkers of the future would also rightfully see our efforts as the ‘dark age’ of human advancement. This article provides a brief philosophical introduction to artificial intelligence; categorizes artificial intelligence to shed light on what we have and know now and what we might expect from prospective developments; reflects the thoughts of world-famous thinkers to broaden our horizons; provides information on the attempts to regulate artificial intelligence from a legal perspective; and discusses what the legal approach needs to be to ensure the balance between artificial intelligence development and human control over it, and to ensure friendly artificial intelligence.
Pub.: 02 Jun '16, Pinned: 03 Jul '17
Abstract: Accident prediction is one of the most critical aspects of road safety, whereby an accident can be predicted before it actually occurs and precautionary measures taken to avoid it. For this purpose, accident prediction models are popular in road safety analysis. Artificial intelligence (AI) is used in many real-world applications, especially where outcomes and data are not the same every time and are influenced by the occurrence of random changes. This paper presents a study on the existing approaches for the detection of unsafe driving patterns of a vehicle used to predict accidents. The literature covered in this paper is from the past 10 years, from 2004 to 2014. AI techniques are surveyed for the detection of unsafe driving style and crash prediction. A number of statistical methods which are used to predict accidents using different vehicle and driving features are also covered in this paper. The approaches studied in this paper are compared in terms of datasets and prediction performance. We also provide a list of datasets and simulators available for the scientific community to conduct research in the subject domain. The paper also identifies some of the critical open questions that need to be addressed for road safety using AI techniques.
Pub.: 01 Oct '16, Pinned: 03 Jul '17
Abstract: Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.
Pub.: 24 May '17, Pinned: 02 Jun '17
Abstract: We examine how susceptible jobs are to computerisation. To assess this, we begin by implementing a novel methodology to estimate the probability of computerisation for 702 detailed occupations, using a Gaussian process classifier. Based on these estimates, we examine expected impacts of future computerisation on US labour market outcomes, with the primary objective of analysing the number of jobs at risk and the relationship between an occupation's probability of computerisation, wages and educational attainment.
Pub.: 29 Sep '16, Pinned: 02 Jun '17
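As a rough illustration of the Gaussian process classifier the study above relies on, here is a minimal scikit-learn sketch. The two occupation features and the toy data are invented for illustration; they are not the 702-occupation dataset or the features used in the paper.

```python
# Sketch: Gaussian process classifier estimating automation probability
# from toy occupation features (illustrative, not the study's data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Hypothetical features: [routine-task share, social-skill score]
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9],
              [0.1, 0.8], [0.5, 0.5], [0.7, 0.3]])
y = np.array([1, 1, 0, 0, 1, 1])  # 1 = high automation risk

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                random_state=0).fit(X, y)

# Probability of computerisation for a new occupation profile
p = gpc.predict_proba([[0.85, 0.15]])[0, 1]
print(p)
```

The appeal of a Gaussian process classifier here is that it outputs a calibrated probability rather than a hard label, which is exactly what the paper's "probability of computerisation" requires.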
Abstract: Recent advances in machine learning yielded new techniques to train deep neural networks, which resulted in highly successful applications in many pattern recognition tasks such as object detection and speech recognition. In this paper we provide a head-to-head comparison between a state-of-the-art mammography CAD system, relying on a manually designed feature set, and a Convolutional Neural Network (CNN), aiming for a system that can ultimately read mammograms independently. Both systems are trained on a large data set of around 45,000 images and results show the CNN outperforms the traditional CAD system at low sensitivity and performs comparably at high sensitivity. We subsequently investigate to what extent features such as location and patient information and commonly used manual features can still complement the network and see improvements at high specificity over the CNN, especially with location and context features, which contain information not available to the CNN. Additionally, a reader study was performed, where the network was compared to certified screening radiologists on a patch level, and we found no significant difference between the network and the readers.
Pub.: 09 Aug '16, Pinned: 01 Jun '17
Abstract: Current analysis of tumor proliferation, the most salient prognostic biomarker for invasive breast cancer, is limited to subjective mitosis counting by pathologists in localized regions of tissue images. This study presents the first data-driven integrative approach to characterize the severity of tumor growth and spread on a categorical and molecular level, utilizing multiple biologically salient deep learning classifiers to develop a comprehensive prognostic model. Our approach achieves pathologist-level performance on three-class categorical tumor severity prediction. It additionally pioneers prediction of molecular expression data from a tissue image, obtaining a Spearman's rank correlation coefficient of 0.60 with ex vivo mean calculated RNA expression. Furthermore, our framework is applied to identify over two hundred unprecedented biomarkers critical to the accurate assessment of tumor proliferation, validating our proposed integrative pipeline as the first to holistically and objectively analyze histopathological images.
Pub.: 11 Oct '16, Pinned: 01 Jun '17
Abstract: The International Symposium on Biomedical Imaging (ISBI) held a grand challenge to evaluate computational systems for the automated detection of metastatic breast cancer in whole slide images of sentinel lymph node biopsies. Our team won both competitions in the grand challenge, obtaining an area under the receiver operating curve (AUC) of 0.925 for the task of whole slide image classification and a score of 0.7051 for the tumor localization task. A pathologist independently reviewed the same images, obtaining a whole slide image classification AUC of 0.966 and a tumor localization score of 0.733. Combining our deep learning system's predictions with the human pathologist's diagnoses increased the pathologist's AUC to 0.995, representing an approximately 85 percent reduction in human error rate. These results demonstrate the power of using deep learning to produce significant improvements in the accuracy of pathological diagnoses.
Pub.: 18 Jun '16, Pinned: 01 Jun '17
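The "approximately 85 percent reduction in human error rate" in the abstract above follows directly from the reported AUCs, treating 1 − AUC as the error rate:

```python
# Reproducing the ~85% error-rate reduction from the reported AUCs.
pathologist_err = 1 - 0.966   # pathologist alone (AUC 0.966)
combined_err = 1 - 0.995      # pathologist + deep learning (AUC 0.995)
reduction = (pathologist_err - combined_err) / pathologist_err
print(round(reduction, 2))    # ~0.85
```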
Abstract: Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience, a humanlike voice, affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip) did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media.
Pub.: 12 Aug '16, Pinned: 01 Jun '17
Abstract: To date, studies of biological risk factors have revealed inconsistent relationships with subsequent post-traumatic stress disorder (PTSD). The inconsistent signal may reflect the use of data analytic tools that are ill equipped for modeling the complex interactions between biological and environmental factors that underlay post-traumatic psychopathology. Further, using symptom-based diagnostic status as the group outcome overlooks the inherent heterogeneity of PTSD, potentially contributing to failures to replicate. To examine the potential yield of novel analytic tools, we reanalyzed data from a large longitudinal study of individuals identified following trauma in the general emergency room (ER) that failed to find a linear association between cortisol response to traumatic events and subsequent PTSD. First, latent growth mixture modeling empirically identified trajectories of post-traumatic symptoms, which then were used as the study outcome. Next, support vector machines with feature selection identified sets of features with stable predictive accuracy and built robust classifiers of trajectory membership (area under the receiver operator characteristic curve (AUC)=0.82 (95% confidence interval (CI)=0.80-0.85)) that combined clinical, neuroendocrine, psychophysiological and demographic information. Finally, graph induction algorithms revealed a unique path from childhood trauma via lower cortisol during ER admission, to non-remitting PTSD. Traditional general linear modeling methods then confirmed the newly revealed association, thereby delineating a specific target population for early endocrine interventions. Advanced computational approaches offer innovative ways for uncovering clinically significant, non-shared biological signals in heterogeneous samples.
Pub.: 23 Mar '17, Pinned: 01 Jun '17
Abstract: There are several methods for building prediction models. The wealth of currently available modeling techniques usually forces the researcher to judge, a priori, what will likely be the best method. Super learning (SL) is a methodology that facilitates this decision by combining all identified prediction algorithms pertinent for a particular prediction problem. SL generates a final model that is at least as good as any of the other models considered for predicting the outcome. The overarching aim of this work is to introduce SL to analysts and practitioners. This work compares the performance of logistic regression, penalized regression, random forests, deep learning neural networks, and SL to predict successful substance use disorders (SUD) treatment. A nationwide database including 99,013 SUD treatment patients was used. All algorithms were evaluated using the area under the receiver operating characteristic curve (AUC) in a test sample that was not included in the training sample used to fit the prediction models. AUC for the models ranged between 0.793 and 0.820. SL was superior to all but one of the algorithms compared. An explanation of SL steps is provided. SL is the first step in targeted learning, an analytic framework that yields double robust effect estimation and inference with fewer assumptions than the usual parametric methods. Different aspects of SL depending on the context, its function within the targeted learning framework, and the benefits of this methodology in the addiction field are discussed.
Pub.: 11 Apr '17, Pinned: 01 Jun '17
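Super learning, as described above, is closely related to cross-validated stacking: candidate learners are combined by a meta-learner fit on out-of-fold predictions. A minimal sketch with scikit-learn's StackingClassifier on synthetic data (not the 99,013-patient SUD dataset) looks like this:

```python
# Sketch: a stacking ("super learner"-style) ensemble on synthetic data,
# evaluated by AUC on a held-out test set, mirroring the study's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Candidate learners are combined by a cross-validated meta-learner.
stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
print(round(auc, 3))
```

This is only a structural sketch: the actual super learner methodology also yields oracle guarantees that the ensemble performs asymptotically as well as the best candidate, which a plain stacking call does not by itself demonstrate.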
Abstract: Machine learning approaches have recently been used as non-invasive alternatives for staging chronic liver diseases, avoiding the drawbacks of biopsy. This study aims to evaluate different machine learning techniques in the prediction of advanced fibrosis by combining serum biomarkers and clinical information to develop classification models. A prospective cohort of 39,567 patients with chronic hepatitis C was divided into two sets: one categorized as mild to moderate fibrosis (F0-F2), and the other categorized as advanced fibrosis (F3-F4) according to METAVIR score. Decision tree, genetic algorithm, particle swarm optimization, and multilinear regression models for advanced fibrosis risk prediction were developed. Receiver operating characteristic curve analysis was performed to evaluate the performance of the proposed models. Age, platelet count, AST, and albumin were found to be statistically significant for advanced fibrosis. The machine learning algorithms under study were able to predict advanced fibrosis in these patients with AUROC ranging between 0.73 and 0.76 and accuracy between 66.3% and 84.4%. Machine learning approaches could be used as alternative methods in predicting the risk of advanced liver fibrosis due to chronic hepatitis C.
Pub.: 10 Apr '17, Pinned: 01 Jun '17
Abstract (Metabolism, available online 11 January 2017; Pavel Hamet, Johanne Tremblay): Artificial Intelligence (AI) is a general term that implies the use of a computer to model intelligent behaviour with minimal human intervention. AI is generally accepted as having started with the invention of robots. The term derives from the Czech word robota, meaning biosynthetic machines used as forced labour. In this field, Leonardo Da Vinci's lasting heritage is today's burgeoning use of robotic-assisted surgery, named after him, for complex urologic and gynecologic procedures. Da Vinci's sketchbooks of robots helped set the stage for this innovation. AI, described as the science and engineering of making intelligent machines, was officially born in 1956. The term is applicable to a broad range of items in medicine such as robotics, medical diagnosis, medical statistics, and human biology—up to and including today's “omics”. AI in medicine, which is the focus of this review, has two main branches: virtual and physical. The virtual branch includes informatics approaches from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. The physical branch is best represented by robots used to assist the elderly patient or the attending surgeon. Also embodied in this branch are targeted nanorobots, a unique new drug delivery system. The societal and ethical complexities of these applications require further reflection, proof of their medical utility, economic value, and development of interdisciplinary strategies for their wider application.
Pub.: 11 Jan '17, Pinned: 01 Jun '17