{"id":31978,"date":"2017-09-03T13:23:01","date_gmt":"2017-09-03T18:23:01","guid":{"rendered":"http:\/\/blogs.ams.org\/mathgradblog\/?p=31978"},"modified":"2017-09-02T15:45:22","modified_gmt":"2017-09-02T20:45:22","slug":"shedding-light-ais-black-boxes","status":"publish","type":"post","link":"https:\/\/blogs.ams.org\/mathgradblog\/2017\/09\/03\/shedding-light-ais-black-boxes\/","title":{"rendered":"Shedding light on AI&#8217;s black boxes"},"content":{"rendered":"<p><span style=\"font-weight: 400\">A recent <\/span><a href=\"http:\/\/science.sciencemag.org\/content\/357\/6346\/\"><span style=\"font-weight: 400\">special issue<\/span><\/a><span style=\"font-weight: 400\"> in <\/span><i><span style=\"font-weight: 400\">Science <\/span><\/i><span style=\"font-weight: 400\">highlights the increasingly important role that artificial intelligence (AI) plays in science and society. Providing a small but compelling sample of the types of challenges AI is equipped to tackle\u2014from <\/span><a href=\"http:\/\/science.sciencemag.org\/content\/357\/6346\/27\"><span style=\"font-weight: 400\">aiding chemical synthesis<\/span><\/a><span style=\"font-weight: 400\"> efforts to <\/span><a href=\"http:\/\/science.sciencemag.org\/content\/357\/6346\/26.full\"><span style=\"font-weight: 400\">detecting strong gravitational lenses<\/span><\/a><span style=\"font-weight: 400\">\u2014the issue captures the palpable excitement about AI\u2019s potential in a world saturated with data.<\/span><\/p>\n<p><span style=\"font-weight: 400\">But one article in particular, \u201c<\/span><a href=\"http:\/\/science.sciencemag.org\/content\/357\/6346\/22.full\"><span style=\"font-weight: 400\">The AI detectives<\/span><\/a><span style=\"font-weight: 400\">,\u201d captured my attention. Rather than highlighting a specific application of AI, as the other articles do, this piece draws attention to the lack of transparency in certain machine learning algorithms, particularly neural networks. 
The inner workings of such algorithms remain almost entirely opaque, and they are accordingly termed \u201cblack boxes\u201d: though they may generate accurate results, it\u2019s still unclear how and why they make the decisions they do.<\/span><\/p>\n<div id=\"attachment_31979\" style=\"width: 381px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.xkcd.com\/1838\/\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-31979\" class=\"wp-image-31979 size-full\" src=\"http:\/\/blogs.ams.org\/mathgradblog\/files\/2017\/08\/machine_learning.png\" alt=\"\" width=\"371\" height=\"439\" srcset=\"https:\/\/blogs.ams.org\/mathgradblog\/files\/2017\/08\/machine_learning.png 371w, https:\/\/blogs.ams.org\/mathgradblog\/files\/2017\/08\/machine_learning-254x300.png 254w\" sizes=\"auto, (max-width: 371px) 100vw, 371px\" \/><\/a><p id=\"caption-attachment-31979\" class=\"wp-caption-text\"><a href=\"https:\/\/www.xkcd.com\/1838\/\" target=\"_blank\" rel=\"noopener noreferrer\">Machine Learning<\/a> by <a href=\"https:\/\/www.xkcd.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">XKCD<\/a> is licensed under <a href=\"https:\/\/creativecommons.org\/licenses\/by-nc\/2.5\/\" target=\"_blank\" rel=\"noopener noreferrer\">CC BY 2.5<\/a>.<\/p><\/div>\n<p><span style=\"font-weight: 400\">Researchers have recently turned their attention to this problem, seeking to understand the way these algorithms operate. \u201cThe AI detectives\u201d introduces us to these researchers, and to their approaches to unlocking AI\u2019s black boxes.<\/span><!--more--><\/p>\n<p><span style=\"font-weight: 400\">One such \u201cAI detective,\u201d Rich Caruana, is using mathematics to impose greater transparency on artificial intelligence. 
He and his colleagues employed a rigorous statistical approach, based on a generalized additive model, to produce a <\/span><a href=\"http:\/\/people.dbmi.columbia.edu\/noemie\/papers\/15kdd.pdf\"><span style=\"font-weight: 400\">predictive model for evaluating pneumonia risk<\/span><\/a><span style=\"font-weight: 400\">. Importantly, this model is intelligible; that is, the factors that the model weighs to make its decisions are known. <\/span><span style=\"font-weight: 400\">Intelligibility is crucial in this setting, as previous, more opaque models conflated overall outcomes with inherent risk factors. For example, though asthmatics have a high risk for pneumonia, they typically receive immediate, effective care, which leads to better health outcomes\u2014but which also led early models to flag them, naively, as a low-risk group. <\/span><span style=\"font-weight: 400\">Caruana et al.\u2019s model is also modular, meaning that any faulty causal links made by the algorithm can be easily removed from its decision-making process. But while it is powerful, this approach is not well-suited to complex signals, like images\u2014and it circumvents the problem of intelligibility in artificial intelligence, rather than addressing it head-on. <\/span><\/p>\n<p><span style=\"font-weight: 400\">Gregoire Montavon and his colleagues, by contrast, have developed a <\/span><a href=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0031320316303582?via%3Dihub\"><span style=\"font-weight: 400\">method<\/span><\/a><span style=\"font-weight: 400\"> that uses Taylor decompositions to study the most opaque of machine learning algorithms, Deep Neural Networks. \u00a0Their approach (which was not mentioned in the <\/span><i><span style=\"font-weight: 400\">Science <\/span><\/i><span style=\"font-weight: 400\">article) has the advantage of explaining the decisions made by Deep Neural Networks in easily interpretable terms. 
By treating each node of the neural network as a function, Taylor decompositions can be used to propagate the function value backward onto the input variables, such as pixels of an image. What results, in the case of image categorization, is an image with the output label redistributed onto input pixels\u2014a visual map of the input pixels that contributed to the algorithm\u2019s final decision. A fantastic step-by-step explanation of the paper can be found <\/span><a href=\"http:\/\/heatmapping.org\/deeptaylor\/\"><span style=\"font-weight: 400\">here<\/span><\/a><span style=\"font-weight: 400\">.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Of course, none of these artificial intelligence techniques would be possible without mathematics. Nevertheless, it is interesting to see the role that math is now playing in furthering artificial intelligence by helping us understand how it works. And as AI is brought to bear on more and more important decisions in society, understanding its inner workings is not just a matter of academic interest: introducing transparency affords more control over the AI decision-making process and prevents <\/span><a href=\"http:\/\/www.sciencemag.org\/news\/2017\/04\/even-artificial-intelligence-can-acquire-biases-against-race-and-gender\"><span style=\"font-weight: 400\">bias from masquerading as logic.<\/span><\/a><span style=\"font-weight: 400\"> \u00a0<\/span><\/p>\n<div style=\"margin-top: 0px; margin-bottom: 0px;\" class=\"sharethis-inline-share-buttons\" ><\/div>","protected":false},"excerpt":{"rendered":"<p>A recent special issue in Science highlights the increasingly important role that artificial intelligence (AI) plays in science and society. 
Providing a small but compelling sample of the types of challenges AI is equipped to tackle\u2014from aiding chemical synthesis efforts &hellip; <a href=\"https:\/\/blogs.ams.org\/mathgradblog\/2017\/09\/03\/shedding-light-ais-black-boxes\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n<div style=\"margin-top: 0px; margin-bottom: 0px;\" class=\"sharethis-inline-share-buttons\" data-url=https:\/\/blogs.ams.org\/mathgradblog\/2017\/09\/03\/shedding-light-ais-black-boxes\/><\/div>\n","protected":false},"author":136,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[15,19,21],"tags":[321,320,67,96,232],"class_list":["post-31978","post","type-post","status-publish","format-standard","hentry","category-mathematics-in-society","category-statistics","category-technology-math","tag-ai","tag-artificial-intelligence","tag-mathematics","tag-statistics-2","tag-technology"],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p3gbww-8jM","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/blogs.ams.org\/mathgradblog\/wp-json\/wp\/v2\/posts\/31978","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.ams.org\/mathgradblog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.ams.org\/mathgradblog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.ams.org\/mathgradblog\/wp-json\/wp\/v2\/users\/136"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.ams.org\/mathgradblog\/wp-json\/wp\/v2\/comments?post=31978"}],"version-history":[{"count":3,"href":"https:\/\/blogs.ams.org\/mathgradblog\/wp-json\/wp\/v2\/posts\/319
78\/revisions"}],"predecessor-version":[{"id":32204,"href":"https:\/\/blogs.ams.org\/mathgradblog\/wp-json\/wp\/v2\/posts\/31978\/revisions\/32204"}],"wp:attachment":[{"href":"https:\/\/blogs.ams.org\/mathgradblog\/wp-json\/wp\/v2\/media?parent=31978"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.ams.org\/mathgradblog\/wp-json\/wp\/v2\/categories?post=31978"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.ams.org\/mathgradblog\/wp-json\/wp\/v2\/tags?post=31978"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}