Future Progress in Artificial Intelligence (Muller & Bostrom)

In “Future Progress in Artificial Intelligence: A Survey of Expert Opinion,” Vincent Muller and Nick Bostrom survey hundreds of researchers with expertise in artificial intelligence to gauge where our future with A.I. is heading. The study, published in 2016, consists mostly of survey questions, results, and evaluations.

Muller and Bostrom neither present an argument nor defend an existing one, though they do raise a few concerns of their own; rather, they seek answers to many of the existential questions surrounding A.I. In the introduction, they begin by offering working definitions. First, for A.I. itself: “Artificial Intelligence began with the… conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (Muller & Bostrom 1). They then note that if general AI were achieved, it might also lead to superintelligence, which they define tentatively: “We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (Muller & Bostrom 2). According to the study, “one idea how superintelligence might come about is that if we humans could create artificial general intelligent ability at a roughly human level, then this creation could, in turn, create yet higher intelligence, which could, in turn, create yet higher intelligence, and so on. So we might generate a growth well beyond human ability and perhaps even an accelerating rate of growth: an ‘intelligence explosion’” (Muller & Bostrom 2).

Two overarching questions drive the survey: When should we expect high-level machine intelligence? And what should we expect its impact to be, particularly the risks and possible existential threat to humanity? As Stephen Hawking once said, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

The study was conducted through a questionnaire, carried out online by invitation to particular individuals from four groups, for a total of 549 participants. The groups surveyed were: PT-AI, participants of the conference on “Philosophy and Theory of AI”; AGI, participants of the conferences on “Artificial General Intelligence” and “Impacts and Risks of Artificial General Intelligence”; EETN, members of the Greek Association for Artificial Intelligence, a professional organization of Greek published researchers in the field; and TOP100, the 100 “top authors in artificial intelligence” by “citation” in “all years” according to Microsoft Academic Search. These groups come from different theoretical and ideological backgrounds; most are either theory-oriented or technically oriented (Muller & Bostrom 3).

Their methodology acknowledges that it is unclear what constitutes intelligence or progress, and whether intelligence can be measured, or at least compared, along a single dimension. The study’s definition of intelligence is therefore based on behavioral ability, avoids the notion of a general “human level,” and uses a newly coined term. The behavioral question at the beginning of the questionnaire reads: “Define a ‘high-level machine intelligence’ (HLMI) as one that can carry out most human professions at least as well as a typical human” (Muller & Bostrom 4). By this definition, an HLMI could very likely pass a classic Turing test. Their first research question is perhaps the most important: “In your opinion, what are the research approaches that might contribute the most to the development of such HLMI?” Researchers were asked to select from a list of about two dozen research fields.
Most respondents believe that cognitive science, integrated cognitive architectures, algorithms revealed by computational neuroscience, artificial neural networks, and faster computer hardware are the most significant research approaches for developing HLMI (Muller & Bostrom 8).
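The “intelligence explosion” passage quoted above describes a recursive process: each generation of intelligence builds a more capable successor, which builds a still more capable one. A toy simulation can make the “accelerating rate of growth” concrete. To be clear, this sketch is purely my own illustration of the concept, not anything from the paper; the recurrence and the growth constant k are arbitrary assumptions.

```python
# Toy model of the "intelligence explosion" idea Muller & Bostrom describe:
# a system at roughly human-level capability (1.0) designs a successor, and
# the size of each improvement scales with the designer's own capability.
# The growth constant k is an illustrative assumption, not a figure from
# the paper.

def intelligence_explosion(level=1.0, k=0.1, generations=10):
    """Return the capability trajectory over successive self-improvements."""
    trajectory = [level]
    for _ in range(generations):
        level *= 1 + k * level  # smarter designers make bigger improvements
        trajectory.append(level)
    return trajectory

for gen, capability in enumerate(intelligence_explosion()):
    print(f"generation {gen:2d}: capability {capability:6.2f}")
```

Because the growth rate itself grows with capability, the curve is super-exponential: modest early gains compound into runaway growth, which is exactly the qualitative behavior the “intelligence explosion” label refers to.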

Overall, this is a very intriguing study, and I think it provides an insightful look at where we are heading with artificial intelligence. It is very interesting to see the different degrees of response: full answers, no response at all, and in some cases refusals delivered with visible agitation. Only around 170 of the 549 invited experts responded; the AGI group had the highest response rate at 65%, and EETN the lowest at 10%. Muller and Bostrom also acknowledge a selection-bias issue. One of the invitees, Hubert Dreyfus, replied, “I wouldn’t think of responding to such a biased questionnaire. … I think any discussion of imminent super-intelligence is misguided. It shows no understanding of the failure of all work in AI. Even just formulating such a questionnaire is biased and is a waste of time” (Muller & Bostrom 13). Many of those contacted apparently felt that the idea of artificial super-intelligence was lofty, high-minded, and ultimately misguided. Even so, I thought the study’s questions and concerns were valid.
