In “Future Progress in Artificial Intelligence: A Survey of Expert Opinion,” Vincent Müller and Nick Bostrom examine concerns surrounding high-level machine intelligence, including the risks, real and perceived, that humanity faces as a result of artificial intelligence.
According to their survey, experts place a one-in-two chance on high-level machine intelligence being developed by 2050 and a nine-in-ten chance by 2075. They also estimate roughly a one-in-three chance that this development will turn out “bad” or “extremely bad” for humanity.
Essentially, the study concludes that there is a significant chance that high-level machine intelligence poses serious, potentially existential, dangers to our society, and that its impact should be well understood before we welcome it with open arms. While I agree with the assertions made in this paper, I’m not sure what sorts of studies could be commissioned to understand the effects of a technology that promises to be so pervasive that it completely transforms the social order we have created.
Max, I find it slightly horrifying to consider that high-level machine intelligence could be a reality by 2050. While we often joke about the “rise of the robots” narrative, I fear that one day it could come true. We often predict futuristic technological advancements, and, as discussed in the article I read, most of those predictions have come to fruition.