In “Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems,” Bryson argues that AI has made incredible leaps in the past few years, specifically in data searching, but that it has also introduced many new perils for the future.
First, Bryson outlines the requirements of intelligence, which are: the capacity to map contexts to actions, the capacity to act, and the capacity to develop new actions and contexts and understand them. This is important because it allows us to differentiate between plant intelligence and the intelligence that researchers are trying to create. It is also important because, if a machine can meet those requirements, then it could potentially surpass human intelligence. This achievement could be catastrophic for mankind if not done correctly, which leads to the main focus of her article: as we develop new AI technologies, we need to keep transparency and safety in mind. There are a few different techniques and committees, she says, that focus on safety in AI, and we should use those to guide us in development because of the massive impact AI has on society.
I agree with Bryson in all facets, although I think there should be more work done on what human values are rather than on transparency in AI. The reason I say this is that the set of ethics we impose on AI will have to come from somewhere, and at this point we do not have such a set. Unfortunately, this problem is an almost impossible one because, by nature, humans can't agree on everything, so how are we supposed to decide what AI should value? Transparency, on the other hand, is still a very difficult problem, but I think it is much more solvable.