In “AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations,” Jeffrey T. Hancock, Mor Naaman, and Karen Levy argue that the recent emergence of Artificial Intelligence-Mediated Communication (AI-MC) raises new questions about how technology may shape human communication. In their view, introducing AI into interpersonal communication has the potential to transform how people communicate, upend assumptions around agency and mediation, and raise new ethical questions.
Hancock, Naaman, and Levy do so by defining AI-MC and offering numerous examples of it. According to the paper, AI-MC is “interpersonal communication in which an intelligent agent operates on behalf of a communicator by modifying, augmenting, or generating messages to accomplish communication goals.” A relatively tame example is Gmail’s auto-generated suggested replies, in which an email recipient can select one of several responses produced by AI. Usually the involvement of AI is not disclosed, so the person on the other end most likely assumes the message was written by the sender. From my perspective, this lack of disclosure makes other examples of AI-MC seem anything but ethical. Deepfakes, for instance, are a form of AI-MC in which AI is used to fabricate audio or video of a person saying or doing something they never did. You may watch a deepfake without even realizing it, which is alarming, especially when deepfakes are used to manipulate voters. In political communication, given that traits like attractiveness can predict electoral outcomes and changes in vocal pitch can improve perceptions of leadership, AI could morph candidates’ faces in photographs, or even in real time, to match voters’ preferences.
What made this article especially interesting is the metaphorical scale on which different AI-MC features fall: some are tamer than others, but many raise serious ethical questions. When it comes to friends and partners, will AI-MC lead to less meaningful relationships? When AI enters the communication process itself, as with automated birthday wishes on social networking sites, will the relational value of expressions of intimacy or gratitude be undermined? Although useful, does AI-MC simply act as a filter on human representation? While I believe AI-MC is appropriate in certain environments, such as the workplace, I don’t think it belongs everywhere.
Overall, I found the article well written and thought-provoking. The discussion of political misrepresentation in particular gave me a bit of an idea.
Social media is so prominent that we now have thousands, if not more, of posts and tweets from previous presidents. If we structured all of that data and combined it with quotes from the previous ~40 presidents, which could be scraped from the internet, we would have a fairly impressive corpus of presidential thoughts, actions, and pursuits. If we then trained an AI on this data, gave it access to its own Twitter account, and told it to campaign for the 2024 election, I wonder what would happen.
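To make the idea slightly more concrete, here is a toy sketch of that “scrape, structure, generate” pipeline in Python. It is not how you would really build such a system: it stands in for a trained language model with a simple Markov chain, and the corpus file name (presidential_corpus.txt) is hypothetical, a stand-in for the scraped tweets, speeches, and quotes you would have to assemble yourself.

```python
import random
from collections import defaultdict

def build_chain(corpus: str, order: int = 2) -> dict:
    """Map each sequence of `order` words to the words observed after it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate_tweet(chain: dict, max_chars: int = 280) -> str:
    """Random-walk the chain until a dead end or the tweet length limit."""
    key = random.choice(list(chain.keys()))
    out = list(key)
    while key in chain:
        next_word = random.choice(chain[key])
        if len(" ".join(out + [next_word])) > max_chars:
            break
        out.append(next_word)
        key = tuple(out[-len(key):])
    return " ".join(out)

if __name__ == "__main__":
    # "presidential_corpus.txt" is a hypothetical file of scraped
    # presidential tweets, speeches, and quotes.
    with open("presidential_corpus.txt") as f:
        chain = build_chain(f.read())
    print(generate_tweet(chain))
```

Even this toy version captures what is unsettling about the thought experiment: every generated message is stitched together from things real presidents actually said, yet no human author stands behind any individual tweet.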
This is a very broad and simplified idea, but it is interesting to think about how, in modern times, an AI could run for president (until the in-person events happen and the cover is blown, that is).