In “Google, democracy and the truth about internet search,” Carole Cadwalladr argues that Google’s search algorithm is amplifying misinformation online and influencing the way its users think.
Cadwalladr shows this by giving numerous examples of phrases she typed into Google that produced alarming autocomplete suggestions. For instance, after she typed “are jews…” Google offered a choice of questions it thought she might want to ask: “are jews a race?”, “are jews white?”, “are jews christians?”, and lastly, “are jews evil?” Similar results appear when searching “are women…” or “are muslims…” What’s scarier is the number of articles and websites that come up on the first page of these searches: the top ten results “confirm” these twisted questions. Interestingly, these pages aren’t being hidden from us. Rather, they tend to be right in front of us, gathered on the first page of search results. People rarely look at the second page of results when searching on Google, so having this content appear on the first page can be extremely damaging to society, because that misinformation shapes how we think. In this way, giant corporations like Google or Facebook are doing the world a disservice by running such a skewed algorithm.
In an effort to look further into this, Cadwalladr reached out to Google, asking about its seemingly malfunctioning autocomplete suggestions. She received a response saying that “Our search results are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs – as a company, we strongly value a diversity of perspectives, ideas and cultures.”
How is Google not alarmed by the harm it is doing? Perhaps this has to do with not wanting to censor users. However, I believe the sharing of this type of misinformation crosses the line, and Google needs to take more responsibility. In America, where there is freedom of speech, do companies have a responsibility to censor information that is harmful and false? Or, for the sake of that right, should all beliefs be shared online regardless of the harm they can do to the way we think? I believe Google needs to clean up these results. At the very least, it could prevent its algorithm from surfacing this kind of content on the first page of results.
I also read this article and had the same opinion: Google needs to take more accountability for what is surfaced by its search engine without heavily censoring people’s free speech. My idea was that Google could issue a warning when people click on websites that do not use credible sources. This would not remove any content from the search engine, but it could spread awareness about source credibility and fake news.
Deirdre, your summary of this article was really interesting, and you bring up important points regarding accountability. It is extremely concerning that the Internet is such a vast place for misinformation and misconduct, with little to no barriers against either. It is alarming because what happens digitally has real-life consequences. I think that companies should be allowed to post warnings and regulate some information online, because the Internet is too powerful a tool to go unregulated when it is at everyone’s fingertips, including those with poor motives. Overall, I agree with Talia that there should be warnings when clicking on dangerous websites.