Information Age Education Blog
Big Brother’s Growing Capability to Listen To and Censor You
“There is always an easy solution to every human problem—neat, plausible, and wrong.” (Henry Louis “H.L.” Mencken; American journalist, essayist, editor; 1880-1956.)
I am a regular reader of New Scientist, a weekly British magazine that covers a wide range of science-oriented topics. I think it is a very good publication. The following short article is quoted from the In Brief section of a recent issue (New Scientist, 6/6/2018). It provides a frightening glimpse into the growing capabilities of artificial intelligence and some possible uses of those capabilities.
F-BOMBS are out, and pleases and thank yous are in. An AI created by IBM translates offensive chatter into more polite language, while keeping the core message intact.
The team trained the AI on millions of posts from Reddit and Twitter, before giving it offensive posts it had not previously seen to make more palatable.
In almost every case, offensive text was successfully converted into less-offensive language, with the intended meaning remaining (arxiv.org/abs/1805.07685). [This link is to the original research article available online from Cornell University.]
In the future, the system could be used on social media and it will be able to handle “posts containing hate speech, racism and sexism”, says Cicero Nogueira dos Santos, one of the IBM team. [Bold added for emphasis.]
This article appeared in print under the headline “AI could mind online Ps and Qs.”
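The article describes a neural style-transfer system trained on millions of social media posts. The tiny sketch below is NOT that system; it is only a toy, dictionary-based substitution filter in plain Python, included to make concrete the basic idea of rewriting offensive wording while trying to keep the message. The word list and replacements are my own invented examples.

```python
# A toy "politeness filter": NOT the IBM research system described
# above (arxiv.org/abs/1805.07685), which uses neural models trained
# on millions of Reddit and Twitter posts. This only illustrates the
# idea of substituting milder wording while keeping the core message.

POLITE_SUBSTITUTIONS = {
    "stupid": "misguided",
    "shut up": "please stop",
    "idiot": "person I disagree with",
}

def politen(text: str) -> str:
    """Replace each listed offensive phrase with a milder one."""
    for rude, mild in POLITE_SUBSTITUTIONS.items():
        text = text.replace(rude, mild)
    return text

print(politen("shut up, that idea is stupid"))
# prints: please stop, that idea is misguided
```

Even this trivial filter shows the core worry of this post: the output is no longer exactly what the author wrote, and the reader has no way to recover the original wording.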
A Key Underlying Issue
I found this news quite scary. Every message I send out via the Internet passes through computers. Suppose that, rather than just passing through the computers, my messages are read and analyzed by them. Am I bothered by the idea that my most personal electronic messages might be read and analyzed by a computer? ABSOLUTELY YES!
Carrying this situation one step further, suppose the computer censors and/or revises what I write. Not only is Big Brother reading what I write, Big Brother is then also changing what I write. To put it bluntly, this is scary as hell!
Let’s consider this more deeply. We now have computerized voice-to-text transcription systems. This means a computer can listen to phone conversations or recordings, transcribe them into text, and analyze that text. We already have the capability to translate text back into speech, using a synthesized voice that matches the original speaker’s. So, we now appear to have a growing capability to censor/revise any electronic or recorded communication.
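The chain of capabilities just described can be sketched as a three-stage pipeline. In the sketch below, each stage is a trivial stand-in of my own invention (the "audio" is just text, and the "reviser" is a one-word substitution); a real system would plug in actual speech-recognition, text-revision, and voice-synthesis components.

```python
# A sketch of the censor/revise pipeline described above, with
# trivial stand-ins for each stage. These are hypothetical
# placeholders, not real speech-processing code.

def transcribe(audio: bytes) -> str:
    # Stand-in for voice-to-text: pretend the "audio" is UTF-8 text.
    return audio.decode("utf-8")

def revise_text(text: str) -> str:
    # Stand-in for the AI reviser: a single word substitution.
    return text.replace("damn", "darn")

def synthesize_voice(text: str) -> bytes:
    # Stand-in for text-to-voice: re-encode the text. A real system
    # would produce audio in a voice matching the original speaker.
    return text.encode("utf-8")

def censor_recording(audio: bytes) -> bytes:
    # The full chain: listen, transcribe, revise, re-voice.
    return synthesize_voice(revise_text(transcribe(audio)))

print(censor_recording(b"that damn machine"))
# prints: b'that darn machine'
```

The point of the sketch is the architecture, not the stand-ins: once each stage exists as working software, chaining them together to silently alter a recorded conversation is a few lines of glue code.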
The First Steps
You are probably familiar with the quotations:
“The road to hell is paved with good intentions.” The saying is thought to have originated with Cistercian abbot Saint Bernard of Clairvaux (1090-1153).
“The longest journey begins with the first step.” (Chinese proverb.)
I assume that IBM and the researchers developing the offensive chatter censoring/revising software that “translates offensive chatter into more polite language, while keeping the core message intact” have good intentions. However, their work, together with the work of many others who came before them, constitutes the first steps along a path leading to a huge loss of privacy for all of us.
Likely, social media companies that acquire and use such computer facilities will argue that they are doing so with the good intention of protecting their other users from “posts containing hate speech, racism and sexism.” Such companies can inform their customers that they are doing this in accordance with company policies for dealing with offensive content, and can require acceptance of those policies as a condition of using the company’s social media system.
Suppose that I am reading a message on a social network, and I encounter a notification that the original message has been changed/revised by an artificially intelligent computer program in order to adhere to the social networking company’s standards. The notification asserts that the meaning of the message has not been altered. I believe that, in this instance, I am reading a fake message. How do I know what the message actually said, and how can I be sure of its author’s actual meaning? Terms such as “hate speech,” “racism,” and “sexism” do not have precise meanings, and whether the interpretation and revisions are made by a human or by a computer does not change this. As far as I am concerned, this would be fake news.
What You Can Do
It seems likely that the scary future I am foreseeing will be accompanied by a number of legal battles. Alternatively, perhaps the situation will come down to how many customers a social media company might lose or gain if it starts to use such censoring/revising software. You can inform yourself about steps being taken in this censoring/revising direction and vote with your feet or dollars. It will be interesting to see what happens.
If you teach social studies, you might want to bring this topic to the attention of your students.
References and Resources
Google Scholar (2018). Speech recognition deep learning. Retrieved 8/18/2018 from https://scholar.google.com/scholar?q=speech+recognition+deep+learning&hl=en&as_sdt=0&as_vis=1&oi=scholart.
Moursund, D. (2018). Artificial intelligence. IAE-pedia. Retrieved 8/18/2018 from http://iae-pedia.org/Artificial_Intelligence.
Moursund, D. (5/21/2018). "Big Brother" is getting better at tracking you. IAE Blog. Retrieved 8/18/2018 from http://i-a-e.org/iae-blog/entry/big-brother-in-getting-better-at-tracking-you.html.
New Scientist (6/6/2018). Anti-swearing AI turns abuse on Reddit into polite chatter. Retrieved 8/18/2018 from https://www.newscientist.com/article/mg23831811-400-anti-swearing-ai-turns-abuse-on-reddit-into-polite-chatter/.
Wikipedia (2018). United States free speech exceptions. Retrieved 8/18/2018 from https://en.wikipedia.org/wiki/United_States_free_speech_exceptions.