Monopoly and Misuse: Google's Strategic AI Narrative

Google's new research exposes AI's real danger: the technology creates misinformation by design, and tech giants benefit by lobbying for restrictive laws that entrench their monopolies.


I deeply appreciate examples of irony in the world, and Google has just dropped a giant ironic rock in the middle of the AI conversation. As reported by 404 Media, a new research paper from Google offers a keen glimpse into the very real problems associated with AI.

There has been a lot of wild-eyed speculation about where this technology will take us (despite its being decades, maybe centuries, away from true general AI ... if such a thing is even possible). As Cory Doctorow stated in an interview with Jacobin, "As the eminent computer scientist they fired for coming up with this said, 'We’ve created stochastic parrots'" (Doctorow & Moscrop, 2023).

However, much of this speculation is self-serving. If I say, "This new thing I created could save the world, but it could also destroy the world!" it builds an automatic allure. Whether people are afraid or enthused, I'm winning as long as they buy into the underlying premise: my tool is powerful enough to do either of those things.

Well, in a June 2024 preprint paper, Google researchers offer a different insight: AI's real danger is that it was designed to create bullshit (in the technical, philosophical sense of the word; see Hicks et al., 2024). Specifically, the Google researchers emphasize that the problems these tools create or exacerbate are often "neither overtly malicious nor explicitly violate these tools’ content policies or terms of services" (Marchal et al., 2024, p. 16).

As Maiberg writes for 404 Media:

"This observation lines up with the reporting we’ve done at 404 Media for the past year and prior. People who are using AI to impersonate others, sockpuppet, scale and amplify bad content, or create nonconsensual intimate images (NCII), are mostly not hacking or manipulating the generative AI tools they’re using. They’re using them as intended." (Maiberg, 2024)

This brings me back to Doctorow's points about AI: "I think that when people worry about Skynet, what they mean is the imperatives of business are driving the world to the brink of human extinction" (Doctorow & Moscrop, 2023).

One of the more concerning things, from my perspective, is that tech giants like Google are positioning AI in a way that creates favorable conditions for legal changes that will benefit them (at the expense of everyone else). It serves the tech giants to point out the flaws (even in their own creations) because doing so allows them to lobby for restrictions that entrench their monopolies.

For instance, if laws are passed that restrict open-source AI models because they might be dangerous, Google wins ... even though Google helped create the problem in the first place.


References

Doctorow, C., & Moscrop, D. (2023, May 7). Cory Doctorow Explains Why Big Tech Is Making the Internet Terrible [Interview]. Jacobin. https://jacobin.com/2023/05/cory-doctorow-big-tech-internet-monopoly-capitalism-artificial-intelligence-crypto

Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38. https://doi.org/10.1007/s10676-024-09775-5

Maiberg, E. (2024, July 3). Google: AI Potentially Breaking Reality Is a Feature Not a Bug. 404 Media. https://www.404media.co/google-ai-potentially-breaking-reality-is-a-feature-not-a-bug/

Marchal, N., Xu, R., Elasmar, R., Gabriel, I., Goldberg, B., & Isaac, W. (2024). Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data (arXiv:2406.13843). arXiv. http://arxiv.org/abs/2406.13843
