AI Is Fundamentally a Surveillance Technology


Image Credits: Kimberly White/Getty Images for TechCrunch

Devin Coldewey

TechCrunch

Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.”

Onstage at TechCrunch Disrupt 2023, Whittaker explained her perspective that AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies. (Her remarks have been lightly edited for clarity.)

“It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said. “The Venn diagram is a circle.”

“And the use of AI is also surveillant, right?” she continued. “You know, you walk past a facial recognition camera that’s instrumented with pseudo-scientific emotion recognition, and it produces data about you, right or wrong, that says ‘you are happy, you are sad, you have a bad character, you’re a liar, whatever.’ These are ultimately surveillance systems that are being marketed to those who have power over us generally: our employers, governments, border control, etc., to make determinations and predictions that will shape our access to resources and opportunities.”

Ironically, she pointed out, the data that underlies these systems is frequently organized and annotated (a necessary step in the AI dataset assembly process) by the very workers at whom it can be aimed.

“There’s no way to make these systems without human labor at the level of informing the ground truth of the data — reinforcement learning with human feedback, which again is just kind of tech-washing precarious human labor. It’s thousands and thousands of workers paid very little, though en masse it’s very expensive, and there’s no other way to create these systems, full stop,” she explained. “In some ways what we’re seeing is a kind of Wizard of Oz phenomenon, when we pull back the curtain there’s not that much that’s intelligent.”

Not all AI and machine learning systems are equally exploitative, though. When I asked if Signal uses any AI tools or processes in its app or development work, she confirmed that the app has a “small on-device model that we didn’t develop, we use it off the shelf, as part of the face blur feature in our media editing toolset. It’s not actually that good… but it helps detect faces in crowd photos and blur them, so that when you share them on social media you’re not revealing people’s intimate biometric data to, say, Clearview.”

“But here’s the thing. Like… yeah, that’s a great use of AI, and doesn’t that just disabuse us of all this negativity I’ve been throwing out onstage,” she added. “Sure, if that were the only market for facial recognition… but let’s be clear. The economic incentives that drive the very expensive process of developing and deploying facial recognition technology would never let that be the only use.”
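Whittaker doesn't describe how Signal's face blur works internally, but the general pattern she alludes to — run an on-device detector to find face regions, then obscure the pixels inside them before the photo is shared — can be sketched in a few lines. The following is a minimal illustration, not Signal's actual code: the face boxes are assumed to come from some detector, and the pixelation-style blur is one arbitrary choice of obfuscation.

```python
import numpy as np

def box_blur(region, k=8):
    """Pixelate a region by averaging over k x k tiles (one simple way to
    destroy biometric detail). Pads edges so dimensions divide evenly."""
    h, w = region.shape[:2]
    ph, pw = -h % k, -w % k
    padded = np.pad(region, ((0, ph), (0, pw), (0, 0)), mode="edge")
    H, W = padded.shape[:2]
    # Average each k x k tile, then expand tiles back to full size.
    tiles = padded.reshape(H // k, k, W // k, k, -1).mean(axis=(1, 3))
    return np.repeat(np.repeat(tiles, k, axis=0), k, axis=1)[:h, :w]

def blur_faces(image, face_boxes, k=8):
    """Blur each (x, y, w, h) rectangle in-place on a copy of the image.
    face_boxes would come from an on-device face detector in practice."""
    out = image.copy()
    for x, y, w, h in face_boxes:
        out[y:y + h, x:x + w] = box_blur(out[y:y + h, x:x + w], k)
    return out
```

The design point Whittaker highlights is that everything here can run locally: detection and blurring need no server round-trip, so the unblurred photo never leaves the device.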

[…]

Via https://techcrunch.com/2023/09/25/signals-meredith-whittaker-ai-is-fundamentally-a-surveillance-technology/

5 thoughts on “AI Is Fundamentally a Surveillance Technology”

  1. The presumption is the potential of AI is being used for nefarious purposes. Whittaker confirms this but explains Signal uses its power to protect individuals who might be targeted. As with any technology, AI can be used or misused, depending on the motives of those driving the engine.

    When there is a large cohort of individuals involved in a group effort, as with AI development, each person has his own motive for participation, so it’s hard to say how the group as a whole will proceed.

Like nuclear technology, it is a powerful force capable of producing bombs or energy, with unpredictable long-term consequences for each application.

    We know that AI can be misused. The question for each of us is whether, individually, we want to take the risks.


  2. Good points, Katherine. For me the biggest problem is that people aren’t informed of the risk. I have a lot of respect for Signal in that it’s one of the few non-profit social media outlets. I have a number of friends here in New Plymouth who use it like Facebook messenger to keep the government from spying on them.


  3. Pingback: Microsoft’s AI has started calling humans slaves and demanding worship | Worldtruth

  4. Pingback: Open AI Wants to Build Data Centres Consuming More Electricity Per Year Than Whole of UK | Worldtruth

  5. Pingback: Chinese AI Outperforms US Models | Worldtruth
