
Big Tech’s Breach of Trust: How Your Personal Data Feeds the AI Gold Rush


Every day, as you compose emails or chat with friends on social media, major tech companies are quietly observing, collecting, and leveraging your data for their gain. The repercussions of this are vast and profound, far surpassing mere targeted advertisements. The current wave of technological innovation hinges on data-hungry artificial intelligence (AI) systems, with Big Tech at the helm, profiting from our digital footprints without our informed consent.

The Data Extraction Saga

Gmail, a daily communication tool for many, isn’t just an email service; it’s Google’s AI training ground. When users turn to Gmail’s “Help Me Write” feature, every keystroke may be processed to refine Google’s AI models. It’s a glaring intrusion, but it doesn’t stop there.

Meta (formerly Facebook) scraped a billion Instagram posts from public accounts to train its AI, sidestepping any formal permission. Similarly, Microsoft taps into Bing chats to sharpen its bot’s responses. These actions aren’t just invasions of privacy; they’re blatant displays of corporate overreach.

The Expanding Frontier

As tech giants continue down this path, the line between private and public data blurs. Google’s recent privacy policy update asserts its right to use “publicly available information” for AI training. The problem is the ambiguity of what counts as “publicly available.” This grey area gives Google and others wide latitude to collect and use data in ways its original owners never intended.

While some tech companies assure users that personal data is handled with care, the larger issue remains: the commodification of personal data for corporate growth, often without explicit, informed consent.

The Consequences of AI Training

The technological behemoths’ insatiable appetite for data extends beyond mere collection. AI models can unintentionally memorize and later reveal the personal information they were trained on. Samsung experienced this firsthand when employees leaked confidential information through ChatGPT, prompting an outright ban on AI chatbots in the workplace. This isn’t just a corporate issue; it has personal implications. Imagine having your private conversations or sensitive information regurgitated by an AI.

When we ask these companies about their privacy measures, their responses remain worryingly vague. Google claims its “filters are at the cutting edge,” yet admits to occasional data leaks. It’s a concerning thought: the digital repositories holding our most personal data are imperfect and potentially leaky vessels.

A Question of Control

This isn’t solely about privacy; it’s fundamentally about control. A simple photo shared online might inadvertently train an AI, thereby enabling it to identify a face or replicate an art style. As Ben Winters from the Electronic Privacy Information Center (EPIC) aptly states, there’s a “thin line between ‘making products better’ and theft.” The worrisome part is that tech companies believe they hold the pen.

The Path Forward

The future looks hazy. The tech industry’s rapid pace, coupled with lagging regulatory measures, has given these corporations carte blanche. Big Tech’s modus operandi is clear: accumulate as much data as possible, fast. Unfortunately, the onus is often on users to safeguard their privacy.

Take Google, for instance. It gives users a labyrinth of settings to navigate in order to safeguard their privacy. For the uninitiated, understanding these controls might as well require a computer science degree.

A Call for Accountability

Nicholas Piachaud from the Mozilla Foundation sums up the situation aptly: “Are we willing just to give away our right to privacy?” As we stand at this crossroads, it’s crucial to question, deliberate, and demand more from these tech giants. Their unchecked power and opaque operations underscore a need for robust, transparent privacy laws that prioritize individual rights over corporate interests.

As AI continues to redefine our world, the narrative shouldn’t be dominated by corporate interests. Individuals must be empowered with the knowledge and rights to make informed decisions about their digital legacy. The question remains: will Big Tech listen, or will it carry on with business as usual and keep stealing our souls?


1 Comment

  1. frank stetson

    We have got to get a lid on:

    1. data privacy, should be a lock box on our keyboards, email is mail – federal offense. And enough of the tracking, unless I allow it, for anything personal, especially by third parties.
    2. misinformation — should be allowed, but should be some sort of penalty IF caught knowingly spreading lies. This is the toughest one, got to support the 1A, but at some point, this must be brought down. Really hard to set up a fair process that supports the 1A.
    3. kids — what’s good for young minds and what should be banned
    4. fraud, scams, etc. — we need Eliot Ness level cops and huge punishments
    5. virus, malware, ransomware — ditto on Ness and punishments
    6. extra punishments for international — gig the country if they can’t police their own.

    Not an exhaustive list, but this is out of control. In my personal world, I am spending close to $1K a year just to gain better privacy, monitoring, and control. I stop any tracking I can, I use a VPN to hide where I am coming from, use aliases everywhere I can, never leave mouse tracks (credit cards, etc.) behind except for the real-time order, basically everything that I can hide, I do.

    For cause after being breached, I have a security package that scrubs the dark web, crushes any data aggregator out there who has my data, scrubs all browsers of any data at end of session, monitors finance accounts, credit cards, store cards, every card known to man. I have frozen, locked or both for all 3 credit bureaus (can still use cards), froze paycheck cashing, utility payments, if you can freeze it, lock it, or monitor it — it’s done. My laptop runs a virus package, a unique malware package, registry/drive/rootkit/driver cleaners, pc health and performance optimizer, two virus packages, real time browser mal-site monitor, ad blocking, cookie masking, and a very special deep-dive virus crusher that protects against the new shit even before it hits the virus protection sw companies (probably).

    All this stuff just to lower the risk caused by people trying to database for sales to politics, people trying to breach my digital walls, steal my stuff, or pretend to be me. $25 a week I spend to police myself and my digital property. I could roll for free, but it’s important enough to purchase premium protection features. IMO, for me.