I visited GAP's flagship store in the Marina before class today, where they have their "World Orb" that scans people's irises to prove they are not Ai... or, as the display puts it, "universal proof of human." This monstrosity was created by Sam Altman, CEO of OpenAI. According to Gizmodo, World has reportedly amassed more than 33 million users, and it initially targeted people in developing nations to sign up for the app, promising them Worldcoin (crypto) if they traded their eye imprint, a practice that some of the people first asked to join the platform called "exploitative and deceptive." Imagine selling a unique biometric identifier to a faceless corporation in exchange for an effectively valueless virtual coin. Fun!
Many brands are now using Ai to create what some may call artwork and what more people would call "Ai slop." The December 2024 issue of Vogue contained a Skechers ad that looks fine at first glance, but Ai then (even more than now) tended to be unrefined, leaving behind unfinished and confusing background elements. The 2022 days of "extra fingers" in Ai 'art' are long gone.
Some fashion companies have gone so far as to replace models with Ai-generated figures. A Guess advertisement run in Vogue's July 2025 issue featured an Ai "model" created by the agency Seraphinne Vallora. The BBC reports one user on X writing: "Wow! As if the beauty expectations weren't unrealistic enough, here comes Ai to make them impossible. Even models can't compete."
There are a lot of Instagram profiles that now post nothing but Ai-generated fashion designs, often spotted by how flawless the figures' skin is, the blurry backgrounds, and the sheer velocity of posting. As a professional illustrator, I know how long art takes to produce; one person cannot pump out 20 fully realized, colored, highly detailed images every few days. I would link you examples, but I block every one that I encounter.
On the topic of art: Ai art cannot be copyrighted (under US law) because it lacks the required "human authorship." The creation is substantially made by the tool, and it is theft-based: prompted, the system collages imagery, text, and/or video from artists around the world who never agreed to be part of this. Early-2020s Ai imagery even left remnants of artists' signatures in the new "art" it created.
Generative Ai caricatures may seem fun, fast & free, but almost everything "free," especially in the digital realm, comes at a cost, from storing our data to selling it to facial-recognition companies like Clearview AI. New York Times tech reporter Kashmir Hill, author of the book "Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It," reports that Clearview's database is composed of billions of images that have been "scraped" from websites including Facebook, Instagram, LinkedIn, YouTube and, strangely, Venmo. Web scraping, or "harvesting," is the automated collection of data that is then channeled into spreadsheets or fed into other websites.
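To make "harvesting" concrete, here is a minimal, purely illustrative sketch of how a scraper pulls image links out of a page's HTML. The page content below is a hardcoded stand-in I made up for the example; a real face-scraping operation fetches millions of live pages and pipes the results into a database.

```python
# Illustrative sketch only: harvesting <img src=...> links from HTML.
# PAGE is a made-up stand-in; a real scraper downloads pages at scale.
from html.parser import HTMLParser

PAGE = """
<html><body>
  <img src="https://example.com/profile1.jpg" alt="user photo">
  <img src="https://example.com/party_pic.png" alt="background crowd">
  <a href="/next">next</a>
</body></html>
"""

class ImageHarvester(HTMLParser):
    """Collects the src of every <img> tag it sees."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src":
                    self.images.append(value)

harvester = ImageHarvester()
harvester.feed(PAGE)
print(harvester.images)
```

Point a loop like this at photo-sharing sites instead of one hardcoded string and you have, in miniature, the pipeline Hill describes.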
Under the Computer Fraud and Abuse Act (CFAA), scraping data one is authorized to access isn't against the law, however unethical; still, scrapers can be held liable under other laws, such as breach of contract (violating a site's terms of service) or trade secret misappropriation.
This data has been used by hundreds of law enforcement agencies, including ICE, DHS, and the FBI, and can recognize faces even when they are covered by covid masks. The world's population passed 8 billion in 2022, yet police are now using Clearview's database of around 30 billion faces. How can they have more faces than people alive today? Because many of us have unknowingly had multiple images of our faces uploaded: not just selfies or ID photos, but images of us walking down the street or in the background of a club or private party. These photographic matches are not foolproof. Clearview's mistakes have led to arrests and jail time, especially for people of color, whom the program confuses more often, in part because such programs are trained by and on white adults.
While investigating this story, Hill talked to police officers who entered her image into the database; upon doing so, a Clearview representative immediately called to tell the cops to stop talking to her.
Google started developing Ai facial-recognition technology in 2011 but held off on releasing it because of the danger it could pose if a dictator were in office. Luckily our leader is very sane and intelligent. Some might say a complete gentleman.
In unrelated news, Ai is now used to create CSAM, also known as child sexual abuse material. There are posters in San Francisco's deep SoMa linking to a Wired article stating "Grok Creates Ai Child Porn." Grok is a generative-Ai chatbot developed by Elon Musk's "X" platform (formerly Twitter) that launched in 2023. Clare McGlynn, a law professor at Durham University and an expert on image-based sexual abuse, says, "Some people's inhumane impulses are encouraged and facilitated by this technology without guardrails or ethical guidelines."
Interestingly, in 2025, X (formerly Twitter) explicitly banned using its data to train third-party Ai models. According to TargetInternet.com, "Market forecasts project a compound annual growth rate for AI-driven scraping tools as high as 39.4%."
NBC has reported that Elon Musk's data facility in Mississippi has greatly contributed to air, light, and sound pollution in the area. The nonprofit Southern Environmental Law Center says the site's gas turbines release hazardous chemicals, including formaldehyde, and that their noise carries for hundreds of feet. Neighbor Krystal Polk noted that local wildlife left the area after the turbines were built, and local Taylor Logsdon says her children can't sleep through the nonstop noise.
BlackRock, a private equity giant, sealed a $33.4 billion deal as of March 2nd to buy and take private the US energy company AES Corp, which operates in Indiana and Ohio. Owning this power grid lets them divert energy to mass-produce Ai data centers, which consume and/or pollute communities' air and water. In part, this is being paid for by raising customer fees (a report from @Ajahallh cites bills upwards of $400 a month). The acquisition comes after an earlier $11.5 billion purchase of TXNM Energy.
Greenpeace Canada (@GreenPeace_Canada) retweeted: "The World Hunger Organization announced that we're entering into a 'water bankruptcy era'. Not 'stressed'. Not 'in crisis'. BANKRUPTCY. And that we've done irreversible damage to our water systems. Please think instead of running to Ai for 'every damn thing'. We're trading comfort for survival at this point."
According to Google's own "Ai Overview": "Ai's data centers consume massive amounts of freshwater for cooling high-performance processors, with large facilities using up to 5 million gallons daily. Cooling water usage is projected to rise 870% in the coming years. In 2023, data centers consumed an estimated 17 billion gallons of water, a figure projected to rise to 68 billion gallons by 2028."
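Taking the overview's own figures at face value (17 billion gallons in 2023, 68 billion projected for 2028; nothing else assumed), that is a fourfold jump in five years, or roughly a 32% compound annual growth rate. A quick back-of-envelope check:

```python
# Back-of-envelope check on the quoted figures:
# 2023: 17 billion gallons -> 2028 (projected): 68 billion gallons.
start, end, years = 17e9, 68e9, 5
growth_factor = end / start              # total multiple over five years
cagr = growth_factor ** (1 / years) - 1  # implied compound annual growth rate
print(f"{growth_factor:.1f}x total, ~{cagr:.0%} per year")
# → 4.0x total, ~32% per year
```

Eerily close to that 39.4% growth rate forecast for Ai scraping tools: the water follows the data.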
Less than 3% of the water on Earth is fresh water, and most data centers use only fresh water, via evaporative cooling. Some centers have started using saline (salt) water, and some even sit in the ocean, but salt water is highly corrosive to equipment. Anti-corrosive chemicals are therefore sometimes added to the cooling water; treatment systems cannot filter these chemicals back out, and the poisoned water then gets dumped into rivers.
Besides harming the planet: in 2025, Time.com reported that MIT's Media Lab found that OpenAI's ChatGPT harms critical-thinking abilities, and that the longer the tool was used, the lazier users got. The study found lowered brain activity while using Ai, reduced creativity and problem-solving, and bypassed deep memory processes. Luckily, Donald Trump "love[s] the poorly educated. We're the smartest people, we're the most loyal people," as stated in his 2016 Nevada victory speech.
Ai is getting harder and harder to avoid and is now built into Google Search. Many people are switching to DuckDuckGo.com, which is private by default (instead of collecting our data) and offers a clear choice between "talking" to the Ai version of the site or not. DDG also features "no Ai training" on conversations.
To close, I'll leave you with this: @PerfectUnion reported today that an Ai safety officer at Meta gave OpenClaw, an Ai assistant, access to her email, and it deleted her whole inbox.
So it's clear: even the professionals cannot control these machines. Society could use a time-out. Step away from your nearest "time-saving app," drink some fresh water before it's all gone (or held hostage by Nestlé), and take a deep breath, while we still can.
References:
https://www.bbc.com/news/articles/cgeqe084nn4o
https://time.com/7295195/ai-chatgpt-google-learning-school/
https://www.wired.com/story/grok-is-generating-sexual-content-far-more-graphic-than-whats-on-x/