- As AI content becomes less distinguishable from human-generated content, we’ll have to find ways to prove our humanity – possibly even through IRL verification.
- [[Appleton, Maggie - The Expanding Dark Forest and Generative AI|Maggie Appleton]] provides this example of institutional verification, where you “show up in-person to register your online accounts or domains”: ![](https://maggieappleton.com/images/posts/ai-dark-forest/human_google-1100.jpg)
- The idea of walking up to an Alphabet Registration Station to submit my ID and register my accounts is so repulsive it's hard to imagine. There is something that feels so inherently *lame* about the entire thing.
- My initial reaction to the article was that it was overkill. Sure -- we'll probably, in the near future, be looking for clues to establish an account's humanity. But the idea that we're going to be [[{6.1} artificial intelligence will increase the power of the stream|*so* flooded with content]] that's *so* good that we need these weirdly specific methods of detection is... a lot.
- I imagine myself sitting at a local restaurant for a meetup, crossing my fingers that my new Twitter BFF is a Real Life Human and not a carefully crafted AI catfish. I imagine the flood of comments on Instagram accounts accusing posters of using AI-generated content. I imagine the deluge becoming so strong that some of our friends are gaslit into thinking we don't exist.
- I imagine a glitch on an AI-run government account that leads to military action.
- This seems so far removed from my experience online. So cynical. So goofy.
- And yet... the “dark forest” is already a thing. AI is already a thing. The forest is getting bigger and AI is getting better. I don’t know if the future is as bleak as Appleton paints it, but it’s definitely about to be way less cool.
- [[2024-06-22]] it’s getting harder…
	- I wrote this note over a year ago, and AI-generated content is getting better and better. It’s still distinguishable from human-created content (for most of us), but the gap is narrowing.
	- One thing I’ve been thinking about a ton in relation to this is the new Drake “feature” on [Wah Gwan Delilah](https://www.wired.com/story/drake-plain-white-ts-ai-wah-gwan-delilah/). My immediate thought was that it was AI — like, without question. Everyone took it as proven real when Drake posted it on his Instagram story, but that’s not convincing to me at all – not that I think he deserves the benefit of the doubt. Maybe he just thought it was funny?
	- We’re also seeing it now with the teaser for Katy Perry’s new single. It’s so lackluster that the half-joking assumption is that it was at least written by AI.
	- How will artists confirm the legitimacy of their work? Does it matter, when the most immediate reactions tend to drive public perception? How do we navigate being unsure? Does this give artists an “out” when people don’t like their latest work? What are the implications for [[{1.2a2e2a} truth in record-keeping on the internet and post-ai]]?
	- At some point, will these questions become so ubiquitous and laborious that people just stop caring? [[{6.4b} how will our feelings about ai art change over time]]?
	- Maggie Appleton was writing about something slightly different, but I see this as another step toward her predictions. I don’t think this type of uncertainty is sustainable.
- [[2024-10-18]] [[{1.2a2e1} kayfabe content]]