- Data voids can be exploited to promote political propaganda or other “disturbing content”.
- When there isn’t a lot of reputable content around a search query, media manipulators have the opportunity to fill that void.
- It isn’t [[what we think of as algorithmic radicalization is just how most of us surf the web|algorithmic radicalization]]; it’s good (shady) SEO.
- There are different types of data void vulnerabilities:
- **Breaking news**: Bad actors capitalize on the immediate confusion around breaking news events by quickly spreading rumors and manipulating conversations on social media. Because high-authority content is usually published shortly after an event, these voids tend to be “cleaned up naturally”.
- **Strategic new terms**: New phrases are created to “divert discourse and search traffic alike into areas full of disinformation”. This includes terms like “crisis actor” or “black on white crime”. These catchy phrases are meant to lure in curious searchers, but they’re also created with the intent that mainstream media will pick them up, thereby driving more traffic to the terms and the disinformation around them.
- [[2025-01-18]] In [[Klein, Naomi - 2023 - Doppelganger]], Klein mentions that when Trump appropriated the term “fake news”, we were “robbed of a useful phrase to describe the phenomenon” (pg. 145). This isn’t a data void, but I think it’s an interesting example of how co-opted phrases are a problem not only for what they say, but also for what we *can’t say* as a result.
- **Outdated terms**: These data voids occur when content creators stop publishing content around a term before users stop searching for it.
- **Fragmented concepts**: This involves creating distinct terms to separate “manipulated content from more popular content”.
- **Problematic queries**: People are sometimes looking for disturbing or taboo content. Because reputable content creators and publishers *aren’t writing about those things*, *reputable information* doesn’t show up in search results.
- This manipulation can also happen in recommender systems, like search suggestions and YouTube’s auto-play.
- This is where [[what we think of as algorithmic radicalization is just how most of us surf the web]] meets [[{1.2a2e2a1} algorithms increase the spread of misinformation]], I think…
- Algorithms amplifying misinfo make it easier for media manipulators to exploit data voids and snag people who are searching and browsing the web…
- [[2024-11-18]] And [[our engagement with media and culture are shaped by our epistemological frameworks]], so when you’re already encountering this information with a total lack of trust in institutions, these independent creators / manipulators can be more appealing.