Do you know what I do when there’s really damaging misinformation being touted, not the sort of misinformation inadvertently promulgated by otherwise reliable sources, but the completely ludicrous stuff coming from demonstrably spurious actors? Nothing. I do nothing. Calculated inaction. (Or at least 95% of the time. I too fail at something I consider ethically critical. Go figure! What a surprise!)
It does not deserve repudiation because it does not deserve attention. Don’t try to address the symptom; address the disease. Why is misinformation increasingly common? It’s not the result of individual behaviors; it’s the result of non-malevolent but nevertheless insidious systems that perpetuate certain behaviors across populations.
Not at the individual level (individuals can be reformed) but at scale, at the level of populations, where they cause exponential increases that no amount of individual behavioral change can offset. It is almost guaranteed that a piece of misinformation that gains some measure of initial virality will spread faster than it can be quashed.
And unlike physical places, the internet cannot actually be locked down, nor should it be. It has no fixed borders; it is all fluid. Suppression only ever causes the suppressed to fester underground, out of sight, where an open society can no longer monitor it.
The best way to address misinformation (and note: this will still not prevent it from spreading) is to just ignore it. If someone sends it to you, ignore it. Don’t share repudiations. Don’t share ‘splainations. Don’t share derogatory memes. Don’t unfriend people; social shunning only fortifies people’s belief in the erroneous information for which they’re being penalized. Just ignore it.
If the people sharing it can be expected to have the self-discipline not to trust terrible information on its face, then you can be expected not to do the one thing that will aggravate its spread: publicizing it further, wrapped in exactly the cocktail of emotionally charged couching that makes others more likely to fixate on it.
These systems of algorithmically driven information exchange, which reward our worst impulses and reflexively point us down dark alleys of increasingly radical misinformation, cannot be stopped, but they can be mitigated. If we’ve learned anything from this pandemic, let it be this transferable insight: virality online is not unlike virality in real life.
And rather than address the problems of content, let’s address the systems that manipulate content delivery. Extremism thrives online because it is driven by algorithms that mistake engagement for enjoyment. What we “like” is really just being used to determine what keeps us fixated longer. It’s a crude tool that, with no particular ill intent, figured out that we are most easily seduced into scrolling longer and watching longer when what is presented to us is negatively skewed or sensationally exaggerated.
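To make that mechanism concrete, here is a toy sketch, in Python, of the kind of objective such a ranker optimizes. Every name and number in it is my invention for illustration; no real platform works this simply. But the core failure survives the simplification: the only thing being scored is whether you’ll stay.

```python
# Hypothetical sketch of an engagement-maximizing feed ranker.
# All names and numbers are illustrative, not any real platform's API.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage_score: float    # 0..1: how negatively skewed or sensationalized
    predicted_dwell: float  # seconds the model expects a user to linger

def engagement_score(post: Post) -> float:
    # The only signal is "will this keep you here longer?" Outrage
    # correlates with dwell time, so it gets rewarded automatically,
    # even though no one explicitly asked for outrage.
    return post.predicted_dwell * (1.0 + post.outrage_score)

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, accurate correction", outrage_score=0.1, predicted_dwell=8.0),
    Post("THEY are HIDING the truth!!", outrage_score=0.9, predicted_dwell=12.0),
])
print([p.text for p in feed])  # the sensational post floats to the top
```

Nothing in that loop hates you. It simply never asks whether what holds you is good for you.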
It’s pretty basic psychology that every single one of us, no exceptions, is more easily enraptured by things that stimulate a negative response, that cause our primitive amygdalæ to flare up and unleash a torrent of reality-distorting chemicals. Those chemicals hyperfocus our attention on the perceived threat in the most irrationally procrustean way possible, strapping chemical blinders to our brains that prohibit a more comprehensive and proportional view. We enter a cycle of fixating on threats because the system knows, without actually knowing anything, that we fixate on threats.
The first step to avoiding it is to be at least semi-aware that it’s happening, not in the moment but in general. Try to cultivate the habit of knowing that what directs the delivery of content online is inadvertently designed to keep you angry and utterly irrational. Instead of attacking symptomatic misinformation, try to sublimate the impulse into reforming these algorithms. Target the disease.
Campaign for algorithms that are ethically designed, that are instructed with less ambiguous goals, that do not quietly conclude that whatever holds our interest must be good for our overall wellness. Advocate for algorithms that are self-explaining, that can output how they arrived at the conclusions they did. Instruct them to avoid certain kinds of exploitation that we have socially identified as unethical but that they cannot recognize as unethical, because they have no ideas or ethics.
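As a toy illustration of what a less ambiguous goal and a self-explaining score might look like, here is a hypothetical revision of the ranker sketched earlier. The weights and terms are, again, my invention; the point is only that the tradeoff becomes explicit and inspectable instead of an accidental byproduct.

```python
# Hypothetical counterpart to the earlier ranker: harm is an explicit,
# tunable penalty, and every score arrives with its own derivation.
# All names and weights are illustrative, not a real system's.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage_score: float    # 0..1: how negatively skewed or sensationalized
    predicted_dwell: float  # seconds the model expects a user to linger

def accountable_score(post: Post, wellbeing_weight: float = 2.0) -> tuple[float, str]:
    engagement = post.predicted_dwell
    penalty = wellbeing_weight * post.outrage_score * post.predicted_dwell
    score = engagement - penalty
    # Self-explanation: the algorithm reports how it reached its conclusion.
    why = (f"dwell={engagement:.1f}s, outrage={post.outrage_score:.2f}, "
           f"penalty={penalty:.1f}, score={score:.1f}")
    return score, why

posts = [
    Post("Calm, accurate explainer", outrage_score=0.1, predicted_dwell=8.0),
    Post("THEY are HIDING the truth!!", outrage_score=0.9, predicted_dwell=12.0),
]
for post in sorted(posts, key=lambda p: accountable_score(p)[0], reverse=True):
    score, why = accountable_score(post)
    print(f"{score:6.1f}  {post.text}  [{why}]")
```

With the wellbeing penalty in place, the calm post now outranks the sensational one, and anyone auditing the feed can read exactly why.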