Do you know what I do when there’s really damaging misinformation being touted, not the sort of misinformation inadvertently promulgated by otherwise reliable sources, but the completely ludicrous stuff coming from demonstrably spurious actors? Nothing. I do nothing. Calculated inaction. (Or at least 95% of the time. I too fail at something I consider ethically critical. Go figure! What a surprise!)

It does not deserve repudiation because it does not deserve attention. Don’t try to address the symptom; address the disease. Why is misinformation increasingly common? It’s not the result of individual behaviors; it’s the result of non-malevolent but nevertheless insidious systems that perpetuate certain behaviors across populations.

Not at the individual level (individuals can be reformed) but at scale, at the level of populations, causing exponential increases that no amount of individual behavioral change will be able to offset. It is almost guaranteed that a piece of misinformation that gains some measure of initial virality will spread faster than it can be quashed.
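To put rough numbers on that claim, here is a toy branching-process sketch of my own; the audience size, reshare rate, and debunking rate are invented parameters, not measurements from any real platform. The point it illustrates is just arithmetic: once each believer recruits more than one new believer per round, growth is multiplicative, and any fixed per-round rate of debunking is quickly swamped.

```python
# Toy branching-process model of viral spread (my illustration, not a
# measured result). Assumptions: each believer exposes a fixed audience,
# a fixed fraction of whom become new believers; debunking converts a
# fixed number of believers back per round. All parameters hypothetical.

def believers_after(rounds: int, seeds: int = 10, audience: int = 50,
                    reshare_rate: float = 0.05,
                    debunked_per_round: int = 10) -> int:
    """Believers remaining after exponential spread minus linear debunking."""
    believers = seeds
    for _ in range(rounds):
        recruited = believers * audience * reshare_rate  # multiplicative term
        believers = max(int(believers + recruited - debunked_per_round), 0)
    return believers

# Each believer recruits audience * reshare_rate = 2.5 others per round,
# so the linear debunking term barely registers after a few rounds.
for rounds in (1, 3, 5, 8):
    print(rounds, believers_after(rounds))
```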

And unlike a real place, the internet cannot actually be locked down, nor should it be. It has no fixed borders; it’s all fungible. Suppression only ever causes the suppressed to fester underground, out of sight, less able to be monitored in an open society.

The best way to address misinformation (and note, this will still not prevent it from spreading) is to just ignore it. If someone sends it to you, ignore it. Don’t share repudiations. Don’t share ‘splainations. Don’t share derogatory memes. Don’t unfriend people; social shunning only fortifies people’s belief in the erroneous information for which they’re being penalized. Just ignore it.

If they can be expected to have the self-discipline not to trust terrible information on its face, you can be expected not to do the one thing that will aggravate the spread of that bad information: publicizing it further, and with exactly the cocktail of emotionally charged couching that will make others more likely to fixate on it.

These systems of algorithmically driven information exchange, which reward our worst impulses and reflexively point us down dark alleys of increasingly radical misinformation, cannot be stopped, but they can be mitigated. If we’ve learned anything from this pandemic, let it be this transferable insight: virality online is not unlike virality in real life.

And rather than address the problems of content, let’s address the systems that manipulate content delivery. Extremism thrives online because it is driven by algorithms that mistake engagement for enjoyment. What we “like” is really just being used to determine what causes us to stay fixated longer. It’s a crude tool that, with no particular ill intent, figured out that we are most easily seduced into scrolling longer and watching longer if what is presented to us is negatively skewed or sensationally exaggerated.
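A minimal sketch of that proxy problem, assuming a feed ranked purely by predicted dwell time; this is my own illustration, not any platform’s actual code, and the `outrage` field and its multiplier are hypothetical stand-ins for content that is negatively skewed or sensationally exaggerated.

```python
# Sketch of a dwell-time-maximizing ranker (hypothetical, not any real
# platform's code). The objective never mentions outrage, yet outrage
# wins, because outrage reliably lengthens dwell time.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    base_dwell_seconds: float  # how long a neutral reader would linger
    outrage: float             # 0.0 (calm) .. 1.0 (enraging); assumed known

def predicted_dwell(post: Post) -> float:
    # Assumption: outrage multiplies dwell time. The 1.8 factor is invented.
    return post.base_dwell_seconds * (1.0 + 1.8 * post.outrage)

feed = [
    Post("Measured correction, with sources", 40.0, 0.1),
    Post("THEY are lying to you about X!!", 25.0, 0.9),
]
for post in sorted(feed, key=predicted_dwell, reverse=True):
    print(f"{predicted_dwell(post):5.1f}s  {post.title}")
# The outrage bait ranks first (65.5s vs 47.2s) despite having less to say.
```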

It’s pretty basic psychology that every single one of us (no exceptions) is more easily enraptured by things that stimulate a negative response, that cause our primitive amygdalæ to flare up and unleash a torrent of reality-distorting chemicals. Those chemicals hyperfocus our attention on the perceived threat in the most irrationally procrustean way possible, strapping chemical blinders to our brains that prohibit a more comprehensive and proportional view. We enter a cycle of fixating on threats because the system knows, without actually knowing anything, that we fixate on threats.

The first step to avoiding it is to be at least semi-aware that it’s happening, not in the moment but more generally. Try to cultivate the habit of knowing that what directs the delivery of content online is inadvertently designed to make sure you stay angry and utterly irrational. Instead of attacking symptomatic misinformation, try to sublimate the impulse into reforming these algorithms. Target the disease.

Campaign for algorithms that are ethically designed, that are instructed with less ambiguous goals, that do not indirectly conclude that what will keep our interest is the same as what is good for our overall wellness. Advocate for algorithms that are self-explaining, that can output how they arrived at the conclusions they did. Instruct them on how to avoid the kinds of exploitation that we’ve socially identified as unethical but which they have no idea are unethical, because they have no ideas or ethics.
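What might less ambiguous goals and self-explanation look like? A hedged sketch under assumptions of my own: the objective subtracts an explicit wellness penalty from predicted engagement, and the scoring function reports its own arithmetic. The `harm` estimate and the weight are inventions, not a real system’s parameters.

```python
# Hedged sketch of a two-term objective with a built-in explanation
# (my construction, not a deployed system). The weight and the harm
# estimate are hypothetical.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float  # predicted dwell time, in seconds
    harm: float        # 0.0 .. 1.0, estimated wellness cost (assumed given)

def score(item: Item, harm_weight: float = 60.0) -> tuple[float, str]:
    """Return (score, a human-readable account of how it was computed)."""
    penalty = harm_weight * item.harm
    total = item.engagement - penalty
    why = (f"{item.title!r}: engagement {item.engagement:.0f}s "
           f"- harm penalty {penalty:.0f} = {total:.0f}")
    return total, why

for item in [Item("Measured correction", 47.0, 0.1),
             Item("Outrage bait", 65.0, 0.9)]:
    total, why = score(item)
    print(why)
# With the penalty made explicit, the outrage bait loses (11 vs 41), and
# the system can output how it arrived at the conclusion it did.
```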
