
Do you know what I do when there’s really damaging misinformation being touted, not the sort of misinformation inadvertently promulgated by otherwise reliable sources, but the completely ludicrous stuff coming from demonstrably spurious actors? Nothing. I do nothing. Calculated inaction. (Or at least 95% of the time. I too fail at something I consider ethically critical. Go figure! What a surprise!)

It does not deserve repudiation because it does not deserve attention. Don’t try to address the symptom; address the disease. Why is misinformation increasingly common? It’s not the result of individual behaviors; it’s the result of non-malevolent but nevertheless insidious systems which perpetuate certain behaviors across populations.

The spread happens not at the individual level, where individuals can be reformed, but at scale, at the level of populations, causing exponential increases that no amount of individual behavioral change will be able to offset. It is almost guaranteed that a piece of misinformation that gains some measure of initial virality will spread faster than it can be quashed.
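To put rough numbers on that intuition, here is a toy sketch in Python. Every rate in it is an invented assumption for illustration, not a measurement, but it shows why a fixed debunking effort loses once initial virality crosses a threshold.

```python
# Toy model of viral spread vs. debunking. Every number here is an
# invented assumption for illustration, not a measured rate.

def simulate(initial_shares, hours=12, growth=1.5, removals=400):
    """Each hour the circulating shares multiply by `growth`, then a
    fixed moderation effort removes `removals` of them."""
    shares = initial_shares
    for _ in range(hours):
        shares = max(0, shares * growth - removals)
        if shares == 0:
            break
    return round(shares)

# Break-even point: removals / (growth - 1) = 800 shares. Below it,
# a fixed effort wins; above it, spread outruns any linear response.
print(simulate(500))    # contained: 0
print(simulate(1000))   # explodes into the tens of thousands
```

The exact threshold is meaningless; what matters is the shape of the race: the spread compounds, while the response is linear.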

And unlike real places, the internet cannot actually be locked down, nor should it be. It has no fixed borders; it’s all fungible. Suppression only ever causes the suppressed to fester underground, out of sight, less able to be monitored in an open society.

The best way to address misinformation (and note, this will still not prevent it from spreading) is to just ignore it. If someone sends it to you, ignore it. Don’t share repudiations. Don’t share ’splainations. Don’t share derogatory memes. Don’t unfriend people; social shunning only fortifies people’s belief in the erroneous information for which they’re being penalized. Just ignore it.

If they can be expected to have the self-discipline not to trust terrible information on its face, you can be expected not to do the one thing that will aggravate the spread of that bad information: publicizing it further, and with exactly the cocktail of emotionally charged couching that will make others more likely to fixate on it.

These systems of algorithmically driven information exchange, which reward our worst impulses and reflexively point us down dark alleys of increasingly radical misinformation, cannot be stopped, but they can be mitigated. If we’ve learned anything from this pandemic, let it be this transferable insight: virality online is not unlike virality in real life.

And rather than address the problems of content, let’s address the systems that manipulate content delivery. Extremism thrives online because it is driven by algorithms that mistake engagement for enjoyment. What we “like” is really just being used to determine what causes us to stay fixated longer. It’s a crude tool that, with no particular ill intent, figured out that we are most easily seduced into scrolling longer and watching longer if what is presented to us is negatively skewed or sensationally exaggerated.
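As a concrete illustration of that confusion, here is a minimal sketch of a feed ranked purely by predicted dwell time. The posts and their predicted times are hypothetical; the objective is the point.

```python
# A feed ranked purely by predicted dwell time. Posts and their
# predicted watch/read times are hypothetical.

posts = [
    {"title": "Calm, accurate public-health update", "predicted_seconds": 12},
    {"title": "Sensationalized outrage headline", "predicted_seconds": 45},
    {"title": "Nuanced long-form explainer", "predicted_seconds": 20},
]

# The objective never asks "did this inform or satisfy the reader?";
# it only asks "how long did it hold them?" Outrage wins by default.
feed = sorted(posts, key=lambda p: p["predicted_seconds"], reverse=True)

for rank, post in enumerate(feed, start=1):
    print(rank, post["title"])
```

Nothing in that objective is malicious; it simply never measures what we would actually call enjoyment.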

It’s pretty basic psychology that every single one of us, no exceptions, is more easily enraptured by those things which stimulate a negative response, which cause our primitive amygdalæ to flare up and unleash a torrent of reality-distorting chemicals that hyperfocus our attention on the perceived threat, but in the most irrationally procrustean way possible, strapping chemical blinders to our brains that prohibit a more comprehensive and proportional view. We enter a cycle of fixating on threats because the system knows, without actually knowing anything, that we fixate on threats.

The first step to avoiding it is to be at least semiaware that it’s happening, not in the moment but more generally. Try to cultivate the habit of knowing that what directs the delivery of content online is inadvertently designed to make sure you stay angry and utterly irrational. Instead of attacking symptomatic misinformation, try to sublimate the impulse into reforming these algorithms. Target the disease.

Campaign for algorithms that are ethically designed, that are instructed with less ambiguous goals, that do not indirectly conclude that what will keep our interest is the same as what is good for our overall wellness. Advocate for algorithms that are self-explaining, that can output how they arrived at the conclusions they did. Instruct them on how to avoid certain kinds of exploitation that we’ve socially identified as unethical but which they cannot recognize as unethical, because they have no ideas or ethics.
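What might that look like? Here is one hypothetical sketch, with made-up field names, weights, and wellness signal rather than any real platform’s machinery: a scoring function that weighs reported satisfaction alongside engagement and emits a plain-language explanation of how each score was reached.

```python
# Hypothetical multi-objective, self-explaining ranker. Field names,
# weights, and the wellness signal are assumptions for illustration.

def score(post, engagement_weight=0.4, wellness_weight=0.6):
    engagement_term = engagement_weight * post["predicted_seconds"] / 60
    wellness_term = wellness_weight * post["reported_satisfaction"]
    explanation = (
        f"{post['title']!r}: engagement contributes {engagement_term:.2f}, "
        f"reported satisfaction contributes {wellness_term:.2f}"
    )
    return engagement_term + wellness_term, explanation

post = {
    "title": "Sensationalized outrage headline",
    "predicted_seconds": 45,       # holds attention well...
    "reported_satisfaction": 0.2,  # ...but users regret the time afterwards
}

value, why = score(post)
print(f"score = {value:.2f}")
print(why)  # a self-explaining ranker can show its work
```

The weights are a policy decision, not a technical one, which is exactly why they are worth campaigning over.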