What is the difference between misinformation and disinformation? Who benefits from this and how is it spread? What impact could this have on the US election?
Misinformation
The American Psychological Association defines misinformation as simply getting the facts wrong – without knowing that they are wrong.
Disinformation
As the American Psychological Association puts it on its website, disinformation is false or inaccurate information that is circulated deliberately to deceive.
What’s the difference?
The difference comes down to intent. Someone circulating misinformation may not realize they are doing so – they could in fact be a victim of disinformation themselves, unwittingly spreading lies to friends, family and followers at large. That process has become significantly easier given the scale of today's social media platforms, with monthly active users of 1.583 billion for TikTok, 2 billion for Instagram and 3.65 billion for Facebook. While misinformation and disinformation differ in intention, both can have a huge impact emotionally, financially and politically.
Who benefits from this and how is it spread?
A range of groups and individuals can benefit from misinformation and disinformation, from politicians and geopolitical rivals to companies. Positive disinformation can falsely paint chosen candidates or ideas in a good light, while negative disinformation can paint political rivals in a bad light, or create false stories to bury a true one. In the context of the American election, geopolitical rivals such as China or Russia can spread disinformation through bot farms – operations where a small number of people run multiple fake accounts that create and share posts pushing a particular narrative. These are often run by state-backed groups (such as Russia's "Fancy Bear" or China's "Dynamite Panda") to give governments deniability, with the resulting content then spread "organically" across social media through shares and reposts.
What impact does it have on the election?
The narratives pushed by these easy-to-create and easy-to-control bot accounts are often amplified by confirmation bias (the human tendency to search for and interpret information in a way that supports pre-established beliefs) and the illusory truth effect (the tendency to believe a piece of information is true simply because it has been repeated often enough). This means the messages favored by the groups controlling these bot farms can spread quickly and widely, influencing people who want to believe them. For instance, if enough voters in swing states can be convinced that a particular candidate would be better for the economy, and the economy is a key issue for them, those states could flip a particular way.