Humans transmit information using language. Originally, we shared stories to pass information along. Then scholars and holy men began to write it down. Eventually, thanks to the printing press, written information could be produced and distributed to the masses. The internet has enabled all of us not just to consume information, but to produce written and spoken information as well.
We have more information at our fingertips than at any time in history. But how often do we stop and critically examine the sources of the content we consume?
Since anyone can publish information now, it is important to examine our information sources. Have you ever read a tweet sharing a blog post or other article, and the blurb stirred up enough emotion in you that you retweeted it?
Take this tweet for example:
“Everybody just downloads precompiled binaries from random websites. Often without any authentication or signature. You don’t need to exploit any security hole anymore. Just build a container, and have people load your malicious binary to their network.” https://t.co/NqdV89Gsrm
— John Arundel (@bitfield) May 29, 2018
If you just read the quote, you may think this is an article bashing containers. But there are a few things you can do to see if the author and the content are credible:
- Is the author of the tweet credible? He is railing on containers, but he is also a Kubernetes/Puppet/Terraform/Go consultant, so there is a good chance he is not out to bash container technology itself.
- What is the primary source about? The tweet is not the primary source; that would be the article linked from the tweet. To understand what’s being shared, you need to click that link and read the article to evaluate it. The title of the post is “The sad state of sysadmin in the age of containers”, and the author says right away that it is a rant “about containers, prebuilt VMs, and the incredible mess they cause because their concept lacks notions of ‘trust’ and ‘upgrades’”. So the post is a rant, but who wrote it?
- When was this written? The date the content was written can sometimes help set context for establishing credibility. This post was written 3 years ago. For this topic, that doesn’t mean much.
- Is the author of the primary source credible? The author of this post is a postdoc focused on high-dimensional data mining. So here we have a researcher in an academic setting who wrote a rant on how incredibly dangerous it is to use containers and pre-built VMs “as-is”. The rant is correct: it is incredibly dangerous to use containers and VMs in this way. But are people being this reckless in environments other than these research environments? Is it only in the environment where this academic works that things are this bad? We have no way to know, but it does shine a different light on the credibility of the post. All it took was the time needed to examine the source of the information.
Is the information out of context?
The phrase “victors write history” is a great example of what happens if information is created from just one side of the story. Since all of us are authors, we can write any part of the story down. Sometimes this happens innocently enough; we read a story without validating who wrote it. The previous section is a good example of this.
But there is a dark side to presenting information out of context as the entire truth. Victors often erase entire histories to ensure the world looks favorably on them. I once gave a presentation about a government worker whose speech was purposefully taken out of context to accuse her of using racial bias against white farmers to deny them access to government assistance. The accuser created so much media pressure with the edited video that the president fired the government worker. No one checked the context of the information, or the credibility of the person posting the video.
New reasons to examine information sources: Identifying Bots
Twitter admitted that malicious information was spread by bots during the 2016 US elections. Even though we know bots are used to make topics trend or even to harass people, people will still argue with bots. According to this article, malicious bots can be identified by the 3 A’s: Activity, Anonymity, and Amplification. If an account has an incredibly high number of tweets relative to its age, gives little insight into who is behind it, or is used only to retweet (amplify) a certain topic, it could be a bot. Someone has set up a machine learning algorithm to detect bots that promote propaganda, but you should examine their methodology to ensure it isn’t biased (a blog post for another time).
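To make the 3 A’s concrete, here is a minimal sketch of how the heuristic might be scored in code. Everything here is illustrative: the Account fields, the thresholds, and the three_as_score function are assumptions for the sake of the example, not part of any real detection tool or the article’s methodology.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int         # how old the account is
    tweet_count: int      # total tweets posted
    has_bio: bool         # profile gives insight into who is communicating
    has_photo: bool       # profile photo present
    retweet_ratio: float  # fraction of tweets that are retweets

def three_as_score(acct: Account) -> int:
    """Score an account against the 3 A's: Activity, Anonymity, Amplification.

    Returns 0-3; a higher score means more bot-like signals.
    The thresholds below are illustrative guesses, not validated values.
    """
    score = 0
    # Activity: very high tweet volume relative to account age
    tweets_per_day = acct.tweet_count / max(acct.age_days, 1)
    if tweets_per_day > 50:
        score += 1
    # Anonymity: little insight into who is behind the account
    if not (acct.has_bio or acct.has_photo):
        score += 1
    # Amplification: the account mostly retweets rather than posting original content
    if acct.retweet_ratio > 0.9:
        score += 1
    return score
```

For example, a week-old account with 5,000 tweets, no bio or photo, and a 95% retweet ratio would trip all three signals, while a years-old account with a full profile and mostly original tweets would trip none.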
Always examine information sources if content is emotionally charged
Since we live in a world where everyone publishes information, it’s more important than ever to critically examine information sources. This goes double for information that triggers an emotional response in us. Making sure we’re not reacting to propaganda or bots designed to get an instant response and amplification may be the new literacy.