Fact-Checking? Not Anymore
Lectures aren’t what they used to be.
Back in the day, professors amassed a collection of well-researched information, partly from peers and colleagues, and partly from their own work.
From these notes and selected course textbooks, they created a syllabus. Then, each week, the professor delivered that information to students. Some students took copious notes and others just listened, but all were accountable for the material on the midterm and final.
Technology and the Internet have changed that approach.
Students today are as much a part of the research team as the professor. Any topic introduced in class can be googled on the spot. What’s an event horizon? You can have an answer in 0.76 seconds. Is the Pacific Northwest Tree Octopus really endangered? It takes 0.59 seconds to reach the Zapatopi homepage and find out.
Hoax or reality?
Most people recognize that not everything on the Internet is true. Some of the information is blatantly false; another portion may be built around inductive leaps, faulty syllogisms, or other types of fuzzy thinking.
Analyzing content for veracity and checking facts takes time and requires more than a surface read. After all, the Pacific Northwest Tree Octopus site looks real: it has tabs for FAQs, sightings, activities, links, and media coverage. Someone even posted photos of the creatures.
What appears plausible is not always real, even if we wish it to be true. Therefore, we must still teach fact-checking, at least until artificial intelligence steps in to do it for us.
Fact-checking made simple
Automation is taking the guesswork out of fact-checking.
Researchers have built algorithms that can detect fabricated stories, doctored photos, and even faked videos. Those social media posts about extravagant vacations that never happened may become a thing of the past. In time, so will fake news.
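How might such a detector work? None of the research systems mentioned here are described in detail, but a common baseline for text-based detection is a supervised classifier trained on labeled examples. The sketch below is purely illustrative: the headlines, the "reliable"/"fake" labels, and the model choice (TF-IDF features with logistic regression via scikit-learn) are assumptions for demonstration, not anyone's published method.

```python
# Minimal, hypothetical sketch of a text-based misinformation classifier.
# The headlines and labels below are invented for illustration only;
# real systems train on large, carefully labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists confirm water found on the lunar surface",
    "Local council approves new library budget",
    "Miracle fruit cures all known diseases overnight",
    "Rare tree octopus spotted climbing Seattle skyscraper",
]
labels = ["reliable", "reliable", "fake", "fake"]

# TF-IDF turns each headline into a weighted word-frequency vector;
# logistic regression learns which word patterns correlate with each label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["Tree octopus declared endangered by forest rangers"]))
```

In practice such models need far larger, carefully curated training sets, and even then, as noted below, their accuracy remains limited.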
Artificial intelligence is beginning to shoulder that responsibility.
To combat the flood of misinformation that has invaded the Internet and, by extension, the media, Harvard and MIT are backing projects that explore how artificial intelligence can improve the quality of information available to everyone. Three projects stand out: Tattle Civic Technologies (tracking misinformation on WhatsApp), the Rochester Institute of Technology (detecting deepfakes), and Chequeado (fact-checking support for journalists in Latin America).
Fake news is like gossip: it spreads much faster than the truth, and it’s harder to eradicate.
Even MIT admits that we’re nowhere near being able to rely exclusively on AI for our fact-checking. The artificial intelligence algorithms used to ferret out falsehoods are accurate only two-thirds of the time.
The work isn’t easy. Judging trustworthiness is complicated: some algorithms weigh roughly 900 variables to score how reliable a news source might be. Until the technology catches up, we might be better off labeling sources with simple good, better, and best ratings.
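That 900-variable figure hints at a weighted-scoring approach: many signals about a source are combined into one trust score, which could then be bucketed into the coarse good/better/best tiers suggested above. The sketch below is a hypothetical illustration; the signal names, weights, and thresholds are invented, not drawn from any of the projects mentioned here.

```python
# Hypothetical sketch: collapsing per-source signals into a single
# trust score, then bucketing it into good/better/best tiers.
# Real systems reportedly weigh hundreds of variables; the three
# signals and their weights here are invented for illustration.
WEIGHTS = {
    "cites_primary_sources": 0.5,   # fraction of articles citing originals
    "correction_policy": 0.3,       # 1.0 if the outlet publishes corrections
    "headline_body_match": 0.2,     # how well headlines match article text
}

def trust_score(signals: dict) -> float:
    """Weighted average of normalized (0..1) signals."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def trust_label(score: float) -> str:
    """Map a 0..1 score onto coarse good/better/best tiers."""
    if score >= 0.8:
        return "best"
    if score >= 0.5:
        return "better"
    return "good"

example_source = {
    "cites_primary_sources": 0.9,
    "correction_policy": 1.0,
    "headline_body_match": 0.7,
}
score = trust_score(example_source)
print(f"{score:.2f} -> {trust_label(score)}")
```

Scaling from three invented signals to hundreds of real ones, and choosing defensible weights for each, is exactly where the hard research lies.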
AI fact-checking won’t be accurate until machine learning has built considerable experience in identifying fake news, hoaxes, and misinformation. When it does, even the tree octopus hoax may become a thing of the past.