Host Myles Bess breaks down the research around why our brains can so easily make us believe that fake news is real news.
SUBSCRIBE to Above the Noise: [https://www.youtube.com/abovethenoise?sub_confirmation=1]
ABOVE THE NOISE is a show that cuts through the hype and takes a deeper look at the research behind controversial and trending topics in the news. Hosted by Myles Bess and Shirin Ghaffary.
*NEW VIDEOS EVERY WEDNESDAY*
Ever have an argument with someone, and no matter how many facts you provide, you just can’t get that person to see it your way? One big reason for this is cognitive bias, which is a limitation in our thinking that can cause flaws in our judgement. Confirmation bias is a specific type of cognitive bias that motivates us to seek out information we already believe and ignore or minimize facts that threaten what we believe.
So how can you overcome confirmation bias? It’s tricky, because brain research shows that once a person believes something, facts don’t do a very good job changing their mind. Studies show that when people are presented with facts that contradict what they believe, the parts of the brain that control reason and rationality go inactive. But, the parts of the brain that process emotion light up like the Fourth of July.
In this video, Myles dives into the research and offers some tips to combat confirmation bias.
* What is confirmation bias? *
Confirmation bias is the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories.
* What is fake news? *
Fake news is a type of hoax or deliberate spread of misinformation. For a story to be fake news, it has to be written and published with the intent to mislead.
SOURCES:
How Fake News Outperformed Real News on Facebook
https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook?utm_term=.ljW98PZ9p#.ya49Vow9A
Most Americans Who See Fake News Believe It
https://www.buzzfeed.com/craigsilverman/fake-news-survey?utm_term=.qu5xrd1x3#.bnlmxlvmd
Neural Bases of Motivated Reasoning: An fMRI Study
http://www.uky.edu/AS/PoliSci/Peffley/pdf/Westen_neural%20basis%20of%20motivated%20reasoning_An%20fMRI%20study.pdf
Dopamine and Confirmation Bias
http://www.jneurosci.org/content/jneuro/31/16/6188.full.pdf
How to Convince Someone When Facts Fail
https://www.scientificamerican.com/article/how-to-convince-someone-when-facts-fail/
Follow KQED:
KQED: http://www.kqed.org/
Facebook: https://www.facebook.com/KQED/
Twitter: https://twitter.com/KQED?lang=en
Teachers, follow KQED Learning:
KQED Learning: https://ww2.kqed.org/learning/
Facebook: https://www.facebook.com/KQEDLearning/
Twitter: https://twitter.com/KQEDedspace?lang=en
About KQED
KQED, an NPR and PBS affiliate in San Francisco, CA, serves Northern California and beyond with a publicly supported alternative to commercial TV, radio, and web media.
Funding for Above the Noise is provided in part by S.D. Bechtel, Jr. Foundation, David Bulfer and Kelly Pope, Horace W. Goldsmith Foundation, The Dirk and Charlene Kabcenell Foundation, The Koret Foundation, Gordon and Betty Moore Foundation, Smart Family Foundation, The Vadasz Family Foundation and the members of KQED.
Remember that "ChatGPT makes you dumber" paper from MIT?
I've just read it while preparing for the next iteration of my "GenAI in Research" course, and it's even worse than I feared. I was sceptical of the sensationalist headlines, but expected only some nuanced misinterpretation of the results. I'd already seen posts pointing out the small sample size, or that the observed changes in neural connectivity might be due to a familiarisation effect. The reality turned out to be much worse.
I'm sharing this because I feel that articles with titles like "The truth is a little more complicated" (e.g., the recent Conversation coverage) validate the original text by treating it as a proper scientific paper. It is not. The text violates basic requirements for presenting research findings. Honestly, the paper is such a mess that I don't even know where to start...
Okay, let's start with the design of the experiment. Participants were split into 3 groups and got 20 minutes to write an essay on a given topic. One group was allowed to use ChatGPT, another could use a search engine, and the third could only use their brain. The sessions were repeated 3 times, and then, 4 months after the first session, there was a 4th session where participants switched modes, i.e., those who had been using ChatGPT were not allowed to use anything, and those who had been using only their brain switched to using ChatGPT.
55 participants completed 3 sessions, and then the authors removed one of them to make the distribution nicer. Yes, you read that right: they simply removed an observation, with no stated justification. Here's a direct quote from the paper: "55 completed the experiment in full (attending a minimum of three sessions, defined later). To ensure data distribution, we are here only reporting data from 54 participants (as participants were assigned in three groups, see details below)." Seriously, they wanted the number of observations to be divisible by three, so they dropped a data point? That's not how science works.
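To see why quietly dropping an observation matters, here is a toy illustration with entirely invented numbers (none of this data is from the paper): removing a single inconvenient point can shift a group mean enough to change the apparent result.

```python
# Toy illustration with made-up numbers (NOT data from the paper):
# dropping one inconvenient observation can change the story a group tells.
from statistics import mean

brain_only = [62, 64, 66, 58, 90]  # hypothetical essay scores; 90 is an outlier
chatgpt = [70, 71, 69, 72, 68]     # hypothetical essay scores

print(mean(brain_only))               # -> 68: the two groups look similar
print(mean(chatgpt))                  # -> 70
print(mean(sorted(brain_only)[:-1]))  # -> 62.5: drop the outlier and a gap appears
```

With all five points the groups differ by two; quietly discarding one point nearly quadruples the gap. That is exactly why any exclusion needs a pre-registered, stated rule, not a post-hoc preference for a "nicer" distribution.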
Anyway. At least we have established that there were 3 groups in the experimental design. However, one of them mysteriously disappears, and the final results are reported for only two groups! So maybe there were 3 groups, maybe 2. Or maybe 5. I'm not joking: there is a plot (Figure 12) that suddenly shows 5 different groups without any explanation of what is going on.
One of my favourite figures is Figure 7 (attached to this post). Can you guess what those p-values correspond to? Check the paper and you might be surprised. Also note that the figure caption says "Percentage of participants within each group who provided a correct quote," while the axis label says "Percentage of Participants Who Failed." So never mind the p-values: the authors can't even decide whether they're measuring success or failure.
The most interesting part is still to come, but apparently this post is already too long for LinkedIn, so I have to continue in the comments.
On anecdotes versus objective data: the relative value of each and, more importantly, the fact that they serve different purposes in different contexts.
How to see past your own perspective and find truth
The more we read and watch online, the harder it becomes to tell the difference between what's real and what's fake. It's as if we know more but understand less, says philosopher Michael Patrick Lynch. In this talk, he dares us to take active steps to burst our filter bubbles and participate in the common reality that actually underpins everything.
Why Do So Many People Share and Believe Fake News?
Fake news spreads across the Internet like wildfire, and it might even spread more quickly than real news! Hosted by: Hank Green.
In 44 episodes, Adriene Hill teaches you Statistics! This course is based on the 2018 AP Statistics curriculum and introduces everything from basic descriptive statistics to data collection to hot topics in data analysis like Big Data and neural networks.