The AI research combatting the online spread of false information
Craig O'Callaghan
Updated Sep 09, 2024
Sponsored by Loughborough University
You don’t have to look too hard for recent examples of how the online spread of false information has significant real-world consequences. Whether you are trying to keep up with the latest political news, or just doing your online shopping, you can’t always be sure that the information you’re exposed to is accurate.
Innovative research by Dr Nick Hajli from Loughborough University digs deep into the challenge of preventing false information from spreading online, harnessing artificial intelligence to reliably assess the legitimacy of content and creators.
We spoke to Dr Hajli – who also leads Loughborough’s International Business, Strategy and Innovation group – to learn more about his pioneering research and how AI could be used not only to detect false information, but curtail its spread before it can cause harm.
What are some of the greatest challenges in tackling the spread of false information online?
Social media platforms have millions of users continuously generating content. The sheer scale of this information flow makes it challenging to monitor and verify content in real time.
Disinformation campaigns use advanced techniques, including AI-generated deepfakes, which are increasingly difficult to detect with traditional methods. Distinguishing between genuine users and sophisticated bots is complex due to the bots' ability to closely mimic human behaviour.
Implementing strict regulations can conflict with the principles of free speech, making it challenging to create policies that are both effective and fair.
Disinformation can also originate from various sources, including state actors, ideological groups, and individuals with different motivations, complicating the identification of root causes and appropriate responses.
How does your research address these challenges?
My research leverages machine learning and text mining to analyse large datasets of tweets, aiming to identify patterns and detect malicious bots early.
Actor-Network Theory helps in understanding the interplay between human and non-human actors (bots) in social media networks, providing insights into how disinformation spreads and how it can be countered.
By creating tools that detect harmful social bots, my research aims to mitigate the spread of false information before it gains traction. Investigating the mechanisms through which social bots influence public opinion helps in designing more targeted interventions.
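The interview does not specify Dr Hajli's model, but the general approach it describes can be illustrated with a minimal sketch. The feature names, weights, and thresholds below are purely illustrative assumptions, not the research's actual method: real systems would learn these from labelled data rather than hard-code them.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List


@dataclass
class Account:
    """A simplified social media account (hypothetical structure)."""
    handle: str
    tweets: List[str]
    posts_per_hour: float  # assumed posting-rate feature


def duplicate_ratio(tweets: List[str]) -> float:
    """Fraction of tweets that are exact repeats of an earlier tweet.

    Bots often post near-identical content; humans rarely do.
    """
    if not tweets:
        return 0.0
    counts = Counter(tweets)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(tweets)


def bot_score(acct: Account, rate_threshold: float = 10.0) -> float:
    """Combine two behavioural signals into a score in [0, 1].

    The weights (0.5 each) and the posting-rate threshold are
    illustrative placeholders, not empirically derived values.
    """
    score = 0.0
    if acct.posts_per_hour > rate_threshold:
        score += 0.5
    score += 0.5 * duplicate_ratio(acct.tweets)
    return score


def is_likely_bot(acct: Account, cutoff: float = 0.5) -> bool:
    """Flag an account when its combined score crosses the cutoff."""
    return bot_score(acct) >= cutoff
```

In practice this kind of scoring would be one input to a trained classifier over many behavioural and textual features, but it shows the core idea: quantify behaviours where bots and humans diverge, then flag accounts before their content gains traction.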
What can social media companies do to mitigate the spreading of false information?
Social media companies are increasingly responsible for implementing content moderation policies to curb the spread of false information. They need to invest in more sophisticated detection technologies and human oversight.
Companies must be transparent about their content moderation practices and accountable for their decisions. This includes clear communication with users about why certain content is flagged or removed.
In the future, social media companies are likely to enhance their use of AI for real-time content analysis, collaborate with fact-checking organisations, and implement stricter verification processes for accounts.
Given the rise of deepfaked audio and video, do you believe people can still correctly identify disinformation?
As disinformation techniques become more sophisticated, it will be increasingly difficult for individuals to distinguish between genuine and false content, particularly with the rise of deepfakes.
Enhancing digital literacy is crucial. Educating users on how to critically evaluate information sources and recognise disinformation techniques can empower them to make more informed decisions.
If you were able to make one lasting change to social media platforms, what would it be and why?
A lasting change would be to implement comprehensive verification systems for both accounts and content.
Verification systems can significantly reduce the spread of false information by ensuring that only credible and authenticated sources gain visibility.
This approach addresses the root of the problem by preventing disinformation from reaching large audiences, thereby reducing its impact on public discourse.
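One way to ground the idea of content verification is cryptographic signing: a platform (or a verified publisher) signs content it has authenticated, and anyone can later check that the content was not altered. The sketch below uses a shared-secret HMAC purely for illustration; a production provenance system would use public-key signatures and certified identities, and the function names here are assumptions rather than any platform's real API.

```python
import hashlib
import hmac


def sign_content(content: bytes, publisher_key: bytes) -> str:
    """Produce a tamper-evident signature for a piece of content.

    HMAC-SHA256 with a shared secret stands in for a real
    public-key signing scheme in this sketch.
    """
    return hmac.new(publisher_key, content, hashlib.sha256).hexdigest()


def verify_content(content: bytes, signature: str, publisher_key: bytes) -> bool:
    """Check that content still matches the signature issued for it.

    compare_digest avoids timing side-channels when comparing MACs.
    """
    expected = sign_content(content, publisher_key)
    return hmac.compare_digest(expected, signature)
```

Under such a scheme, unsigned or tampered content could be down-ranked or labelled rather than amplified, which is the mechanism by which verification limits the reach of disinformation.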
As Head of Content, Craig is responsible for all articles and guides published across TopUniversities and TopMBA. He has nearly 10 years of experience writing for a student audience and extensive knowledge of universities and study programs around the world.