With visualisations of the social media silos we all fear, Dr Mehwish Nasim tells us how we can break the cycle of misinformation and disinformation.
    Mehwish is a Computer Scientist working in the emerging field of social cybersecurity. Her research lies at the intersection of computer science, mathematical sciences, and sociology. She develops mathematical models of social influence to combat the spread of misinformation on social media, and uses large language models and computer vision techniques to identify hate speech online. Mehwish holds a PhD in Computer Science from the University of Konstanz, Germany, and works as a lecturer at the University of Western Australia, where she teaches Artificial Intelligence, Software Engineering, and Cybersecurity. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

    [Applause] I’m a computer scientist, a misinformation researcher, an academic, but I have also been dead for the past three years, according to ChatGPT. You must have heard about ChatGPT. It is an example of a large language model. Large language models, which we also call LLMs, are a class of artificial intelligence used to generate and process human language. They have been pretty useful: we can get personalized recommendations, we can get instant information, we can even indulge in conversation. They have made us believe that they can help us navigate the complicated world of digital information, and that’s where we get blindsided.

    AI-driven decisions have a dark side as well. There is always the possibility that somebody can use our data for nefarious purposes, and users may not know what is happening with their data. Bad actors can use that data for scams, or can create disinformation to cause harm.

    So the question today is: will we be able to combat disinformation in this new era of AI? Let’s see how these models work. LLMs are trained on a large amount of data. You can compare them with children: it is expected that responsible individuals, such as their parents, are going to train them. As those little minds grow up, they form neurological connections, take the knowledge they have gained in the past, extrapolate it, apply it in situations they have probably never seen before, and use it to make decisions. LLMs are also trained on large amounts of data, but guess what: sometimes that data is inaccurate, unfiltered, or even biased. As a result, these models have the potential to make wrong decisions. And even if these models are trained on accurate data, sometimes that data is so large that the models get confused and they start hallucinating.

    That is probably what happened when I asked ChatGPT about myself and it thought that I was dead. If there are so many problems with these models, then why do we believe them? Well, there is a little problem with human psychology: whenever a system shows even a hint of linguistic ability, we start associating intelligence with that system, and we start trusting it. Since ChatGPT talks just like us, we think it is very intelligent. In reality, language models look at the statistical trends in the data and then make predictions, or give you the answers.
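
    (A rough illustration of that “statistical trends” point: the toy sketch below predicts the next word purely from word counts. Real systems such as ChatGPT are neural networks trained on vast corpora, so this is only a simplified analogy; the tiny corpus and the code are illustrative assumptions added here, not material from the talk.)

        # Toy "next word" predictor built only from word counts (bigram frequencies).
        # Illustrative only: real large language models are neural networks trained on
        # vast text corpora, and the tiny corpus below is made up for this example.
        from collections import Counter, defaultdict

        corpus = ("the fires were caused by arson . "
                  "the fires were caused by climate change . "
                  "the fires were caused by climate change .").split()

        # The counts of which word follows which are the "statistical trends".
        follows = defaultdict(Counter)
        for current_word, next_word in zip(corpus, corpus[1:]):
            follows[current_word][next_word] += 1

        def predict_next(word):
            # The prediction is simply whatever continuation was most frequent in the data.
            return follows[word].most_common(1)[0][0]

        print(predict_next("by"))  # 'climate', because it appeared more often after 'by'
        # If the training data had said 'arson' more often, the prediction would flip,
        # regardless of which claim is actually true.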

    If malicious actors get hold of those models, they can tweak the output and still go undetected. You can call that disinformation. Once it is put on social media, there is a high chance that it will spread and change reality for a number of people.

    How do people influence each other on social media? We can actually model that mathematically. There is a very simple model, and it’s my favorite model, that only looks at how uncertain you are about your choices. An example: a number of you would have smartphones. Some of you must be sure about the choice that you have made, but some of you might have doubts about your phone. Let’s say you have an iPhone and you are reasonably certain about your choice, but it so happens that you start interacting with people who own Android phones, and they start persuading you to change your phone. They tell you how bad your phone is and how good their phone is. When you keep interacting with those people, and given that they are far more certain about their choice than you are, you will get influenced; you might change your opinion and switch to an Android phone.
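
    (The talk does not name the model, but this kind of uncertainty-driven influence can be sketched in the spirit of bounded-confidence opinion-dynamics models, where less certain people are pulled further towards more certain ones. The parameters and numbers below are illustrative assumptions, not results from the speaker’s research.)

        # Sketch of uncertainty-weighted opinion change, loosely in the spirit of
        # bounded-confidence opinion-dynamics models. All numbers are illustrative.
        import random

        class Agent:
            def __init__(self, opinion, uncertainty):
                self.opinion = opinion          # -1.0 = firmly iPhone, +1.0 = firmly Android
                self.uncertainty = uncertainty  # 0 = completely sure, 1 = very unsure

        def nudge(listener, speaker, rate=0.3):
            # One interaction: the listener drifts towards the speaker's view.
            diff = speaker.opinion - listener.opinion
            # Views too far outside both comfort zones are simply not heard, which is
            # how isolated, polarized communities keep reinforcing themselves.
            if abs(diff) <= listener.uncertainty + speaker.uncertainty:
                weight = listener.uncertainty * (1.0 - speaker.uncertainty)
                listener.opinion += rate * weight * diff

        me = Agent(opinion=-0.1, uncertainty=0.9)                              # leaning iPhone, but unsure
        android_fans = [Agent(opinion=0.9, uncertainty=0.2) for _ in range(5)]  # very certain peers

        for _ in range(30):
            nudge(me, random.choice(android_fans))

        print(round(me.opinion, 2))  # ends close to +0.9: the unsure agent has switched sides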

    Back in 2019 and 2020, in Australia, we had unprecedented bushfires. We collected some data on social media, and we found that there were two polarized communities. There was one group of people who discussed arson as the cause of the bushfires, and there was another group of people who talked about climate change being the cause of the bushfires, and the two communities barely interacted with each other. As a result, the opinions in those two communities kept getting reinforced, even if some of those opinions were misinformed. So now the question is: can we combat it?

    Well, we all need to believe that humans are very powerful. Even if AI has the potential to generate a plethora of misinformation, it is us humans who share that information. We conducted a psychological study using a gaming simulator, and we found that participants who were hasty in sharing messages, and those who were driven by emotions, ended up sharing a lot more misinformation than people who paused, pondered, and looked at fact-checking websites before sharing the messages. All of us can act as dead ends when it comes to sharing misinformation on social media. We need to encourage healthy debate among different communities on social media, and we can be the people who stop the spread of misinformation.

    A couple of years ago, the Australian government introduced an AI technology called Robodebt. Unfortunately, it sent wrong notices to a number of people, which caused a lot of financial and mental stress.

    Had there been humans in the loop, and had there been regulations, that might not have happened. So we need to encourage governments to speak to platform owners and to those who create AI technologies, to regulate them. Although I am a misinformation researcher working at the intersection of AI and the social sciences, I do not have a perfect solution to the problem of misinformation on social media in this era of AI. But when we are using AI technologies, we need to be mindful. We need to be careful when we are trusting the algorithms. We should not believe everything we read or hear on social media, because AI sometimes lies as well. And perhaps you should not believe me either, because, after all, I’m already dead, according to ChatGPT. Thank you. [Applause]
