Marc Fawcett-Atkinson [00:00:00] When I’m writing a story, I’ll try to go to a place. It’s intangible, but you do get a sense of what it looks like, what it smells like, how people talk, how they interact, all those little details which a computer can’t capture. It can’t exist socially as a human does. By definition, it’s a machine.
Patricia [00:00:34] Welcome to another episode of The AI Purity Podcast, the show where we explore the intersection of artificial intelligence and the pursuit of truth. I’m your host, Patricia, and today we have a very special guest joining us. He is no stranger to the world of investigative journalism, with a keen focus on pressing issues such as food systems, climate, disinformation and the environment. A reporter and writer for Canada’s National Observer, his dedication to uncovering the truth has earned him numerous accolades, including a Webster Award nomination for Environmental Reporting in 2021 and recognition from the Canadian Association of Journalists for his work on disinformation. Today, we’ll talk about his experiences, insights, and expertise as we explore the evolving landscape of journalism in the age of artificial intelligence. Join us as we unravel the complexities of AI-driven disinformation and the vital role of ethical journalism in safeguarding the integrity of information. Without further ado, let’s welcome Marc Fawcett-Atkinson to The AI Purity Podcast. Hi, Marc, how are you doing today?
Marc Fawcett-Atkinson [00:01:36] Good. How are you? Thanks for – that was quite the intro. Thank you.
Patricia [00:01:39] Thank you. Well, we are so glad that you can join us today. We would love to get our audience to know you a bit better. Could you share your story of how you began your career as a reporter and writer, and how you joined Canada’s National Observer?
Marc Fawcett-Atkinson [00:01:53] Okay, I’d say it probably started when a friend and I worked on a documentary. This would have been 6 or 7 years ago, covering water politics in New Mexico. And that kind of got me hooked on reporting, talking to people, interviewing, kind of that exploration and curiosity, which led me to do a master’s in journalism. And then shortly after, I started with the National Observer. At the National Observer, all of our work focuses on climate change, and initially I was covering food systems. I started off doing food-related issues linked to the climate, which then led me into plastic pollution and the petrochemical industry, and that kind of fed into disinformation in the last few years. So I still cover food, I still cover petrochemicals and toxins, but I’ve also kind of opened this wheelhouse of disinformation.
Patricia [00:02:48] And these are exactly the relevant issues we should be talking about today. And I wanted to ask you, what inspired you to explore the relationships between people and their social and physical environments? And how does this perspective inform your journalistic work?
Marc Fawcett-Atkinson [00:03:04] I think it’s hard to understand the world without looking at those relations. I guess I approach my reporting from a relational perspective. So I think about, you know, how are events, how are politics, how is our environment influenced by how we relate to each other and how we communicate, essentially, and how those dynamics play out? I’ve always kind of had an interest in those links, particularly in terms of the environment: how do the ways we relate to each other and understand ourselves in our environment shape our actions within it? Almost like geography. That’s probably the closest academic discipline I would have come from. And in terms of how that informs my reporting now, that’s the broad framework within which I try to approach the environmental questions I’m writing about: how do these different actors relate to each other?
Patricia [00:04:02] And yeah, that goes right in with my next question: how do you approach the storytelling aspect of your environmental reporting to engage and inform readers about critical issues without overwhelming them?
Marc Fawcett-Atkinson [00:04:15] That’s the eternal balance. Part of it, I think, is trying to find the story, the narrative thread, which honestly can be a bit hard to do with news. So I’ll try to see: is there kind of a story arc to the piece? Is there a flow to the information that feels natural? Some of it is getting good at figuring out which information is relevant or not. And usually, and I do this both in my interviews and when I’m writing, I try to approach it from the perspective of someone who doesn’t know anything. That’s a great perspective to come from. Yeah, exactly. And I start that process at the interview level, usually. I’ll ask questions that I know the answer to, and often the other person knows that I know the answer, if it’s an environmental organizer or even an academic. But I want them to articulate the concept, or the argument they’re making, in a really clear way, because then I can go and say, okay, this is shaping how I’m going to write the story. But also, I could actually use that quote in the piece and have them explaining what they’re trying to say and what the context is. But in terms of writing, a lot of it... like, I have really good editors. But yeah, a lot of it is just trying to simplify it, not simplify, but figure out what’s the core issue or the core question at play. And I think part of it is just experience to some degree, and reading a lot of other news stories about, like, just anything. And you get a sense of, oh yeah, that’s relevant, that’s not. And even in terms of the writing, I’ll try to take out adjectives, I’ll try to take out excessive words, to really pare it down to: what am I trying to say? Why does this matter? Who cares about it? Why should they care? Yeah.
Patricia [00:06:17] Amazing. And you talked about this briefly earlier, about how you got started in environmental issues and reporting. But what drives this focus for you today? And what significance do you believe these topics hold in the broader context of your journalistic work?
Marc Fawcett-Atkinson [00:06:32] I think for me, it’s like... I’m in Vancouver, looking out across at the mountains, and there’s almost no snow. So to some degree the answer’s right there. You know, we’re in a climate crisis, and the weather is erratic. Right now in BC it’s too warm, and we’re looking ahead at a really terrifying fire season and drought season. And also my writing, particularly on pesticides and toxins, has been scary, would maybe be the best word to use. Especially since the more you write about it, the more issues there are. There’s a certain, I don’t want to say, yeah, it’s almost a frustration and anger. So it feels like a good way to use the skills I have to expose and highlight a lot of the environmental challenges that we’re facing. To some degree, I wouldn’t say I’m an activist. I’ve never been an activist. I don’t like protests. I don’t take that super public approach, but I definitely have a conscientiousness. Like, yeah, I care for the world we’re in. And it’s also work that feels morally comfortable, even if it’s small and...
Patricia [00:07:47] It’s a subtle activism, I would say.
Marc Fawcett-Atkinson [00:07:49] Yeah. And the results aren’t necessarily measurable directly, but you hope at least that they’re having some impact.
Patricia [00:07:58] Absolutely. Well, I want to pivot a little bit away from environmental issues. We were talking about our social environments, and I don’t think there’s anything more relevant to our society right now than artificial intelligence. And that’s what we really wanted to talk about today. AI Purity is a platform dedicated to AI text detection, which is why we wanted to speak with journalists and hear their perspective on this issue. So I wanted to ask, how do you think AI is currently impacting the journalism industry, and in what ways do you see it influencing the way stories are researched and reported?
Marc Fawcett-Atkinson [00:08:34] There’s kind of two answers to that. One is in the journalism industry, I’d say particularly on the management end, there’s a push and an interest in seeing how it can substitute for reporters. Which raises two concerns for me. One is, you know, the jobs and the labor aspect, and the other one is straight-up accuracy. Like, occasionally, just for kicks, I’ll go on ChatGPT and put in a prompt. I actually did it today, because I was looking for a source and I was like, oh, I wonder if it’ll generate something for me. And... How was it? Well, there’s two issues. One is that it does generate something that sounds accurate, but it doesn’t cite anything. It gives me no references, so I can’t cross-check it. And when I have, out of curiosity, asked it to write a news story, they are consistently inaccurate and false. So aside from the labor issue and the fact that we do need skilled reporters who can verify information and articulate that information in a way that’s not only clear but also compelling (this is my other criticism of text-based AI: it’s just boring), none of the verification or safeguards that are embedded in the editorial process are in the AI system, at least from what I’ve seen and experienced. And my other concern, on the disinformation side, is just people writing fake articles or using it to generate them. And some of the experts I’ve talked to have said, you know, it’s not that the AI will be coming up with new material or new disinformation, it’s that you can increase the scale of what you’re spreading exponentially, because instead of needing people to actually write the content, or think up the content, you can have a machine do it and just blast it out. And whether that’s on fake news websites, fake blogs, Facebook posts, whichever platform you have, it just makes the scale exponentially bigger. And that, I think, is my other concern. That’s not directly related to journalism per se, but more to the wider information ecosystem.
Patricia [00:10:57] Yeah, I wanted to touch on that as well, because I wanted to get your perspective on platforms, not necessarily news sources, using AI text generators to create online content specifically. How do you think it’ll affect the credibility and authenticity of information in general?
Marc Fawcett-Atkinson [00:11:14] I think it’ll make it a lot harder for people to filter information and to know which information is valid, partly because it’ll just be more common and partly because people get used to it. So for me, the biggest area of responsibility, the way I see it, is on the educational system, honestly, like public schools. And it starts at a very young age, making sure that kids know how to filter information and know how to identify sources, which is basic stuff. I learned this when I was in grade six or so, and that kind of continued through university. But helping people be much more aware of how information can be generated, and what blocks that or not, I think will be key. It’s a bit of an unknown world, though, and it is rather scary, just the amount of fake information that can be pumped out there. And like I said, social media is already pretty good at generating a whole whack of stuff, even with actual people behind it. So it’s not that the concept of verification is new. I think it’s that you need to be much more attuned to it. And AI can be much harder to differentiate, because it can imitate basic... yeah, it can imitate non-boring human writing pretty darn well.
Patricia [00:12:43] Yeah. And the more it’s being used, the harder it becomes to decipher if it’s AI-generated or human-written. I mean, I feel like that’s true of any content you see online, and especially for the younger generation. You said earlier that by grade six you could already tell, and I totally agree with that. But I don’t feel like it’s the same for the younger generation. I think they may be a little more susceptible to believing fake news. So that’s why we come on here and try to be the platform that teaches people how to spot these types of AI-generated content, how to decipher what is truth and what isn’t. And so I wanted to ask you, could you share your thoughts on the ethical considerations surrounding the use of AI in journalism, or in text generation in general, particularly in the context of preserving the integrity of information and combating disinformation?
Marc Fawcett-Atkinson [00:13:39] From my perspective, there’s kind of two aspects to that. One is on the reporting end. And I don’t know what this would look like, because journalism isn’t a regulated industry, I would say. It’s not like engineers or lawyers, who need to pass a test regulated by a college. We’re really working based on each company’s journalistic standards and practices, and the broader ecosystem of ethical journalistic practice. So I think there is a responsibility, if you do use AI, to be very transparent about it and to always, always, always verify it. Like, I do think there is a space for it as a tool. Occasionally, I’ll use it to just get into the writing mode if I’m having blank-page fear and I can’t get going, you know, to kickstart that process. But then once I’m in, I’m writing, and I’ve verified all the information, and I’m not copy-pasting it, essentially. It’s one tool out of the other tools in the toolbox. But it can’t be a substitute for your verification work, and honestly, your craft work as well. Another example I’ll use is that I often use Otter.ai, which is a transcription service, because it accelerates my work speed by about eight times. Instead of needing to hand-transcribe, I can get a transcript in 30 seconds, right? So there’s an example: it’s a very useful tool, but it’s not writing the stories for me. It’s one element of that process. And then the other angle is, I think The New York Times’ recent pushback on AI use of its material is a really interesting development. I agree with their pushback, because what we generate is valuable, and it’s intellectual property. And, you know, there’s a reason we ask people to either donate or subscribe, in the case of the National Observer, to read our material, because it costs something to produce. And I think allowing AI to just draw from the material we produce and then essentially plagiarize it brings up a lot of moral and ethical and labor issues as well. So that’s the other angle to that.
Patricia [00:16:17] Yeah, I totally agree. And how can journalists navigate this evolving landscape of AI-driven disinformation, and how do we ensure accurate reporting to counteract the potential influence of false narratives on public perception?
Marc Fawcett-Atkinson [00:16:34] As a journalist, educate yourself as much as possible on whatever it is you’re writing about, with actual people who are experts in it, or reports, or, you know, peer-reviewed research. Usually for me, experts and peer-reviewed research are my two go-tos, so that I know enough about the context to be able to say, okay, this doesn’t really make sense. And also, in my work, citing those same kinds of sources. It has definitely pushed me to rely much more on peer-reviewed work and direct interviews with people. And, you know, none of this is new; journalists have been doing this for ages. But I think it puts an even bigger emphasis on the need to do that and to have that verification process. Yeah. And when you have a newsroom with the resources, like the fact-checking process that magazines would use, for instance, it’s great, because then you have even more sources and you have another person going through and checking everything. It depends on what kind of news organization you’re in, whether that’s possible or not. But at least take in some of the rigor and the standards from that and bring it into your work, I think.
Patricia [00:18:00] Yeah. Highlighting direct sources, as you’ve always done, I think that’ll really set the tone and make the difference. If you see an article that doesn’t have any references or citations, that could be a very obvious sign that it might be AI-generated content. So, thank you for that. And do you believe that the development and deployment of AI detection tools like AI Purity is essential to fight against disinformation? And how might these tools contribute to maintaining online integrity?
Marc Fawcett-Atkinson [00:18:34] I think they’re an important part of the picture, for sure. And one thing I have noticed with them is... I’m not enough of a tech person to know how either the AI works or the AI detection works, right? So I think, for me personally, what helps inspire trust in AI detection is having some knowledge of how it’s actually figuring out whether a text, or an image, or whatever content you’re looking at, is AI-generated or not. Just a basic understanding of how that process works. But it’s definitely, like I said, another tool in the toolbox that I think is super useful, because in the same way that AI scales the amount of information that can go out there, detection tools can help scale the speed at which you can look through information and flag issues that you then put through a more rigorous fact-checking process. It helps with that initial scanning. That’s kind of how I see it playing in. It makes me think, too, of professors or scholars who are trying to make sure their students aren’t plagiarizing. Again, it’s a helpful tool to catch that, but the idea that that’s the only standard you use to decide whether someone’s plagiarized or not can be complicated ground, because you do want to dig a bit more and see what’s going on. Where is this material coming from? In the case of a student, what does the person say about it? And use some of those human skills, I guess, to make the final decision. So yeah, it’s a tool. And thinking about it in terms of a tool, I think, is the way to go about it.
Patricia [00:20:22] Yeah. Yeah, no, I agree. Like you said, it is a tool. It’s not anything that can replace the human ability to make that sort of judgment yourself. I mean, the way AI text detectors work is quite similar to AI generation, in that they draw on a huge database of what’s been written, which is why, like you said earlier, it’s very easy to plagiarize already existing bodies of work, especially online, because that’s where AI takes it from. So if they can see that it has been written somewhere else, then they can say so. They pretty much look for the perplexity and burstiness of the sentences. Basically, the way humans write sentences is not the same as a computer, so detectors can tell those subtle little things, you know. So it is a tool. It’s not something you can, I think, completely use as a basis on its own. But like you said, it is very helpful and easy. You still have to do the work, I guess, and make your own assessment and judgment to really know whether what you’re reading is correct or not.
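[Editor’s note: As a rough illustration of the “perplexity” and “burstiness” signals Patricia mentions, here is a minimal Python sketch. The unigram model is a toy stand-in for the large language models real detectors use, the sample strings are invented, and this is not AI Purity’s actual method; it only shows the intuition that predictable, uniform text scores differently than varied human prose.]

```python
import math
import re
from collections import Counter

def sentences(text):
    """Crudely split text into sentences on terminal punctuation."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def words(text):
    return re.findall(r"[a-z']+", text.lower())

def unigram_perplexity(text, corpus):
    """Perplexity of `text` under a unigram model fit on `corpus`.
    Real detectors score text with a neural language model; a unigram
    model is only a stand-in. Lower perplexity means more predictable
    text, which detectors often treat as a weak machine-generation signal."""
    counts = Counter(words(corpus))
    total = sum(counts.values())
    vocab = len(counts) + 1
    toks = words(text)
    log_prob = 0.0
    for tok in toks:
        # Laplace smoothing so unseen words don't zero out the probability.
        p = (counts[tok] + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(toks), 1))

def burstiness(text):
    """Coefficient of variation of sentence length. Human writing tends
    to mix short and long sentences; model output is often more uniform."""
    lengths = [len(words(s)) for s in sentences(text)]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    var = sum((n - mean) ** 2 for n in lengths) / (len(lengths) - 1)
    return math.sqrt(var) / mean

sample = "The cloud hung over the city for hours. People noticed. Few knew why."
reference = "A gas leak at a refinery sent a cloud over the city. People noticed it."
print(unigram_perplexity(sample, reference), burstiness(sample))
```

[A production detector would combine many such signals with a trained classifier rather than thresholding either number alone.]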
Marc Fawcett-Atkinson [00:21:25] But I think the scale question is big for me. On both ends, the internet has allowed so much material to be at your fingertips. You just need a tool to help work through all of it.
Patricia [00:21:40] It makes the process faster. And like you said earlier with using transcription tools, right? It just makes everything easier, especially with the amount of stuff that’s out there. You can’t really go through all of it at once, or at least if you want to save some time, that’s where AI tools become a bit handy, I would say.
Marc Fawcett-Atkinson [00:22:01] But even, and this isn’t an AI tool per se, for the piece I did that you mentioned, the one that got an award nomination, we used a web scraper for Facebook posts that scraped every Facebook ad on political issues in Canada for, I think, a year or something. And that allowed links to emerge that would have been possible to see otherwise, but it would have taken three weeks to a month to actually do all that manually, whereas the scraper did it in three days or something. Two days. Yeah.
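[Editor’s note: Marc doesn’t name the tool his team used. As one hedged sketch of that kind of collection, Meta’s public Ad Library API exposes political and issue ads in queryable form; the endpoint, parameter, and field names below follow Meta’s public documentation but may have changed, and the token is a placeholder.]

```python
import requests

# Placeholder token: real Ad Library access requires identity
# verification with Meta and a developer account.
ACCESS_TOKEN = "YOUR_AD_LIBRARY_TOKEN"
URL = "https://graph.facebook.com/v19.0/ads_archive"

def fetch_political_ads(search_terms, country="CA", limit=100):
    """Page through Meta's Ad Library for political/issue ads."""
    params = {
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": f"['{country}']",
        "search_terms": search_terms,
        "fields": "page_name,ad_creative_bodies,ad_delivery_start_time,"
                  "spend,funding_entity",
        "limit": limit,
    }
    url, ads = URL, []
    while url:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        data = resp.json()
        ads.extend(data.get("data", []))
        # The `next` URL already encodes the query string, so drop params.
        url = data.get("paging", {}).get("next")
        params = {}
    return ads

if __name__ == "__main__":
    for ad in fetch_political_ads("natural gas")[:5]:
        print(ad.get("page_name"), ad.get("funding_entity"), ad.get("spend"))
```

[Collecting a year of ads is then a matter of looping over search terms and date windows and storing the results, which is the scale advantage Marc describes.]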
Patricia [00:22:37] I think they use maybe similar technologies, like machine learning and stuff like that. But yeah, let’s use these tools as they are, but not necessarily use them all the time, especially for creating content. So let’s talk a little bit more about your investigative reporting and some ethical challenges. As a journalist covering a broad spectrum of topics, how do you balance in-depth investigative work with the need for timely reporting on rapidly evolving issues?
Marc Fawcett-Atkinson [00:23:05] I think that’s every journalist’s existential question. I’m lucky that my role at the Observer gives me a fair bit of leeway. I don’t have many breaking news stories; I don’t really do breaking news. You know, I’ll have stories that are time-sensitive and do need to get published, but our approach is that we’re not going to beat The Canadian Press on everything all the time, because they have an entire staff whose job it is to cover breaking news, or CTV’s. So it’s a slightly different thing. Where I see my work fitting in is a bit more depth, a bit more analytical, a bit more investigative, digging into stories. So in terms of the news end, that’s probably where I see myself now: following up on breaking news with more depth and more complexity. One example that comes to mind is a few weeks ago, I did a piece about a gas leak at a refinery in Burnaby that sent a huge gas cloud all over the city. Plenty had been written about it in The Sun and by CBC, but I wanted to do a piece looking at it two weeks later, at what information had come out, and no one had really done that follow-up work. So in terms of the news, that’s where I see myself fitting in. And then that allows time for investigations, because you have a bit more flexibility. Some of it’s also just time management, balancing between newsier stories and investigative stories. If there’s a really big investigation, my editors will let me work on that more and prioritize it. So it’s just balancing, essentially. And there are also ebbs and flows in the news cycle: there’ll be really busy periods where you need to focus more on immediate issues, and then there’ll be two weeks where not much happens and you have time to do the others. So...
Patricia [00:25:04] So, in your investigative work, how do you ensure the accuracy and the reliability of the information you gather, especially when dealing with issues prone to misinformation?
Marc Fawcett-Atkinson [00:25:14] It’s the same process I was describing earlier: I’ll rely a lot on experts in the field and peer-reviewed work. Sometimes I’ll use government reports. I’ll also use reports that might not be peer-reviewed, like environmental organization reports, but I take those with a grain of salt, usually, because they obviously have a bent to them. But again, it’s finding where you can get reputable sources and leaning on those, at least to contextualize. It doesn’t mean that everything needs to come from there. Particularly if you’re also looking for a story arc, you might find a really good character that you trace through, but you need to check what they’re saying against other sources, essentially.
Patricia [00:26:00] And have you encountered instances where AI has been used to manipulate information or narratives? And how can journalists counteract the potential misuse of AI in shaping public perception?
Marc Fawcett-Atkinson [00:26:13] Thinking if there’s anything I’ve written about where it was really obvious... I don’t think I’ve yet covered any stories where AI was obviously used to generate misleading content, or at least not that I could tell as a journalist. And if I were to encounter that, I think I’d approach it from the perspective of going back to those basic verification systems: how does this information line up with what I know to be accurate? How does this compare when cross-referenced? Who is using the AI would also be something to figure out, if you can, which you can’t always. I’ve noticed in a lot of the disinformation reporting you’ll have front websites where it’s hard to figure out who’s pushing it, which is in and of itself a red flag, because you can’t identify the author. You can’t verify the author. It could be anyone. And I haven’t had this happen, but I have definitely seen cases. A colleague of mine in Victoria had one of his stories directly lifted by an AI-powered news outlet in the States, or, I think, maybe in Canada. It was essentially copy-pasted onto their website and given a fake journalist’s name as the byline, and he figured it out and they eventually managed to get it taken down. But yeah, that would be a case where you just Google the person who wrote it and find it ain’t an actual person, which sounds obvious to do, but you don’t necessarily do that, right? So I think it’s just being more careful and more thorough.
Patricia [00:28:00] Absolutely. I mean, as a journalist, you do have these verification processes, but the regular reader might not. Do you have any more advice like that that they can use when they’re reading news sources online, things that should raise a red flag?
Marc Fawcett-Atkinson [00:28:21] I think, for me, even when I’m just reading online, not for work, the kind of questions I’ll ask myself are: does this make sense in the context of what else I know? Kind of going back to that relational and environmental question, how does this piece of information or this story fit into the broader context of what I know? And if it works, then okay, it doesn’t mean it’s fully accurate, but particularly if there are other signs in it, like it’s citing sources that you recognize, for instance, then okay. Or even that it has an interesting structure. That could be another one for me: if it’s well-written and it reads with a voice, you can sense that there is a lot of authorship behind it. It doesn’t mean it can’t be AI, of course, but it has a bit more texture and personality, and it gives me a bit more of a sense that, okay, there is actually a person. And then, yeah, if I’m really not sure, I might Google it. Just straight-up Googling, you know, the author or the source to verify it, essentially. But I guess it’s a combo of just being aware and gut feeling, to some degree.
Patricia [00:29:45] Well, considering the speed at which information spreads on the internet, how might AI contribute to the rapid creation or propagation of disinformation campaigns?
Marc Fawcett-Atkinson [00:29:56] I think, like I said earlier, it can increase the amount of material that can be sent out. Yeah, that’s probably the biggest one. It can increase the speed and the volume of material that can be published. It’s not going to come up with a disinformation campaign of its own volition, at least. Maybe it could; I’m sure some people would argue that it could. But the more likely scenario I’ve heard from experts I’ve talked to is that it just allows the scale to be much bigger and much faster. So I think that’s where I see it.
Patricia [00:30:27] Well, you do have a focus on environmental issues, right? And there are a lot of people who don’t necessarily believe in climate change. Do you think they would ever leverage these types of AI tools to spread that type of disinformation, or have you seen that personally?
Marc Fawcett-Atkinson [00:30:41] Oh, they totally would. Personally, not that I’m aware of. That doesn’t mean I haven’t encountered it in practice. The prime example that comes to mind is I was talking to a researcher who studies fossil fuel disinformation, and, you know, fossil fuel companies could totally just use AI to put out a bunch of content, whether that’s Facebook posts, comments, fake blogs, and then that gets kind of wrapped into the broader ecosystem. I can’t think of any examples where I’ve directly encountered it, that I know of. It’s definitely on my radar, and I’m looking for it, because it would be a great story. Yeah.
Patricia [00:31:25] And on the journalists’ side, how can they respond to these types of challenges, where AI-driven technologies are used for the rapid dissemination or amplification of misleading information?
Marc Fawcett-Atkinson [00:31:37] I think the biggest thing is highlighting it, exposing that it is happening. Putting out what’s accurate or what’s true, but also saying: this is an event that’s happening. This is something you need to be aware of as a person interacting in the world, and you need to be aware that sometimes even the people you see posting, that you think are people, aren’t actually people. So again, it kind of goes back to that awareness question. And in terms of the responsibility of journalists, it’s really highlighting that this is a possibility, that this is happening, that this is part of the information ecosystem, and that people working in that space, or just reading the news or browsing around on the web or on TikTok or wherever, need to be aware that it’s possible. In the same way that the conversation around deepfakes and fake videos has led to talks about regulatory measures, I think that is needed to some degree with written material as well. But even just knowing it’s a possibility helps you be aware and a bit more cautious, I think, at least for me.
Patricia [00:32:51] And do you think these AI-generated misinformation articles will eventually impact public trust in journalism? And if so, what measures do you think should be taken to mitigate that effect?
Marc Fawcett-Atkinson [00:33:04] I think they could, along with non-AI-generated disinformation materials. Again, it’s a question of scale, is how I see it. Which comes back, I think, to the same answer as a question journalists have been asking ourselves, definitely since the Trump years: how do you regain public trust? And for me, there’s kind of two answers. One is just being accurate with sources and citing, essentially. And the other is acknowledging that I’m writing from a certain position and that I am a person. I’m not an objective robot. This is a discussion that’s been happening in journalism circles for several years now: the idea of the journalist as the objective observer is slowly getting dismantled and being replaced with the idea of an observer who sees from a certain position. But that doesn’t mean you can’t be accurate about what you’re seeing and reporting. You just need to acknowledge that you are approaching it from a certain position. I don’t even want to say perspective, but you come at it with a background and knowledge and an approach, and it’s a question of acknowledging that in the work itself, citing, and also, to some degree, having a good story...
Patricia [00:34:36] Original stories… new stories…
Marc Fawcett-Atkinson [00:34:39] Yeah. If it’s a true, original story that talks to people, right? That’s going to resonate much more with people who might not be as convinced than a bunch of facts, which they might just write off, you know, right away. But I wouldn’t say it’s easy by any means. Like, I’ve had a few stories around conspiracy theory groups. And what I found talking to some of these people is, you know, they’re very nice people, and in many ways we had similar concerns around some big issues, like the environment and economic fairness. It’s just that we’d get to a point in the conversation, particularly around climate change, where our baseline facts were in another universe from scientific fact. So it’s hard to... I don’t know. I read Naomi Klein’s Doppelganger book recently, and she describes that feeling very well. It’s like a mirror world, right?
Patricia [00:35:42] Well, we’ve been talking about the more negative sides of AI and how it’s being used to spread misinformation. On the flip side, because I feel like with enough discussion we can tip that balance and hopefully use AI in a more ethical way: how do you think AI technologies can be leveraged to counteract the spread of false narratives? And what collaborations do you think journalists and the people behind this technology can do together to tip that point?
Marc Fawcett-Atkinson [00:36:14] Again, it comes back to that question of scale. Like, the AI, or the counter-AI tools... I don’t even know what to call them, because presumably counter-AI tools are also using AI, right?
Patricia [00:36:24] Yeah, they do, actually.
Marc Fawcett-Atkinson [00:36:27] But the counter-disinformation tools, I think, allow journalists to cover much more ground and get a much better sense of what’s happening across the entire information ecosystem, or more of it, than if you’re just one person. Or even, if I’m thinking of social media, what would be ideal would be to be able to see outside my algorithm-driven bubble, right? Which you can do to some degree by just making a fake account and liking stuff that you never like, but it would be awesome. Like, one story that comes to mind: I did a piece recently about this kind of front group linked to the Canadian Gas Association that was promoting natural gas over electric and more sustainable alternatives. And the way they came to my attention is they were running ads in the CBC and New York Times apps, through a third-party ad provider. And I didn’t get any of the ads, because I live in the wrong part of Vancouver and I’m too young. My editor, who lives in another part of Vancouver and is older, was getting them for like 3 or 4 days. So having a system where I’d be able to look at the entire Canadian ad ecosystem and figure out where these are being run and who’s being targeted would be amazing. I don’t know if that would be possible, but that is definitely one tool that comes to mind, because then you can say: look, this group that I found has links to this industry, but it’s kind of hiding it. It’s targeting these types of people in these sorts of environments and platforms. It’s exposing that, essentially. But doing that as an individual is nearly impossible, because I can’t be, you know, a million people at once, or a million social media profiles at once.
Patricia [00:38:22] Yeah. I actually did want to pivot toward social media and ask you: considering the evolving nature of online media consumption, how do you see AI influencing the creation and distribution of news content, and what implications does this have for traditional journalism?
Marc Fawcett-Atkinson [00:38:41] It allows, I think, a broader form of creation of social media content. Yeah, it makes it harder to figure out who is who, essentially. It takes the ability to be anonymous and hard to trace that already exists on social media and amplifies it. And it can also then inform which topics are trending, which videos or posts or tweets or whatever piece of content you’re looking at get amplified or not, because they’re, in a way, working the algorithm, right? And in terms of what that means for traditional journalism, I think it goes back a bit to the question of what social media means for traditional journalism, but on steroids to some degree, because of that scale question. I think social media is an environment that is definitely worth investigating and worth looking at as a sphere that’s influencing how we act in the world and how we think about, in my case, the environment and climate change and politics and food.
Patricia [00:39:54] Towards my next question: what do you think of the integration of AI in newsrooms? And you can let us know the scale at which it’s comfortable to use AI at Canada’s National Observer, for example, even something as simple as using ChatGPT. How do you think this might affect the relationship between journalists and their audiences, especially in terms of transparency and accountability?
Marc Fawcett-Atkinson [00:40:13] At the National Observer, we’re still figuring out our AI policies. So for the moment, we’re not using it to generate stories. Like, I use it occasionally, maybe to start writing or for a bit of research, but I would never take a ChatGPT-written story and publish it. Like, absolutely not. I think for reputable news outlets, if you do decide to use AI, which comes with issues around accuracy, like I’ve mentioned, and eloquence, and also labor questions, whether journalists have jobs or not, and whether you have enough staff to actually verify what’s being published, which I think are significant issues as well, the key is going to be being transparent and saying: we used AI, or AI contributed to developing this piece. And I’ve definitely seen that. I’ve noticed some academic papers will cite the fact that AI has been used, and I have seen that in some reporting on occasion. So I think...
Patricia [00:41:26] That’s a great thing. Yeah. I mean, I feel like they should be very transparent with that. And thank you for letting us in on that in the National Observer. I mean, but I have and I wanted to ask you that question because [00:41:36]I’ve heard, and seen news stories where they would be like, AI reporters. I don’t know if you’ve heard of that. They’re trying to slowly integrate that. Like, I think they started with like AI news sports reporters. Yeah. So, that’s like a little start, but yeah. What do you think about that? I mean, because you’re talking about labor issues and the fact that it could be, you know, it’s like a touchy subject. It’s a – I don’t think it’s a possibility, or at least we’re very far from it. But I mean, with, for example, the one I cited earlier with like, AI sports reporters. How do you feel about that? [32.2s]
Marc Fawcett-Atkinson [00:42:08] Well, I think that’s where I see it coming in, honestly. In terms of the news industry, that’s where I see it hitting the hardest: areas like sports reporting or daily news reporting, kind of easy content generation. It would be relatively easy to gather the basic information and get the AI to write something, and the format doesn’t need to be particularly enticing; the whole point of it is to spread and share information. So that’s where in the industry I see fully AI-written stories as most likely, and where I think the threats to jobs are. The issue is you don’t necessarily know it’s accurate, which goes back to what I said right at the beginning. And then, on the labor side, those starter jobs are how people start. Like, that’s how you get into the industry. And some people love it and stay there their entire careers, but definitely those kinds of breaking news pieces or sports pieces or shorter, easier stories are how a lot of the industry gets going. And also, in terms of local news, you can’t really... and this is what I found working in smaller newsrooms before I was at the Observer. If you’re in a small town, a lot of your reporting is really going to come from knowing people and talking to people, and what’s going to resonate with your audience, because it’s a local audience, are very specific, localized details or characters or facts, even, that fit the needs of that publication or outlet. And AI just can’t do that, because it doesn’t live in a place. It doesn’t know a place in the sense that a person who interacts and lives within a social environment knows a place, or even a physical environment. That’s always why, when I’m writing a story, I’ll try to go to a place, because you get a sense of it. It’s intangible, but you do get a sense of what it looks like, what it smells like, how people talk, how they interact. All of those little details, which a computer can’t capture, because it’s a computer.
Patricia [00:44:28] They’ll never have that perspective as a human does.
Marc Fawcett-Atkinson [00:44:32] No, and it can’t exist socially as a human does. Because by definition, it’s a machine.
Patricia [00:44:39] Well, in your opinion, what role does media literacy play in mitigating the impact of AI-driven disinformation? And how can educational initiatives better prepare the public to critically assess information in the digital age?
Marc Fawcett-Atkinson [00:44:53] I think it’s key. Personally, like I mentioned earlier, it’s educating people on what kinds of tools are out there to generate content, even just knowing what’s possible and making sure they know what’s possible as those tools evolve and update, because they’re changing quite quickly. And then really teaching, honestly, to some degree, the basic skills of information verification: looking to peer-reviewed work over the random thing you found on a quick Google search, talking to people who are specialized in this, who’ve researched it, talking to people who are validated by the rest of their community or colleagues. Of course, there are, you know, pitfalls with that. If you’re in conspiracy world, you can get an entire alternate reality that is validated by itself, but it doesn’t track with what you see outside your window. So I think there is a bit of caution there, but really, to me, it comes down to basic media literacy to some degree. We’ve known how to do this, and how to teach it, for a long time. If anything, one of my bigger concerns is underfunded schools, underfunded universities, and putting people socially in a situation where they can’t necessarily afford to look at this. And, you know, I have a bias here, because I did a liberal arts degree and I’m an arts person, but having funding, and a society that allows people to explore history, explore anthropology, explore sociology, explore political science, even if it’s not where they make their career, to have a rounded arts background, I think, is helpful for developing those critical thinking skills. And that’s an area where we’ve cut back significantly in Canada and in the States in recent decades, both at the public school system level and at the university level.
Patricia [00:47:07] And with the potential for AI to automate the generation of deceptive narratives, and you talked about this earlier with regulations: do you think there is a need for increased regulation or ethical guidelines to govern the use of AI in content creation, particularly in the news and information space?
Marc Fawcett-Atkinson [00:47:25] Yes, on both fronts. I’m always a bit skeptical of purely industry-driven guidelines on anything, whether that’s AI or plastic production or grocers not backstabbing each other; the last two are examples I’m kind of drawing from Canada. So, you know, within the industry, we definitely need guidelines to at least frame how individual newsrooms and individual reporters can think about how to use AI in their work or not, so that they can then say: I’m following this code of conduct. Here’s where the rules are. Here’s how it was developed. And I think that is helpful for making us more legitimate in an audience’s eyes. In terms of regulation, to me there are two areas that really strike me. One is the copyright question, and whether AI can just outright steal your work and publish it under another name, whether that’s the machine doing it or someone telling the machine to do it. I think that’s a significant issue just in terms of the media’s viability as an industry. And then also, I think we need regulation to ensure there’s a system that makes it transparent and easy to find out what is AI-generated or not. I don’t want to ban it; I’m not in favor of that, obviously. But being able to say this was generated with AI, or this was not generated with AI, is, I think, a key tool in terms of that information filtering. And that is definitely where regulations can come into play: to put in that requirement for transparency. A parallel example I notice is with social media. When I’m reporting on social media, Facebook, for instance, has transparency rules that it supposedly adheres to. But there have been several times when I’ve tried to identify information about ads that are being run, like, say, the person running the ads or how much they’re spending, and sometimes you can get it, sometimes you can’t. And there isn’t really a solid system for recourse, and there isn’t a legal requirement for them to be more transparent. So I think even those kinds of basics... it’s not a very complicated idea. Just making sure that companies are transparent about where their material is coming from and how it’s generated is really key in my mind. Yeah.
Patricia [00:50:10] And I think a big challenge in regulating these types of AI tools, especially in journalism, is that we might not even get a consensus; we might not even get a global regulation. Like you said earlier, journalism isn’t necessarily regulated the way other industries might be. So I think that’s a big challenge, because, for example, the National Observer isn’t really integrating AI, but like I said earlier, some news sources are. So I think that in itself is a great challenge.
Marc Fawcett-Atkinson [00:50:43] Yeah. Like, you can focus on it from the journalists’ end, but you can also focus on it from the AI end. Do the companies that run these platforms, like, say, ChatGPT, have a responsibility to make sure their content is labeled as AI when it goes out into the world? So essentially, shift that responsibility question. Or even, and I don’t know if this is possible, require some form of tag on that information so you can’t actually lift it without it. Almost like how photos have embedded metadata, you know: have a requirement that the information is transparent, even if it’s not the most obvious thing to a reader just going quickly through something. Ideally it would be, but at least it would be verifiable if you have basic computer and research skills.
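[Editor’s note: As a toy sketch of the “embedded tag” idea Marc describes, here is one way a provider could attach a verifiable provenance record to generated text, in Python. It uses a shared-secret HMAC for brevity; a real scheme, like C2PA for images, would use public-key signatures so anyone can verify without holding the secret, and all names and keys here are invented for the example.]

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI provider in this toy scheme.
PROVIDER_KEY = b"demo-key-not-for-production"

def tag_content(text, model="example-model"):
    """Attach a provenance record plus a signature over record + text."""
    record = {"generator": model, "ai_generated": True}
    payload = json.dumps(record, sort_keys=True) + "\n" + text
    sig = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"text": text, "provenance": record, "signature": sig}

def verify(tagged):
    """Recompute the signature; any edit to the text or record breaks it."""
    payload = json.dumps(tagged["provenance"], sort_keys=True) + "\n" + tagged["text"]
    expected = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tagged["signature"])

doc = tag_content("This summary was machine-written.")
print(verify(doc))          # True: tag intact, text untouched
doc["text"] += " (edited)"  # Lifting or altering the text invalidates the tag
print(verify(doc))          # False
```

[The limitation, which is why this is only an illustration, is that a plain copy-paste of the text alone simply drops the tag; that is where watermarking and platform-level requirements would have to come in.]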
Patricia [00:51:38] Yeah, I agree. It’s definitely something that needs to be discussed more, and that’s why we have this platform. That’s why we love to talk to different people from different industries and see how AI is impacting their industry, you know, to have these discussions, be more open, and see what we can do, because everything is ever-changing. It’s all very new, so we don’t really know how to navigate it just yet, but it’s really important, I feel, to have these types of discussions. I wanted to ask you just a few last questions about the future of journalism and AI. From your perspective, how do news organizations need to adapt to the changing landscape of journalism, considering the influence of AI and evolving reader expectations?
Marc Fawcett-Atkinson [00:52:19] I think step one is acknowledging it, which a lot of newsrooms are doing. I think from a business perspective, it’s really doubling down on what journalism, or journalists, more to the point, offer in terms of verifying information, and really driving home the point that we’re trained and skilled in verification. Like, this is literally what we do all day, every day, for work. So emphasizing that. And even as individual journalists, going back again to that objectivity question, at least for me as a reporter, it’s acknowledging a certain positionality in where I’m writing from, without having that define all of my work, but at least acknowledging that I’m not a disembodied voice up in the ether. I’m actually a person. And I think, to some degree, being quite aggressive in pushing back where it’s due, like in the case of copyright infringements, and really differentiating our product, from a business perspective, from AI-generated product, and saying why this matters. Yeah, that’s kind of how I see it. But it’s not clear-cut, particularly in the broader context of, say, the current Conservative leader actively undermining faith in reporters, or Trump. I wouldn’t even say it’s an uphill battle. It’s definitely a battle. Yeah.
Patricia [00:54:00] Well, I wanted to ask you one last question before you go, and this’ll be a bit more general, about the future of journalism and AI. How do you see that unfolding? Do you think it’s inevitable that AI will be integrated into newsrooms, not necessarily for content generation, but, you know, to help with fact-checking and streamlining that process? How do you see the future of journalism and AI?
Marc Fawcett-Atkinson [00:54:25] I don’t think anything’s inevitable about AI, because it’s a technology, and ultimately we choose as a society whether to use that technology or not. One thing capitalism is very happy to push is this idea that development and new technologies are inevitably going to be adopted and that people are definitely going to go for the easier option, but that’s also shaped by the regulatory environment, by what’s possible or not, and by the cultural environment too. In terms of whether it’s being used: I already use AI in my work all the time. Like, I use Otter. So I think it’s already integrated to some degree, and I do think that trend will continue in this specific context, but I don’t think that’s bad. You know, it saves a lot of time. It can definitely help. In some cases, it could even help with accuracy and, like you said, fact-checking, as long as there’s also a human behind the scenes controlling the technology and verifying it. So I don’t think it’s going away, in the short term anyway, but I do think there is a need for more oversight and regulation, particularly of content that’s put out on the internet, and transparency. Yeah.
Patricia [00:55:39] Yeah, I totally agree. This has been really insightful. Would you like to leave any parting words for our audience? Any advice, maybe? Something you’d like to share?
Marc Fawcett-Atkinson [00:55:49] Check your sources, and also invest in good journalism, and focus on stories. Like, for me, that is how I got into this. And still, the most compelling pieces of reporting I read are good stories. Good stories with good characters and a nice narrative arc are always compelling, and they have been since humans were talking like humans, however many, you know, million years ago. And I think that fundamentally is why a lot of us do this, and what you should be looking for, because a computer can’t generate a good story. Or, I’m sure there are tech people out there who would disagree with me on this, but I’ll hold my ground there. No, I agree totally with you. Absolutely.
Patricia [00:56:38] Well, this has been an amazing hour, Marc. Thank you so much for being here with us and for your time, and thank you to our listeners for joining us on another enlightening episode of The AI Purity Podcast. We hope you enjoyed uncovering the mysteries of AI-generated text and the cutting-edge solutions offered by AI Purity. Stay tuned for more in-depth discussions and insights into the world of artificial intelligence, text analysis, and beyond. And until next time, keep exploring, keep innovating, and keep unmasking the AI.