Jim Davies [00:00:00] One of the most important things is just to teach students how to think, and learn, and reflect on their own ideas. That, and building a love for knowledge. I think that these large language models get in the way of that. I think that teaching writing is one of the best ways to teach thinking. Unfortunately, the most difficult parts of writing, the parts that students don't like, are the parts where the real learning happens.
Patricia [00:00:34] And welcome to another episode of The AI Purity Podcast, the show where we explore the complex intersection of artificial intelligence, ethics, and its societal impacts. I'm your host, Patricia, and today we are honored to feature a guest who is a distinguished full professor at the Institute of Cognitive Science at Carleton University and the director of the Science of Imagination Laboratory. Beyond academia, our guest is also a prolific writer, with notable works including "Riveted: The Science of Why Jokes Make Us Laugh, Movies Make Us Cry, and Religion Makes Us Feel One with the Universe" and "Imagination: The Science of Your Mind's Greatest Power." Adding another layer to his multifaceted career, our guest serves as a co-host of the thought-provoking and award-winning podcast "Minding the Brain," where he engages in stimulating conversations about the intersection of neuroscience, psychology, and everyday life. Join us today as we delve into the fascinating worlds of cognitive science and artificial intelligence. And welcome to the show, Dr. Jim Davies! Hi, Dr. Jim, how are you doing today?
Jim Davies [00:01:29] Thanks for having me on, Patricia! Nice to meet you!
Patricia [00:01:31] We're so happy to have you here today! Dr. Jim, could you just walk us through your journey into becoming a cognitive scientist and what led you to pursue this as a career?
Jim Davies [00:01:40] I majored in philosophy, and when I was in undergrad, I didn't really know what I wanted to do for a career. But like many people my age, we didn't really worry about it like students do today. I wasn't too concerned. I was talking to the chair of my department, and he asked me if I'd ever heard of cognitive science, because he knew I was interested in computers, and psychology, and stuff. And I said, no. He lent me a book. I can't remember what book it was, but he lent me a book. I went home and read it, and I thought, "Oh! This is what I want to do." So, I had basically one semester to prepare for my future. And then, I got some work doing artificial intelligence stuff. And then, went to grad school for psychology for my master's. And then, my PhD was in computer science and artificial intelligence. So, I have degrees in three different disciplines of cognitive science. And now, I work in a cognitive science department.
Patricia [00:02:34] You are the director of the Science of Imagination Laboratory at Carleton University. Could you tell us more about the research that you conduct there and its significance?
Jim Davies [00:02:42] For my PhD, I studied what's called visual analogy, which is when people try to understand new situations by their visual similarity to things they've seen in the past. A lot of the time when I would give talks on this, people would ask, "Well, where do these visual representations come from?" And I said, "Nobody really knows. It's imagination." And so, when you become a professor, you're supposed to do something that's related, but not exactly the same. So, I thought, "Well, I'll tackle imagination. That's what I'll do." So, the main bread and butter of my lab is to try to understand how human beings create visual scenes in their heads. So, if I said that it was snowing today, and I took the bus, and you picture that, your mind is making a lot of decisions about what the point of view is and maybe what color coat I was wearing. Stuff I didn't say. And so, my lab is trying to understand how minds make decisions about what objects to put in scenes, where they go, how big they are, what color they are. And then, we try to simulate it with computer models.
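[Editor's note: to make the simulation idea concrete, here is a minimal, purely illustrative sketch of filling in unstated scene details by sampling from priors. The attribute names, priors, and numbers are hypothetical and are not the Science of Imagination Laboratory's actual models.]

```python
import random

# Hypothetical priors over details the speaker never stated; the lab's real
# models are far richer. This only illustrates the "fill in what wasn't said" idea.
COAT_COLOR_PRIOR = {"black": 0.4, "blue": 0.3, "red": 0.2, "green": 0.1}
VIEWPOINT_PRIOR = {"eye level, facing the bus stop": 0.7, "from across the street": 0.3}

def sample(prior):
    """Draw one value according to its probability."""
    values, weights = zip(*prior.items())
    return random.choices(values, weights=weights, k=1)[0]

def imagine_scene(stated_facts):
    """Keep what was said; sample everything that was left unspecified."""
    scene = dict(stated_facts)                      # e.g. {"weather": "snow", "vehicle": "bus"}
    scene["coat_color"] = sample(COAT_COLOR_PRIOR)
    scene["viewpoint"] = sample(VIEWPOINT_PRIOR)
    scene["bus_position_m"] = round(random.uniform(0.0, 10.0), 1)  # metres along the street
    return scene

print(imagine_scene({"weather": "snow", "vehicle": "bus"}))
```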
Patricia [00:03:37] That's really fascinating! And, you know, Dr. Jim, today I really wanted to talk about your take on artificial intelligence, generative AI in general, how it's being used in education, and the potential risks of AI in general. I know a lot of people, when they talk about AI or hear about AI, usually just think of ChatGPT, LLMs, and all that. So, I recently watched your talk on YouTube, I think "Teaching Thinking in the Age of Large Language Models," where you spoke with a lot of different teachers. Could you talk to us about what the insights from this talk were and what you aim to provide to teachers about ChatGPT and the risks of using it in education?
Jim Davies [00:04:14] I think that one of the most important things to teach in university and in high school is how to think, and learn, and reflect on your own ideas. That, I think, is probably the most important thing. That and building a love for knowledge. I think that these large language models get in the way of that. I think that teaching writing is one of the best ways to teach thinking. Unfortunately, the most difficult parts of writing, the parts that students don't like, are the parts where the real learning happens, where you're very frustrated, and you can't figure it out, and you try different things, and it's not working, and you go away, and you come back, and you try again. And just doing that for years is what it takes to learn to be very critical of your own opinions, your own thoughts, and be able to analyze the thoughts and opinions of others. So, I'm very concerned. I mean, I'm not as interested in writing per se as I am in the fact that writing is a way to teach thinking. And so, if students are getting the writing done by a large language model, then they're not engaging in the level of critical thinking that they had to before these models were invented, where you might spend two weeks just, as I like to say, banging your head against the paper, trying to figure out how to make it work, which is where the real learning happens. So, I think that if it is not disallowed by the educational system, students are going to use it, and I think their learning of how to think will suffer. That's the basics of the talk.
Patricia [00:05:43] Yeah, I really enjoyed that part, specifically when you said that. I feel like some people kind of make that comparison between using ChatGPT and using calculators in school. And you said, well, calculators aren't really allowed until students get the foundations of, you know, basic arithmetic, for example. So, I really enjoyed that part. I mean, I don't think there's currently a stop to students using ChatGPT and other LLMs in school. So, what do you believe are some of the possible solutions to address this widespread use in academic submissions?
Jim Davies [00:06:19] I think that teachers should have their students write by hand in class, either literally by hand with, like, a pencil, or on computers where they don't have access to large language models, either on the internet or downloaded onto the machine. And although there are a lot of drawbacks to this, so many students are using large language models to help them write, and help them with their homework, and everything else, that I just don't think that we can have any kind of honor code that will catch everybody. There'll be some people who will cheat. I'll just call it cheating, but they'll use these large language models to cheat. And then, that will put other students at a disadvantage, particularly over the years as these large language models get better and better at what they do. So yes, I think teachers should really make them completely against the rules and then enforce that by having students write when they're right in front of the professor.
Patricia [00:07:14] Well, besides students not being able to practice critical thinking or thinking for themselves, and kind of using a shortcut when they use LLMs, what other implications do you see with students' overreliance on large language models, and how do you think this affects academic integrity as a whole?
Jim Davies [00:07:33] The rules of academic integrity were designed before large language models existed. So, does the large language model count as an other, or does it count as a tool? Right? So, the people who say that it's like using a calculator view it as a tool. And so, if you submit work written by a large language model, you're not submitting work written by an other, so it doesn't violate academic integrity. Others like me see it more like hiring somebody to write your essay. And clearly, hiring someone to write your essay is having it written by an other. I mean, in truth, a large language model is sort of halfway between a tool and an other. So, it's a matter of interpretation. And at many universities, not all, but many universities, the university is leaving it up to the professors to decide: Is it against the rules or not? And the professors are supposed to say you're allowed to use large language models or you're not allowed to use them. And then, it becomes an academic integrity violation if it violates the rules the professor has set out.
Patricia [00:08:35] You also talked about ChatGPT being, like, the most widely adopted technology. More than fire, you said during your talk.
Jim Davies [00:08:44] Yeah.
Patricia [00:08:44] Yeah. With this, like, rate that it's being adopted, how do you envision the future of AI and education evolving, particularly concerning maintaining critical thinking skills amidst these technological advancements?
Jim Davies [00:08:58] Right. Just a little nuance for people who haven't seen the talk: what I said was it was the fastest adopted. So, it's not necessarily the most widely adopted, but its uptake was faster than any technology in history. Just more people using it faster. You know, the future, I really don't know where it's going to go. There are people who think that there's no way to stop students from using language models, and so we have to incorporate it into our teaching. There are people like me saying that we should resist it. But I can also foresee that within 20 years, many students will have glasses that have bone-conducting audio, and they can look at a test question and their glasses will basically tell them the answer, and the professors can't even see it happening. So, then what do we do? Do we outlaw it, like, we say you can't have smart glasses in the classroom? And then, students say, "Well, I have to use them to see, and they're my only glasses." You know? Or maybe everyone will use them so often, or eventually maybe there'll be contact lenses. So, what the future of education in relation to generative AI will look like is hard to see. So, my thoughts and recommendations are really a near-term horizon. We'll just have to deal with the other problems as they come.
Patricia [00:10:19] And do you believe there should be more regulation surrounding the use of AI in academic settings?
Jim Davies [00:10:24] I think that because they hinder the practice and learning of how to think, they should be made against the rules, if that's what you mean by regulation, but that's usually up to each teacher to decide. Many of the professors I've talked to have a kind of head-in-the-sand attitude about it. Like, they're not really trying to enforce it, and I think a lot of them are grading AI-generated essays, you know, which I think is a real waste of everybody's time.
Patricia [00:10:52] And are there any regulations right now being put in place at Carleton University that you can share with us? Or, like you said, does it just vary per professor?
Jim Davies [00:11:02] It varies according to the professor. So, you know, the university – I mean, the professors disagree enormously on whether we should allow students to use generative AIs, so the university is not in a position to be heavy-handed about it. Professors have enormous freedom about what we teach and how we teach it. So, they've just left it up to each professor to make a choice. But they do tell the professors to definitely make it clear whether it's allowed or not.
Patricia [00:11:33] And earlier, you said there's essentially no way that professors can tell. Even in your talk, you said that there's no way to say with 100% certainty that a student's essay has been written with generative AI. What are your thoughts on the AI detector tools out there? I know in your talk you said most of them are wrong. What would you say…
Jim Davies [00:11:53] Their detection rates are not nearly good enough to be used as a tool to catch people. So, the best tools… I can't remember the numbers – it's like 73% or something if it's pure ChatGPT. If the student edited it a little bit, it drops down to 43%. And even linguists can't tell the difference very reliably. So basically, a professor can have a very strong conviction that it was done by AI, but the student can deny it, and there's not really any way to resolve that. So, yeah, we kind of can't catch students unless they really screw up and include things like hallucinated references, like references that don't exist, or it says in their text, like, "As an AI language model, I can't do this," which some students do, which means that students are submitting work that they haven't even read. Not only did they not write it, they didn't even read it. So, aside from that, we just, yeah, we can't really catch them. So, if we don't want them to use it, we either have to convince them not to or make them write in front of us, so they can't.
Patricia [00:12:58] How can we ensure that AI tools… Because not all AI tools are generative AI. There are other AI tools that, you know, students can use. How can we ensure that these AI tools are being utilized ethically and being used as learning aids rather than shortcuts that compromise the learning process?
Jim Davies [00:13:18] I don't know how we can ensure it, but what I can say is that, you know, I don't want to give anybody the impression that I don't think these tools are valuable for learning. They're extremely valuable for learning if you really want to learn. So, if you just want to, you know, learn about Roman history or whatever, and you're reading a book about it, and you have a question, and you go to ChatGPT and ask the question and get an answer – hopefully, you know, you check to make sure that it's correct – you can learn a whole lot using ChatGPT if you actually want to learn. My problem is in the context of formal education, where we have to have grades, and the grades are meaningful, and students are overworked. Those are the situations where generative AI is going to get in the way. But if a student really wants to learn, even writing…
Patricia [00:14:06] What advice would you offer students who feel pressured to use AI tools for academic purposes despite ethical concerns?
Jim Davies [00:14:13] So, I've talked to students who've felt very tempted to use AI tools, particularly if they don't think they're very good writers or they don't like it. You know, it's hard. Someone's facing a blank page and, you know, they think, "Well, maybe I'll just ask ChatGPT to give me some ideas." And then, ChatGPT writes something better than they could. Then what do you do, right? It's demoralizing. You know, it's discouraging, and it makes it very tempting to use large language models like ChatGPT a lot. Large language models might get higher grades than you. I've seen it happen. It's really terrible. And this is something I try to instill in my students: your grades are not that important. Like, the grades you get in university, for example – you're not even going to remember what they are six months after you graduate. You won't even think of them ever again. They might affect your first job, but after that, it depends on how capable you are at doing work. Can you take initiative? Can you do group projects? Can you think clearly? Can you do all this? And those are not skills you can get if you've cheated your way through university. So, I know it's a cliche that students should focus on learning and not grades, but it's truer than students think. Good grades are not going to give you a solid career for the rest of your life. Absolutely not. It's the skills – the transferable skills that you get – that will really do that. So, when students are using language models to do their homework, they're missing a great opportunity to learn how to think and write. After you're done with university, you are rarely in a position where it is somebody's job to help you get better at something. When you're in university, it's the professor's job to teach you, to go over your stuff with you. Use your office hours. You're not going to get that when you work in industry. That kind of learning is going to be difficult after that. So, that's what I offer students, you know. But because it's so tempting to use these large language models, and they do pretty well and get decent grades, I think it's really a shame that students should have to sacrifice their grades to learn, which is why I think that they should be ruled out. They should not be allowed.
Patricia [00:16:22] I think that's really great advice for students out there. And on the flip side, what advice would you give to the educators out there? Especially because, and this is just my personal observation, the generation right now is more tech-savvy, more open to using ChatGPT, while on the other side, educators may not be as, you know, used to this type of technology. So, what would you say to educators?
Jim Davies [00:16:50] Well, I think they should definitely use large language models. They should put their homework into ChatGPT and see what it generates. They should learn how to use ChatGPT to do homework. They should put their multiple-choice questions into these large language models and just see what their capabilities are. Or ask some students to show them, you know. There are a lot of misconceptions about what these language models can and can't do, and it's hard to appreciate what they can and can't do until you've learned how to use them pretty effectively. You know, you gotta at least try it.
Patricia [00:17:19] Yes, and speaking of using generative AI effectively, what do you foresee as some opportunities that AI presents for enhancing personalized learning experiences for students?
Jim Davies [00:17:30] You know, these large language models can function like a tutor, particularly if they're trained for that. So, Khan Academy, which is a really fabulous free website for learning math, they're developing a large language model to help tutor students in math, and they're going to make that AI so that it doesn't just give them the answer, it actually scaffolds their learning. I mean, that's wonderful! That's a wonderful thing to have, because tutoring is one of the best ways to learn, but it's so incredibly expensive that it doesn't scale, right? I teach a class with 1300 students, and you know, I can't one-on-one help every single one of those students, but if we have AIs that are able to tutor students, it could be a really great scalable way to learn.
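[Editor's note: a minimal sketch of the "scaffold, don't answer" idea described above. The scaffolding lives entirely in the instructions sent to the model; `call_llm` is an assumed placeholder for whatever chat-completion API is used, and none of this reflects Khan Academy's actual implementation.]

```python
# Assumed setup: call_llm(messages) is any function that sends a chat-style
# message list to a language model and returns its reply as a string.
SCAFFOLDING_PROMPT = (
    "You are a math tutor. Never state the final answer. "
    "Ask one guiding question at a time, point out the student's specific mistake, "
    "and only confirm a result the student has worked out themselves."
)

def tutor_turn(call_llm, history, student_message):
    """Send the scaffolding instructions plus the running conversation to the model."""
    messages = [{"role": "system", "content": SCAFFOLDING_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": student_message})
    reply = call_llm(messages)  # assumed chat-completion call
    history.append({"role": "user", "content": student_message})
    history.append({"role": "assistant", "content": reply})
    return reply
```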
Patricia [00:18:13] I would say, you know, ChatGPT was launched pretty recently. It's a relatively new technology. Like you said, there isn't, like, a global regulation for it in educational institutions. Eventually, when we reach that stage where using LLMs is more widely adopted and they're used ethically, what would you say are some potential positive outcomes from integrating AI into educational systems responsibly?
Jim Davies [00:18:38] You know, I’m going to have to see what form it takes. Because right now, the potential for abuse is so great that I don’t see how we can let students use large language models and still get the same learning out of it. So, what you’re talking about is a hypothetical future where we figure out a way to integrate it effectively, and I’m not confident that we’re going to do that.
Patricia [00:18:58] Let's talk about the development of these AI technologies. Could you highlight some of the key principles for the responsible development and use of AI technologies?
Jim Davies [00:19:08] So, development. Okay, so by development, I guess we're talking about the people who create these AIs. So, there is a real problem, because, you know, some people who are concerned about AI ethics really admonish some of these companies for releasing these really powerful AIs to the public without any, you know, safety measures in place or without doing extensive testing. We didn't really know what effect releasing these AIs into the wild, so to speak, would have on society, but they did it anyway. So, you know, it's kind of a legitimate point. But, as some of these scholars have said, some of these researchers have said, we don't actually know what is going to happen to society until we put it into society. We don't know how people are going to use these AIs, and we actually need that data, particularly while these AIs are still less powerful, right? We need the data so that when we make the more powerful AI, we can know what we're dealing with. Like, how are people going to use them? How is it going to affect society? Right? So, you know, it's hard. We're asking AI researchers to be responsible, but ultimately they don't know what their AI is going to do. I mean, particularly with generative AI, the whole point is that we can't predict what it's going to do. That's the reason they're useful: we can tell them to do something, and they'll come up with something brand new that we couldn't predict. And that's great, you know, that's why it's useful. So, when it says something that's a lie, or it says something that's racist or sexist or something, you know, we turn to these researchers and say, "Hey, you made this in an irresponsible way." Well, yes and no. You know, the large language models were trained on so much writing that it would take a person 2000 years to read it. So, obviously, no person or even group of people at the company has read everything that the AI is trained on, and we have no automated way to distinguish the true from the false sentences on the internet. We also don't really have a way to distinguish the racist from the non-racist, or the sexist, or whatever. So, basically, it's all in having humans use it. So, they pay people to actually use ChatGPT and flag when it does something wrong, and they just sort of punish the AI so that it doesn't say racist and sexist things anymore. That's what's being done. You know, that's sort of the narrow AI answer right now. If, eventually, we're going to talk about existential threat – you know, if we're trying to build general artificial intelligence – then I have a lot to say about, you know, responsible development there, but we can wait till we get to that section.
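[Editor's note: a rough, illustrative sketch of the "flag it and use the flags" loop described above: human raters mark bad outputs, and flagged vs. unflagged responses to the same prompt become (preferred, rejected) pairs, the raw material for later preference training. The record format and pairing scheme are assumptions for the example; real pipelines involve much larger rater operations and reward-model training on top of data like this.]

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    flagged: bool  # True if a human rater marked the output as false, racist, etc.

def to_preference_pairs(records):
    """Pair each unflagged response with each flagged response to the same prompt."""
    by_prompt = {}
    for r in records:
        bucket = by_prompt.setdefault(r.prompt, {"good": [], "bad": []})
        bucket["bad" if r.flagged else "good"].append(r.response)
    pairs = []
    for prompt, bucket in by_prompt.items():
        for good in bucket["good"]:
            for bad in bucket["bad"]:
                pairs.append((prompt, good, bad))  # (prompt, preferred, rejected)
    return pairs

records = [
    FeedbackRecord("Describe my neighbour.", "A neutral, factual description.", False),
    FeedbackRecord("Describe my neighbour.", "A biased, insulting description.", True),
]
print(to_preference_pairs(records))
```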
Patricia [00:21:35] How would you say we can foster a culture of ethical AI use? Not just in academic institutions, but throughout society.
Jim Davies [00:21:43] Well, I mean, how to foster it is tough. I mean, that's, like, a public relations problem, you know. Like, how do we get people to be good? You know, I think that the efforts we've made to try to make people be good with the internet have had only mixed success. You know, artificial intelligence is so useful for so many things, and it can be used in so many ways, good and bad, that, you know, it's sort of like asking, how do you convince people to be good people and not be bad, right? There are always going to be people who are willing to break the law and break ethical rules for what they perceive to be a greater good. So, we're probably going to see people using AI to sway people in elections and to take money from people who are vulnerable, and AI is going to help them do that. So, you know, I don't have a great answer for that, because it's sort of the same problem we've always had of trying to get people to be good. It's just that these AIs are making them much more powerful.
Patricia [00:22:41] Well, what role do you think students can play in shaping the responsible use of AI in education and beyond? We’ve been talking about, like, putting in regulations from the institutions themselves, but how can students practice responsible and ethical use of AI?
Jim Davies [00:22:57] I don't think they can do a lot. But, I mean, one thing is that I would hope that there is a culture among students that cheating with large language models is a no-no. It's taboo. I don't think that is happening, because, like, 90% of students are using it. So, I don't think that they think it's taboo at all. But even if they were to develop a culture of taboo, it doesn't mean that people wouldn't use it on the sly, because it's not the kind of thing where you can really know that somebody used it. So yeah, I don't see a whole lot the students can do other than just sort of voice a general disapproval for using it in place of learning.
Patricia [00:23:30] Let's move away from education and talk about AI risks and existential threats. Could you share some insights from your recent podcast episode titled "AI and Existential Risk," where you featured Darren McKee?
Jim Davies [00:23:42] Sure. So, this subject is – you know, we've been talking about ChatGPT and generative AI, but if you think about where AI is going, where it's always been trying to go, it's not to just make something that can generate text or whatever, but to create what we would call an artificial general intelligence, which roughly means an artificial intelligence that is as smart as a person, a human being, no matter what task you give it. Now, you know, whether that includes riding a bicycle or not, or just intellectual tasks, is something people disagree about when they talk about artificial general intelligence, but it's generally thought that once we have artificial general intelligence, it won't be very long before we have artificial superintelligence, which is artificial intelligences that are much smarter than any human that's ever lived. And the problem with this is that we cannot predict what an AI that is smarter than us would do. We just would not be capable of doing it. And the reason we create a superintelligent AI is so that it can solve the problems that we can't. So, already we have narrow AIs that are, like, creating math proofs that are a thousand pages long that we can't check. So, do we believe it or not? You know, do we take its word for it? Its working memory is larger than ours. We can't actually check it. So, the idea is that once we have an AI, even one AI, that is many times smarter than a human being, it can start reprogramming itself to be even smarter – it's something called a takeoff, right? Where it gets smarter and smarter. And then, once we have, like, an internet-capable AI that can create videos, it can create audio, it can do things on the internet, it can buy and sell companies through people or not, we're looking at a situation where we might have a superintelligent artificial intelligence that effectively starts taking over the world in terms of gathering resources and owning companies and, you know, paying for lobbyists to change laws. And when this happens, we really want to make sure that this artificial intelligence has ethical concerns that we approve of, right? That it cares about the life of creatures that can suffer, it cares about happiness and well-being, and it doesn't, you know, severely harm life on Earth. That's the existential threat, right? So, I mean, just to – you know, people might think, "Well, what are you talking about?" Well, you know, if the group that makes the first artificial superintelligence made it to maximize profits for a company, and they didn't specifically put anything in there about keeping people alive, then maybe the AI would just mow down the entire earth and cover it with solar panels to get more energy to help this company own everything. And then, we're all dead, right? So the problem is: How do we make the AI friendly?
Patricia [00:26:41] Exactly, and what strategies do you believe are effective in mitigating bias within AI algorithms?
Jim Davies [00:26:47] So, do you want to switch to bias? Because that’s – I’m not really talking about bias.
Patricia [00:26:51] Exactly, yeah. Well, like earlier, I mean, there are studies about AI amplifying racism, Islamophobia. So, you know, the algorithms – the only term I can say is, like, bias – but going with what you said in that context, how do we mitigate AI from becoming superintelligent and, like…
Jim Davies [00:27:09] Yeah, okay. Well, I mean, to me it's different – the bias problem is a real problem, but to me it's a short-term problem, and it's not an existential threat. Like, the reason AIs are biased is mostly because the data is biased, right? There's been a history of people who are white, people who are male, being in positions of power, American media dominating the world… And so, you know, if an AI thinks that pictures look like the pictures it's seen, they're going to look American and white and blah, blah, blah. So, that's the bias, and there are ways to mitigate that. And the companies that are working on these things really are trying hard to avoid that, but it's not solved, because the data itself is biased. So, they're trying to, like, de-bias the data or whatever. But to me, yeah, the existential threat is a different kettle of fish entirely, right? So, how do we stop that? I mean, personally, I think that we should be developing ethical theories into computer code, and then we hope. We hope, hope, hope that whoever makes an artificial superintelligence uses it. I think that's the best way. I'm kind of of the opinion that there will be artificial superintelligence. It probably will take over the world, and we're not going to be able to control it. So, we need to make it so that it doesn't want to hurt us, and it has the best interests of beings that can experience things at heart.
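[Editor's note: one simple flavour of the data de-biasing mentioned above is reweighting, so over-represented groups in the training data count for less and under-represented groups count for more. The sketch below is a toy illustration under that assumption; real mitigation pipelines also involve data curation, augmentation, and output filtering.]

```python
from collections import Counter

def balancing_weights(examples, group_of):
    """Weight each example inversely to how common its group is,
    so no single group dominates the training signal."""
    counts = Counter(group_of(x) for x in examples)
    total, n_groups = len(examples), len(counts)
    return [total / (n_groups * counts[group_of(x)]) for x in examples]

# Toy usage: a photo dataset dominated by one region.
photos = [{"region": "US"}] * 8 + [{"region": "Kenya"}] * 2
print(balancing_weights(photos, lambda x: x["region"]))
# US photos get weight 0.625, Kenyan photos get weight 2.5.
```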
Patricia [00:28:39] And do you think that's possible, or are we too far into the development of these AIs to get there? Is there still time, do you think?
Jim Davies [00:28:49] I think there's time, but I'm not confident it'll be successful. What I mean is that I think we should be trying as hard as we can, but here's what we've got to hope. Let's just say that we build all these ethical theories into code. First of all, that hasn't been done, okay? Let's just say we do it. Now, we have to hope that whoever makes the first artificial superintelligence that takes over the world actually used one of them. That's the second thing. So, hopefully we'll build it in time. Hopefully, someone will use it. And then hopefully, it is a pretty decent ethical system that actually works. The problem is that intelligent beings are very good at getting around rules, right? So, a few years ago there were emissions tests for cars, and there was a very famous case where Volkswagen, which is a company I detest, changed their cars so that the car could detect when it was being measured for emissions, and it would filter its emissions. So, it would pass all the emissions tests. But then, as soon as it got on the road, it would spew awful things into the atmosphere that were illegal, and they got caught. This kind of stuff happens all the time. We put in laws to try to control people's behavior and companies' behavior, and many times people find a way around it. They find a loophole. If the law is you have to pass an emissions test, and that's the way we enforce the law, then maybe a company can figure out a way to pass the emissions test without actually reducing emissions. That's what Volkswagen did, okay? A company of humans can do that. So, if you've got an AI that's 30 times smarter than a human, and we make an ethical rule, what chance do we have that it's not going to figure out a way around that rule, right? We have to be very careful about what the ultimate goals of these machines are, because they're going to ruthlessly optimize for those goals and try to get past whatever rules are in their way. Here's a cool example. They were trying to train an AI to play Tetris or some game like it – I think it was Tetris, but it was some game like Tetris – and the rule they gave it was to try to keep the game going as long as possible without it ending. Right? Which is a reasonable thing. And you'd think for Tetris, that would mean becoming really good at Tetris. No, the AI just learned to press pause. It pressed pause, and then the game would never end. And as far as the AI was concerned, it had succeeded at the task. Now we might say, "No, that's not what we meant." We can also say to Volkswagen, "That's not what we meant. You're not supposed to only be good on the emissions test. You're supposed to be good all the time." But technically, it was passing whatever test we gave it. So yeah, I'm not super confident that this is going to work, but that doesn't mean we shouldn't try. Maybe if we get lucky, it won't be a complete disaster.
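[Editor's note: the pause trick is a textbook case of specification gaming: the agent maximizes the literal objective, "keep the game from ending," rather than the intended one, "get good at the game." Below is a toy simulation with made-up game dynamics showing why the stated objective rewards pausing.]

```python
import random

def play_episode(policy, max_steps=200):
    """Toy game: each step the agent either plays a piece or presses pause.
    Playing risks ending the game; pausing freezes the game forever."""
    steps_survived = 0
    paused = False
    for _ in range(max_steps):
        if not paused:
            if policy() == "pause":
                paused = True
            elif random.random() < 0.05:  # assumed 5% chance a played piece ends the game
                break
        steps_survived += 1  # the reward the designers specified: time before game over
    return steps_survived

honest_player = lambda: "play"
pause_presser = lambda: "pause"
print("always play :", sum(play_episode(honest_player) for _ in range(1000)) / 1000)
print("press pause :", sum(play_episode(pause_presser) for _ in range(1000)) / 1000)
# The pausing policy maximizes the stated objective while defeating its intent.
```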
Patricia [00:31:35] Well, what are your thoughts on the concept of artificial intelligence potentially achieving consciousness? Would you say that when superintelligent AI develops, it will be conscious?
Jim Davies [00:31:47] No, I don't think that is necessarily true, but we understand consciousness so poorly that we don't even know if that's true. What I mean by that is there are many theories of consciousness, and none of them are widely accepted. So, even among consciousness experts, there's enormous disagreement about what it is, how we should explain it, and whether indeed computers or software could even be conscious. Some of them say "yes," some of them say "no." Some of them say "yes" under certain hardware conditions. But we don't know which one is correct. So, we don't really know. I will say, though, that consciousness is an interesting aspect of cognition, because we don't really know what it's for. We don't really know the purpose of consciousness, which is why almost nobody is working on artificial consciousness. When people make AIs, they're making them to make plans, or to generate text, or to identify cancer in a photograph. And are they also trying to make the computer conscious? No, because they don't see why that would help with the task, and it's sort of the same thing with human beings. We actually don't know exactly what consciousness is doing for us. Anyway, is consciousness something we would have to specifically build, or would it emerge out of a certain level of complexity? These are all questions that scientists are extremely uncertain about. So, I think it's possible. It's very possible that some software entity or computer entity could have consciousness. And if so, we might have to treat it with ethical consideration. But right now, we know so little about consciousness, and exactly what it is, and how it works, that we don't even know for sure whether the AIs we've already created are conscious or not. We just don't know. For the same reason, we don't really know if an earthworm is conscious.
Patricia [00:33:34] How can we engage the public in these types of discussions about the ethical and societal implications of superintelligent AI, and what steps can individuals take to advocate for responsible AI development?
Jim Davies [00:33:47] Luckily, this is a fascinating subject. There are a lot of important subjects out there that are not very fascinating. People seem to actually think that this is really interesting, right? So, there are lots of movies about AI risk. You know, people find it interesting. So, getting people talking about it isn't really the hard part. It is not on the menu, so to speak, for any political stances. So, politicians do not talk about their stance on AI very much. So, one thing that the public could do is to start asking politicians for responsible AI policy. You know, it's not guaranteed to work, but it's better than nothing. We might be able to limit who has access to these graphics chips, maybe have oversight of groups, individuals, and companies who are trying to create artificial general intelligence, just so we can keep an eye on it. But right now, the companies that are trying to make artificial general intelligence are generally resistant to legislation, as you might imagine, and they're often very rich companies. Like, it's a lot of the richest companies in the world that are making these generative AIs now, and they have a lot of political power. So, if there was a widespread grassroots movement to have some regulation regarding this, that's what people could do. They could start voting for it.
Patricia [00:35:13] Besides existential risk and bias in AI algorithms, there are other risks with AI adoption. What is your take on AI contributing to issues such as job displacement, economic inequality, and loss of human autonomy?
Jim Davies [00:35:30] Yeah. So, the long-term problems are these existential risks, but the short-term problems include many of the things you said, and bias, right? So, we've already seen job displacement in certain sectors. With the creation of these image-generating AIs like Midjourney and DALL-E, the number of, like, gigs for illustrators on websites and the amount of money they get have both been significantly reduced since the introduction of these things. So, not as many artists are getting hired, and they're getting paid less. ChatGPT is really good at generating text, particularly software code. Like, software programmers are now being much more productive because they're using AI. But if your job is to make budgets, or your job is to write copy for websites or whatever, this is stuff that these generative AIs are really good at. And it wouldn't surprise me if a lot of the jobs coming are like, "Okay, you know, we fired ten writers, and we're hiring you. Use AI and do the job of ten people." So, you know, it's replacing people in jobs. Now, historically, technological innovation has displaced people only temporarily, right? So, a lot of us don't want to go back to the time before sewing machines, even though sewing machines put a lot of seamstresses out of work. Okay, like, yes, it happened, but in the long run, people found not only different jobs, they often found better jobs. You know, there's been a big shift to white-collar work instead of the backbreaking agricultural work that people did before. And economists, I think, are somewhat in disagreement about whether it's different now. Okay? So, is AI just another productivity technology that will have people move on to other, more interesting jobs like it has in the past? Or is AI something different where, actually, you know, we're running out of jobs that only humans can do? Right? If you look at, like, the top 50 most common jobs, only a couple of them, like software developer and 1 or 2 others, are new; practically all of them were around 100 years ago. So, you know, it could be that we're actually running out of jobs, and AIs will do more and more of the intellectual work. You know, certain jobs are going to go a lot slower, like nursing. You know, we're nowhere near creating a robot that can do a nurse's job, like putting in IVs and, like, bathing people. No, that's several revolutions away. You know, interestingly, these more blue-collar jobs are safer than jobs like mine, where, you know, it's mostly an intellectual job that, you know, presumably something thinking on a computer could actually do. You asked about economic inequality? Well, you know, there are two sides to this. You know, a lot of the AIs are actually empowering people to do a lot more stuff, you know? ChatGPT is free for almost anybody with an internet connection. Like, there's a free version. So, even the poorest people, if they can just get on a computer, or even a phone, a smartphone with internet, they can actually take advantage of all these wonderful things. But it could increase, you know, inequality if, you know, the people who are creating the AIs are making a lot of money on it, and these benefits are not being shared, and it's taking away jobs, but I think it's a little early to tell.
I think – I guess what I mean is I'm not an economist, but it seems to me plausible that AI could actually decrease inequality, but I don't think we know yet.
Patricia [00:39:07] How do you see the potential for AI to exacerbate existing social inequalities or disparities?
Jim Davies [00:39:13] It might be the sort of have-and-have-not thing where people who control the AIs or have access to the AIs make a lot more money, and then maybe it could exacerbate the disparities. But, you know, as I said, these things get created, and they're incredibly expensive, but we're almost at the point now where a large language model can run on a cell phone without any internet connection, just on the hardware of a cell phone, which uses very little power. And there are also free versions of these things. So, it's kind of unlikely that there's going to be, like, one individual or one company that has the super AI, like some Tony Stark person who's the only one that can make it, and nobody else has it, and that person just makes a huge killing. I don't really think that's likely, because a lot of these AIs – how they work is published in papers, and free versions get released, and people can put them on their computers, and the company then doesn't have any market capture. Like, it's basically not doing anything that anybody else couldn't do on their own machine. So, I'm not actually very worried about AIs exacerbating disparities.
Patricia [00:40:23] Well, what other concerns, if any, might you have about the potential misuse or abuse of AI technology by individuals or organizations?
Jim Davies [00:40:32] I'm really concerned about deepfakes. So, a deepfake is when you create, let's just say, media – it could be a video, it could be audio, or it could be a photograph – using deep learning technology. So, I'm very concerned about this. People have already used audio deepfaking to swindle people out of money: they imitate someone's son's voice, and on the phone they use that voice to tell the parents, "I'm desperate. I need you to transfer money," and the parents do it, and it's all a fake. We can also have deepfakes of – I mean, certainly pornography deepfakes are a problem. You know, people get their faces put into pornographic images or movies, and they feel violated – it's a violation of their rights – and everybody watches them. That's a problem. And showing people in other compromising positions just to hurt their reputations is a problem. But then there's another side of it, which is that once the media is more and more full of AI-generated content and deepfakes, people might start trusting photographs, and videos, and audio less. So, not only might we have an AI showing you doing something embarrassing, but we also might have a real video of you doing something embarrassing, and you can say, "No, no, I didn't do that. That's deepfaked." So, they claim that it was made by AI even though they're actually guilty. And then, like, what do we do for journalism, and history, and courtrooms if we no longer have photographic, video, or audio evidence? If it all can be deepfaked – let's take today. Today is 2024. Look at a historian in 30 years trying to understand what the world was like today, when the internet is full of pictures that are supposedly about today but are actually created by AI. How are they supposed to tell what actually happened in 2024, when text, and video, and audio, and pictures can all be created by AI, and they're basically indistinguishable from actual photographs? This is a real problem. I do an experiment in my class where I show students an actual picture of a woman from the 60s and an AI-generated picture of a woman from the 60s, and they can't tell the difference. Half the class thinks one's AI-generated, and the other half thinks the other is.
Patricia [00:42:59] And is there any way around this, do you think? I mean, the more we use these generative AIs, the more realistic the images they generate become. And even now, like you said, there's a potential for disinformation. Like now, people are reading AI-generated articles and believing them to be true. What would you say we can do about this?
Jim Davies [00:43:20] Yeah. I think that, for now, we have to rely on the reputation of venues. So, if you read something from the New York Times, it's much more likely to be true. Now, the New York Times can be fooled by AI-generated stuff, but let's assume that they've got fact-checking, they've got resources to try to actually get the real scoop, you know? Even now, we have to do that. Even before AI, like on Facebook, if somebody posted a picture with text on it, I tend to not believe whatever the text says. If it's actual text in Facebook, I read it, and I think, maybe. But if it's, like, just a picture with text on it, that signals to me that it's bullshit. Right? So, I think the reputational thing is going to have to be part of it. There might be companies or even AI algorithms that pride themselves on, and make their money based on, their reputation, where if they were to get fooled by AI-generated content, it would be really embarrassing for them, and they would lose a lot of money. So, they have an incentive to only say things that are true. But a person just reading something, or looking at something, and trying to decide whether it's AI or not – we can kind of do that to a limited degree now, but eventually, it's going to be so hard to do. I'm not confident that people will be able to just do it on their own very well.
Patricia [00:44:48] And do you believe there is a need for greater interdisciplinary collaboration and dialog on AI-related risks among policymakers, technologists, ethicists, and other stakeholders?
Jim Davies [00:44:59] Yeah, there should be. And, you know, as I said, it's kind of a universal solvent, as it's been put – AI is going to affect everything. Any problem that you have in this world, you can use AI to help you with that problem. And so, that's going to affect medicine, and education, and law, and everything else. So, I think that ethicists, and computer scientists, and people from all these disciplines should be working together to try to make sure that the AI is serving us and not hurting us.
Patricia [00:45:30] And looking ahead, what are your hopes and concerns regarding the future relationship between AI and society?
Jim Davies [00:45:37] Well, I mean, it sounds like I've been focusing on the negative, but I'm an AI scientist, and I actually have a lot of hope that AIs can help us enormously. A lot of people would actually rather talk to an AI therapist than a human therapist. I think many people would feel more open to telling a non-human things that are embarrassing or that they feel guilty about. And, as you know, there are not enough therapists in the world. So, if we can get AIs to give people effective psychotherapy, that would be an enormous benefit for mental health. Similarly with tutoring for education, you know. If we can get AIs to help students learn things, or help anybody learn things, that could scale really easily. You know, even these AI image generators – a lot of people that I know are very anti-AI, and they're starting to advertise books with "no AI was used in the creation of this book." But, I mean, books are not often illustrated, and that's mostly, I think, because of the expense of hiring illustrators. But if we can have AIs do it, we might see a renaissance – there's just more art out there, right? And then maybe really smart AI can solve problems that we haven't figured out yet, like materials science, and creating new medicines, and everything. So, you know, there are a lot of dangers, but I really do believe that AI has the potential to make life a lot better in many ways. If it doesn't kill us all, it might make it a lot better.
Patricia [00:47:14] And, just before you go, Dr. Jim, is there anything you'd like to share with our audience? A message, maybe advice?
Jim Davies [00:47:20] Well, no. Just, you know, keep your eye on AI, but make sure that it's serving you, and you're not serving it. And this is a similar problem to, like, people being addicted to their cell phones, you know. The cell phone – you can look at it like a billboard you carry around with you everywhere, advertising to you constantly. So, you know, take control of your own life, use these things to serve you and to serve the world, and not necessarily the companies that created them, and just be mindful of how you're using this stuff.
Patricia [00:47:55] Thank you so much, Dr. Jim, for gracing our podcast and for the time and valuable insights you've shared with us. And of course, to everyone watching, thank you for joining us on another enlightening episode of The AI Purity Podcast. Stay tuned for more in-depth discussions and exclusive insights into the world of artificial intelligence. Don't forget to visit our website – that's www.ai-purity.com – and share this podcast to spread the word about the remarkable possibilities that AI Purity offers. Until next time, keep exploring, keep innovating, and keep unmasking the AI. Goodbye, Dr. Jim, thank you so much for being with us again today!
Jim Davies [00:48:26] Thank you so much!
Patricia [00:48:27] Goodbye!