Maura Grossman on Navigating Ethics and The Future of Responsible AI
Maura Grossman [00:00:00] Computer scientists often feel that their job is to maximize the algorithm. I try to help them own where that data came from, whether that data is representative, whether that data is fair. Who benefits and who doesn’t benefit? Who’s included in its use when it’s designed? These things are important for them to think about.
Patricia [00:00:37] Welcome to another episode of The AI Purity Podcast, the show where we explore the complex intersection of artificial intelligence, ethics and its societal impact. Today’s distinguished guest is a research professor at the David R. Cheriton School of Computer Science at the University of Waterloo. She’s also an adjunct professor at Osgoode Hall Law School and an affiliate faculty member at the Vector Institute for Artificial Intelligence. She is best known for her groundbreaking work in technology-assisted review, a supervised machine learning approach that has transformed document review in high-stakes litigation. In addition to her research and consulting work, our guest teaches courses on AI Law, Ethics and Policy, where she explores the legal and ethical implications of AI. Her insights on responsible AI use, deepfakes and the challenges the legal profession faces with AI-generated content are shaping how we think about the intersection of law and technology. We’re excited to dive deep into these topics and more, including how AI can be responsibly used in educational settings and the broader implications of AI in our society. Welcome to the podcast, Maura Grossman! Hi!
Maura Grossman [00:01:43] Hi, Patricia! Thanks for having me.
Patricia [00:01:45] How are you doing today? Thank you so much for being here. Maura, please tell us, how did your background in law lead you to the world of artificial intelligence and machine learning?
Maura Grossman [00:01:54] I was a litigator at a law firm in New York handling lawsuits, and many of the lawsuits we handled had massive amounts of electronic data. So you might collect 10 million emails, and maybe only a hundred exhibits will ever make their way to court, if that, and maybe ten of them really matter. How do you find, in 10 million emails, 2 million emails, whatever it is, those important documents quickly without having to break the bank? I was dealing with this problem at the law firm, and it seemed to me that having junior lawyers eyeball these documents one at a time, one at a time, one at a time was neither particularly efficient nor effective, so I started to look for a technical solution to this problem. And I came across Dr. Gordon Cormack, who was at the time one of the top spam gurus in the world, and it occurred to me that spam versus ham was not a terribly different problem from important document versus not important document. So I went up to him at a computer science conference and asked him if he would be interested in working with me on this problem. And that’s what started me in the world of computer science and machine learning. We were the first; in 2011 or so, we published what has become a very famous study showing that supervised machine learning could actually do this task better than lawyers.
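For readers curious what that spam-versus-ham analogy looks like in practice, here is a minimal, purely illustrative sketch of a supervised classifier that learns from a small set of human-labeled documents and then ranks the rest of a collection by predicted relevance. The documents, labels and library choices (scikit-learn's TfidfVectorizer and LogisticRegression) are assumptions made only for this example; it is not the system from the Grossman and Cormack study.

```python
# A minimal sketch of the idea behind technology-assisted review:
# train a supervised classifier on a small set of human-labeled documents,
# then rank the rest of the collection by predicted relevance so reviewers
# read the likely-important documents first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled seed set: 1 = relevant to the lawsuit, 0 = not relevant.
seed_docs = [
    "Re: Q3 pricing agreement with competitor",
    "Lunch menu for the office party",
    "Draft term sheet for the disputed acquisition",
    "Fantasy football league standings",
]
seed_labels = [1, 0, 1, 0]

# The rest of the (much larger) collection, unlabeled.
collection = [
    "Forwarding the signed acquisition term sheet",
    "Reminder: parking garage closed Friday",
]

vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Rank unreviewed documents by predicted probability of relevance.
scores = model.predict_proba(vectorizer.transform(collection))[:, 1]
for doc, score in sorted(zip(collection, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

In real technology-assisted review workflows this is typically iterative: reviewers label the highest-ranked documents, those labels are fed back in, and the model is retrained on the growing labeled set.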
Patricia [00:03:42] And would you say that was what first sparked your interest in digital evidence and the application of AI in the legal field?
Maura Grossman [00:03:48] Yes, it did. And as we continued to work together, both in law and in health, where we’ve also done a lot of work, when I came up to the University of Waterloo, I noticed most of the students really weren’t thinking a lot about the ethical, policy and social implications of what they were doing. So I started to suggest that we have some courses in this area that were more interdisciplinary in nature. One of the first courses I taught actually combined law students and graduate computer science students in the same class to look at these issues together, because the lawyers sort of lack the technical background and the technical people aren’t as steeped in the socio-political sorts of issues, and having everybody in the room at the same table seemed like a good idea to me. And it’s sort of grown from there.
Patricia [00:04:47] And was that the first of its kind at the University of Waterloo, the course where you teach about the ethical implications of using tech together with law?
Maura Grossman [00:04:55] My understanding is that it was the first of its type not only at Waterloo but in North America. It has since become more common to have these cross-disciplinary courses, but I was not aware of anything before that combining the two, and it was actually quite challenging, and not only because of the technology. I mean, now we’re all used to Zoom and everything like this, but this was pre-pandemic, so I actually had to find rooms in both schools that had the technology to do this. And one school graded with letters, and the other school graded with numbers, so there were all kinds of roadblocks, but I got it done. I now teach in both places, but I actually don’t teach the class simultaneously between the two. And the reason is, wherever I am live, there tends to be much more discussion, and the room that’s remote at the time sort of gets ignored. You can have a conversation if everybody in the call is on Zoom, but it’s very hard if half the people are in the room and half the people aren’t. So I now teach the two courses separately, but it was a lot of fun doing them together, and I always enjoy, when I’m teaching both at the same time, seeing how different the thinking is in the two classes.
Patricia [00:06:27] I can definitely see that happening. I think it’s really nice that we were also able to adapt, especially during the pandemic, and to still have classes even though everything was through Zoom; we might not have been able to do that had the pandemic happened a few years earlier. But one of the challenges in the justice system that you’ve also spoken about: you’ve mentioned concerns about AI-generated evidence in the courts, such as deepfakes. I wanted to ask you, how do you foresee this impacting the integrity of trials in the future and the justice system overall?
Maura Grossman [00:06:58] We are starting to see AI make its way into the courts, and it comes in one of two ways, one of which is, I think, easier than the other. The first is when you and I both agree something is the product of an AI system. So say you were interviewing for a job, and you didn’t get the job because of an AI decision, and you decide to sue because you think that was discriminatory. That’s the easier case, because it’s really no different from other scientific and technical evidence. You have to look at the validity and the reliability of the tool, what the inputs were, how it was trained and so forth. That’s fairly straightforward. It may be a black box for the courts, but I think they have the tools to deal with that. The harder question is when you and I don’t agree on whether the evidence is a deepfake. So you say you have a very nasty voicemail from me disparaging you and defaming you and saying all kinds of horrible, threatening things, and I say, “No, that’s not me. I never made that voicemail.” And it is easy today for somebody to take a minute of your voice (you do podcasts), go on to a free online tool and make a very compelling fake clone of your voice that can then be admitted in court pretty easily, because the standard for getting it admitted is low. All I have to do is bring somebody in who is familiar with your voice and they say, “Yes, that’s Patricia’s voice. I’ve heard her voice a million times. Of course that’s her.” And you’re saying, “No, it’s not,” in the same voice I’m hearing. This is going to wreak real havoc for the courts. Imagine you are a family court judge, and I come into court what’s called ex parte, only me, I don’t bring the other side in, and I say, “My husband is threatening me. He’s threatened to lock me in the trunk with the kids and drive the car into the lake, and I want immediate custody, and I want a protective order to keep him away from the house. Here, I’m going to play you the recording on my phone.” Well, how on earth is a judge supposed to know whether that’s a deepfake or not? So I really foresee this creating all kinds of challenges for the courts, and I’ve been pretty proactive with a retired US federal judge in proposing some rule changes, both in the US and Canada, to try to address these challenges.
Patricia [00:09:50] And could you share some of the things the courts can do to adjust to this potential consequence, when eventually the court system becomes flooded with AI-generated evidence and it’s harder to authenticate?
Maura Grossman [00:10:04] I think we’re going to need experts, and that is going to increase the cost and the time of trials, because now there’s going to be a trial within a trial about whether the evidence is real or not. But experts can look at the pixels and can look at the metadata, the data about the data, and figure out when things were created, by whom, on what device and so forth. But that’s going to take us down an extra rabbit hole in a process that’s already very expensive and time-consuming. I think judges have to learn what questions to ask, and I know later we’ll talk about watermarking and other things like that, so we’re hoping there’ll be some technical relief. But I think for the foreseeable future we’re going to be relying on experts in many cases. They won’t be able to say definitively one way or the other, but they can certainly opine that it looks like something has been manipulated in some way. Or alternatively, if you were having open heart surgery at the time you supposedly left me a voicemail, well, that’s not going to be possible if you were unconscious. So there may be ways to corroborate, or a failure to corroborate, that may be useful.
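As one illustration of the "data about the data" she mentions, here is a small sketch that reads the EXIF metadata embedded in an image file using the Pillow library. The file name is hypothetical, and real forensic examination goes much deeper (sensor noise, compression history, pixel-level artifacts); this only shows the kind of first-pass questions an examiner asks about creation time, device and editing software.

```python
# A minimal sketch of a first-pass metadata check: pull the EXIF tags out of
# an image and look at when and on what device it was supposedly created.
# "evidence_photo.jpg" is a hypothetical file used only for illustration.
from PIL import Image, ExifTags

def summarize_exif(path: str) -> dict:
    """Return a human-readable dict of the EXIF tags present in an image."""
    image = Image.open(path)
    exif = image.getexif()
    readable = {}
    for tag_id, value in exif.items():
        tag_name = ExifTags.TAGS.get(tag_id, str(tag_id))
        readable[tag_name] = value
    return readable

if __name__ == "__main__":
    info = summarize_exif("evidence_photo.jpg")
    # Missing or inconsistent fields (no camera model, a creation time after
    # the alleged event, editing software listed) are reasons to dig further.
    for field in ("Make", "Model", "DateTime", "Software"):
        print(field, "->", info.get(field, "<absent>"))
```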
Patricia [00:11:29] And I just wanted to ask, like, have you seen cases recently where there has been an issue of AI-generated content being submitted to the court?
Maura Grossman [00:11:38] There have been a couple of cases. One is called State of Washington versus Puloka. It was a criminal case in which a bystander had recorded the alleged murder on a phone, and the defendant, I think, wanted to argue, “Well, how do you know that’s not my cell phone as opposed to a gun? The video is very grainy.” So a witness for the defendant used AI to regenerate or enhance the video. The court ultimately did not let the enhanced video in, saying that the person who made it said machine learning was used, but they couldn’t explain how it was made, and it wasn’t generally accepted in the forensic community as a reliable tool. So that’s one case. And then there have been a couple of cases where people have tried to bring in generative AI evidence to show either the definition of a term or that a price or charge was reasonable, something like that. But so far, the courts have not let a lot of this material in.
Patricia [00:13:06] And you’ve discussed earlier the possibility of using watermarks or markers to identify AI-generated content. How effective do you think this will be in practice, especially as AI tools become more sophisticated?
Maura Grossman [00:13:18] My understanding from many people I’ve talked to is, one, that a watermark can be removed fairly effectively, and two, that a lot of the detection tools are not necessarily reliable. And that’s in part, at least with audio and video, a function of the generative adversarial networks, the GANs, that are used to build these tools. You have a tool that creates content and a tool that discriminates content, and the discriminator gives the generator feedback, and the generator gets better at creating realistic content. So as soon as you develop a discriminator that’s very good, your generator gets better. So it’s been a technical challenge up to this point, and it’s also possible to take something that was made in a watermarking system, bring it to a system that doesn’t watermark and then say, reproduce this without the watermark. So there are workarounds. I think it’s going to be a challenge moving forward. I’m much more hopeful about tools like C2PA, where you mark something at the time of creation to show it’s genuine or to show it’s a deepfake, and that stamp, which comes from the very start, indicates every change made to the media after that. But it’s going to be years before that’s in every device and it’s standardized. So we’ve got a technical challenge for a while, I think.
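The generator and discriminator feedback loop she describes can be shown in a few lines. Below is a toy sketch, assuming PyTorch, that trains a generator to mimic a simple one-dimensional distribution rather than audio or video; the point is only that the discriminator's feedback is exactly what makes the generator more convincing, which is why a better detector tends to yield a better faker.

```python
# Toy GAN: the discriminator learns to tell real samples from generated ones,
# and its gradients push the generator toward more realistic output.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: roughly N(3, 0.5)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator step: learn to label real as 1 and fake as 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: use the discriminator's feedback to look more "real".
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples cluster near the real data's mean.
print("mean of generated samples:", generator(torch.randn(256, 8)).mean().item())
```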
Patricia [00:14:55] And do you think it is possible for us to reach a point where every piece of AI-generated content would be properly watermarked or authenticated in real time?
Maura Grossman [00:15:05] Every piece? No. But I think if you look, for example, at journalists, they have every incentive to use that, because they want you to believe their reporting and the picture that accompanies their story, so they’re going to adopt it. The challenge will be to get every single person on the planet to adopt it, but I think we will get a majority at some point, and then we’ll really only have to worry about where the content came from in the cases where there isn’t this kind of provenance marker from the beginning.
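The provenance idea behind standards like C2PA, marking content at the moment of creation so that any later change is detectable, can be illustrated with a very reduced sketch. This is not the actual C2PA specification, which uses certificate-based signatures and a manifest recording each edit; the example below just uses Python's standard library to show why tampering after capture breaks the mark.

```python
# Simplified provenance illustration: sign media at capture time; any later
# modification no longer matches the signature made at creation.
import hashlib
import hmac

DEVICE_KEY = b"secret-key-baked-into-the-capture-device"  # hypothetical key

def sign_at_creation(media: bytes) -> str:
    """Produce a provenance tag bound to the exact bytes that were captured."""
    return hmac.new(DEVICE_KEY, hashlib.sha256(media).digest(), hashlib.sha256).hexdigest()

def still_authentic(media: bytes, tag: str) -> bool:
    """Check whether the media still matches the tag made at creation time."""
    return hmac.compare_digest(sign_at_creation(media), tag)

original = b"RAW_AUDIO_BYTES_FROM_THE_MICROPHONE"
tag = sign_at_creation(original)

print(still_authentic(original, tag))                     # True: untouched since capture
print(still_authentic(original + b"spliced words", tag))  # False: altered after capture
```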
Patricia [00:15:43] You were talking earlier about bringing in experts for some of these challenges, especially now that generative AI is fairly new. Do you think the proprietors of this technology have some sort of responsibility to adjust, whether in law, education or wherever these generative tools are becoming a challenge? Do you think they have a responsibility to create those watermarks so that people can tell whether a certain piece of content is AI-generated?
Maura Grossman [00:16:14] I do think that big tech has some role to play in this. What’s challenging is the following. Say we have that HR dispute, and you say that the tool discriminated against you, and that’s why you didn’t get hired. So we’re in a lawsuit, and I say, “I want to test the tool to show it’s not valid, it’s not reliable, and I want to see the training data to see if it was representative of who the tool was used on.” And then the company says, “Sorry, that’s proprietary information. You can’t have it.” Well, I don’t think you can have it both ways, saying this is a good tool, you should be using it, and we can all count on it, but nobody can look at it, test it and check it. The other challenge is coming up with something that either can’t be removed or can’t be placed on something that is actually real content. So I think there are technical challenges, but I do think that big tech has a role to play. And from what I understand, many of the companies are involved in C2PA and are very much interested in trying to come up with methods to help distinguish between deepfakes and real evidence.
Patricia [00:17:42] You had an interview where you said AI is not inherently ethical, but rather it can be used in ethical or unethical ways. What steps can be taken to ensure AI is used responsibly, especially in high stakes areas like law or education?
Maura Grossman [00:17:58] Right. The point I was making in the interview was that this is a tool. Electricity’s a tool. Fire’s a tool. A hammer’s a tool. And it can be used in good ways or bad ways. We’ve had a lot of ethical guidelines promulgated. Unfortunately, I don’t think that’s been enough; it hasn’t had enough teeth to ensure compliance, and we have too much surveillance, violation of privacy, things like that. My guess is we’re going to need regulation of some sort. Now, whether it should be done the way it was done in the European Union with the AI Act is debatable, but I think having nothing is probably not a particularly good idea. And I think you have to look at what it is we’re worried about and deal with those things first. And, you know, existential risk and robots taking over the world is not the highest priority. We can’t lose sight of that possibility, but there are many problems much closer to home, like privacy violations, misinformation and discrimination, that we should be dealing with first.
Patricia [00:19:15] And what do you think are the key elements that AI regulations should address to minimize harm such as bias and unfairness?
Maura Grossman [00:19:24] So I think if you look at what appears in most of the ethical guidelines, and what I would consider the minimum conditions for AI to be responsible and trustworthy, one is accuracy: that the AI is valid and reliable, it measures or predicts what it’s supposed to, and it does so consistently under similar circumstances. It should be unbiased and fair, to the degree we can agree on what that means. It should be safe, secure and privacy-protecting. It should be reasonably transparent and explainable, and context matters there; we may not care as much for a low-stakes recommendation compared to something that suggests you need to have brain surgery. Accountability: somebody needs to be responsible when things go wrong. And I think we need informed consent, particularly in health and legal areas, where people need to be told how their data is being used or how tools are being used on them. The problem is I don’t know if you can achieve all of these at once, right now at least.
Patricia [00:20:38] And how do you view the current state of AI regulation globally and what can countries like the US and Canada learn from the EU’s effort to regulate AI?
Maura Grossman [00:20:47] So the EU has taken what is called a risk-based approach. They have ranked AI algorithms in terms of their potential negative impact, and the riskier a tool is, the more it is subject to regulation. So something like a spam filter or a recommender system might have virtually no regulation at all. One level up would be something like dealing with a customer service bot: you should at least know it’s a bot. And then you get into areas like health, the justice system, employment and education, where those are high-risk, high-impact systems and they have a lot of requirements. The US has had nothing federal other than guidelines and some attempt by the White House to promulgate, at least through executive order, some kinds of rules for government agencies, things like that. But it’s been mostly fractionated: very narrow rules promulgated in very narrow jurisdictions, like a particular city or state. New York City, for example, has a rule about the use of AI algorithms in hiring, but that’s only for New York City; it’s not broader than that. There are other states and jurisdictions that have rules for biometrics or public surveillance using facial recognition by police and so forth. But it’s been very broken up, and it’s been a moving target. And it even becomes challenging internationally. Say I have my autonomous vehicle, and it’s programmed in the United States to protect the passenger at the expense of everybody else, and I drive it over the Peace Bridge into Canada, and now say Canada has a different set of values and says it should protect the four pedestrians. Well, how are we going to handle that? Are we going to change the algorithm in the car as it goes over the bridge? So these cross-jurisdictional rules are very, very complex to come up with, and they sometimes have unintended consequences. I know a lot of people feel that the AI Act is going to hold back innovation in Europe, so you have to regulate, but regulate in a way that doesn’t leave you behind everybody else on the planet in terms of developing AI. And I think Canada is somewhere between the US and the EU, certainly in terms of its privacy regulations. And there is a bill, C-27, being considered that is in many ways similar to the EU AI Act, but it hasn’t passed through the legislature yet. It’s still under consideration.
Patricia [00:23:51] Are these soft laws enough, these ethical guidelines enough, or do you think firmer legislation is absolutely required?
Maura Grossman [00:23:58] Absolutely required. I think we haven’t seen success otherwise. If you look at big tech, as soon as there’s an economic downturn, the ethics group is the first one to go. Or sometimes you have situations like the researcher whose work showed that facial recognition algorithms don’t work well on women of color, and who, depending on who you believe, was either pushed out, fired or quit because she felt it was untenable to work there. But that’s not a solution, when the people who flag problems end up not working in the places that need that help.
Patricia [00:24:39] Now, I wanted to talk about responsible AI use in data science. When you teach your course on law, ethics and policy, what are the main ethical concerns you emphasize to your students?
Maura Grossman [00:24:50] The main course that I teach at the moment focuses on discrimination, surveillance and privacy violations, because I think those are concrete and those are things that my students can actually do something about, and in fact they may be in the only position to do something about those things. So that’s my focus, but we certainly do cover other ethical areas. What I’m trying to teach them is how to spot issues, not how to solve every problem, because in one semester or less, I can’t make somebody ethical. What I can do is help them see where the issues are. “Oh, I just designed something that’s not going to work very well for people with disabilities. Maybe before I ship this, I ought to think about what tweaks I can make to make this more inclusive.” So I’m hoping that if I can help them spot the issues and teach them how to communicate those issues in a compelling and convincing way, they’ll all go out and hopefully do the right thing.
Patricia [00:26:02] And can you share some of the key principles that you emphasize regarding the responsible use of AI in both academic and professional settings?
Maura Grossman [00:26:11] I point out to the computer scientists that they often feel their job is just to maximize the algorithm: make it the most efficient, the fastest, or something like that. And I try to help them own where that data came from, whether that data is representative, whether that data is fair. Who benefits and who doesn’t benefit from the application? Who’s included in its use when it’s designed? All of these are important issues for them to think about, and I want them to think about them broadly. We’ll have one class where we do talk about existential risk or loss of jobs, things like that, but what I’m trying to do is give them the tools to think more broadly and to feel like it’s their problem.
Patricia [00:27:00] How do you help students understand the potential biases in AI systems, and what strategies do you teach them to minimize these biases in the development of AI tools?
Maura Grossman [00:27:11] So I use both recent media, where you can always find tons of examples, and research studies, and we do case examples. For example, the tools that predict criminal recidivism: are you likely to commit another crime within two years? We look at what data this was trained on and what some of the questions on this test are. Oh, it doesn’t ask race, but it asks 15 things that are correlated with race; maybe race is coming in as a proxy variable. And I might give them a paper to write saying, you are a consultant for the company that’s designing one of these, and they’ve asked you to try to make it fairer. What are you going to do about that? Or we may look at the five times facial recognition algorithms misidentified people of color when analyzing grainy surveillance tape, and the five people, all of whom were Black, who were arrested in the United States using these tools. Okay, what’s the problem here? How can we make these better? So I use a lot of case examples from the current news. I have a section every week called From the News where we find some disaster from the past week, pick it apart and talk about how we would do it differently. Does it mean having a more diverse team of developers? Does it mean having a stakeholder focus group? Does it mean having a discussion about whether we should have different weights or different variables? So I’m constantly trying to raise that consciousness: teaching them to fish rather than giving them a fish for dinner.
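A concrete version of the proxy-variable exercise she describes might look like the sketch below. The dataset is entirely made up, and the check (a simple correlation between each candidate feature and a protected attribute) is only a first screen, not a full fairness audit of any real recidivism instrument.

```python
# Even if a risk tool never asks about race, features that correlate strongly
# with race can let it in through the back door as proxies. The data here is
# invented purely to show the check.
import pandas as pd

df = pd.DataFrame({
    "race_group":        [0, 0, 0, 0, 1, 1, 1, 1],       # protected attribute (not a model input)
    "prior_arrests":     [1, 0, 2, 1, 5, 4, 6, 5],       # candidate model feature
    "neighborhood_code": [3, 3, 2, 3, 9, 8, 9, 9],       # candidate model feature
    "age":               [25, 40, 33, 52, 27, 41, 30, 48],
})

protected = df["race_group"]
for feature in ["prior_arrests", "neighborhood_code", "age"]:
    corr = df[feature].corr(protected)  # Pearson correlation with the protected attribute
    flag = "possible proxy" if abs(corr) > 0.7 else "ok"
    print(f"{feature:18s} correlation with protected attribute: {corr:+.2f}  ({flag})")
```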
Patricia [00:29:08] And can you share how you approach the issue of data privacy and the ethical handling of sensitive data when working with AI?
Maura Grossman [00:29:16] We often start with general polls, like: are you more in favor of your personal privacy or convenience? So, would you rather know if your friends were in the neighborhood, or would you rather your location be private? And often at the beginning of the course, they’re perfectly happy for everybody to know everything about them, because “I’m not doing anything wrong, so what’s the big deal?” Then we start to dive into examples like the Uyghurs in China, or people in Turkey who got wrapped up in all kinds of legal and law enforcement problems because of certain apps they had on their phone. And we’ll talk about the pros and cons of surveillance in places like Singapore, where the examples are closer and closer to home. Eventually, I think, they start to pick up on the idea that this can happen to them too. So one example we start with is Monsignor Burrill, who was a Catholic priest, very high in the church, who was using Grindr, which is a gay hookup app. He thought it was supposed to be private and anonymous, except that somebody matched his device ID outside the parish, and then matched it again when he was on a trip in Las Vegas at a bathhouse, figured out all of them were him, and then outed him. And then people start to see, “Oh, that could happen to me. Somebody could triangulate all of my locations.” And hopefully they become a little more sensitive. So again, it’s a lot of case studies and polling discussion, and they get to hear different views. They’ll hear some of the women say, “I feel very safe in Singapore, and I can walk around at night, and I don’t have the slightest worry.” And then you hear a woman talk about somebody putting a tracking device in her car and her suitcase, and how scary that was. And then they start to think about these issues, because they’re closer to home.
Patricia [00:31:34] Now, I wanted to talk about the implications of generative AI on student learning. And this probably isn’t as big of an issue in your classes, but I wanted to ask, how do you think the widespread availability of generative AI tools like ChatGPT is impacting the way students learn and engage with their coursework?
Maura Grossman [00:31:52] So for one of my courses, I used to have weekly essays. I decided they were too easy to do using generative AI, so last year and this year, I’m having the students keep a journal. It’s much harder to fake journal entries; I suppose you could run them through a tool after you’ve written them, but ChatGPT is not coming into my class and can’t say what was discussed there and how you felt about it. The thinking has to be their own. They can use it, especially if they’re non-native English speakers, to clean up their grammar and things like that, but they have to disclose it. Last year I started to feel that some of the written assignments were being written by generative AI, and I gave a one-week amnesty and said, “I suspect some of you are using this. If you used it, resubmit your paper, no questions asked. If you didn’t use it, that’s fine, and if I catch you, you’re in big trouble.” And about half the class resubmitted their papers, so that told me something. I worry that people will outsource their thinking, particularly in the area of ethics. That’s not something you can just say to ChatGPT: “Tell me what’s right.” That’s something where they have to get in touch with their own values, because what’s right for them may be very different from what’s right for me. And my goal isn’t to shove my view of right and wrong down their throats; it’s to help them get in touch with their values, so that if they are working for an employer whose values are not the same as their own, they either speak up or they find an employer they’re more simpatico with. I worry that this will deskill a whole generation in terms of writing skills. Computer scientists already don’t have great writing skills, because that’s not what they spend a lot of time doing, and I worry that they’ll lose that. I don’t have a problem, I suppose, if you’ve already learned to code, with having an AI clean up your code or point out where there might be an error; I can see benefits in that. But in ethics, you want people to think about it themselves.
Patricia [00:34:15] And how do you believe educators should address the potential for academic dishonesty with AI-generated text, and what measures should schools put in place to mitigate these risks?
Maura Grossman [00:34:27] I think faculty members have to be very clear at the beginning of the term about what their policies are and how they view these things. And professors differ. Some feel this stuff is coming, so we might as well teach people how to use it and teach them the strengths and weaknesses. Other people feel, no, this is not a good development. So I think you have to be very, very clear and try an honor system to the best you can. I know your company is in the business of detectors for text, and I know some of them have not been all that reliable. I have a lot of students who are not native English speakers, so that’s been a problem at Waterloo, because many of these tools mistake non-native English speakers’ writing for AI, and there’s no way to prove it one way or the other. You try to reduce the opportunity for cheating as much as you can, but ultimately, you can only provide people with learning opportunities, be as good a professor as you can, and try to make the assignments interesting so people want to do them. Ultimately, I can’t stop people who are going to cheat, and I have to sort of accept that.
Patricia [00:35:56] Yeah. I wanted to ask you about that. What do you think of AI text detection tools, and do you think there is a possibility that they will become a staple in educational institutions? And what do you think the challenges will be if these text detection tools are implemented in schools?
Maura Grossman [00:36:15] Well, you know, it’s very much like the plagiarism tools. I think they got better over time, and I know, for example, some of these plagiarism tools look for really unusual characteristics that just couldn’t appear by chance in two different students’ papers unless they in some way coordinated, because it’s such a unique way to phrase something. I think they’re helpful. I worry about the risk of accusing somebody who didn’t use these tools, and I worry that it often tends to be minority or disadvantaged people who get accused and hurt. So I’m nervous about using something like this until you are really comfortable that it is accurate and that it is not discriminatory. Otherwise, I think that’s very, very harmful. I want to see the technology develop to the point where there isn’t the level of false positives that there is now, because I really, really don’t want to accuse somebody of something they haven’t done. I’d rather miss a few people who’ve cheated; they are the ones who will suffer in the long run, because they haven’t learned. So it’s a challenging issue. I hope the tools get better, but ultimately, again, I’m not sure this is something we can 100% prevent.
Patricia [00:37:54] And in your opinion, what balance should educators strike between allowing students to explore AI tools and ensuring they develop their own ideas and skills independently?
Maura Grossman [00:38:05] So I think you have to sit down at the beginning of the term, as I did, or when you’re designing your syllabus, and figure out what skills you want people to leave this course with and how they get those skills. Some of those skills may be learning to use these tools in responsible ways to improve their work, or to be more creative, or to get ideas. So I think you design your course, your assignments and your assessments consistent with what you want people to walk away with; if you want them to be able to discover, identify and communicate ethical problems that they see, you design for that. You have to look at what your learning objectives are for the students and then look at your assignments and your assessments and make sure they line up in a coherent way. For some things it’s perfectly fine, and I think students do need to learn how to use these tools, because employers are going to want them to use these tools. You just have to decide whether that’s right in this particular course or in this particular part of the course. So I think what I’m going to do is encourage students to use these tools for the final project, but not to use them in writing their journals and their weekly papers.
Patricia [00:39:40] And do you think generative AI has the potential to equalize learning by offering students access to powerful resources? Or do you worry that it may widen the gap between students with differing levels of technological literacy?
Maura Grossman [00:39:55] So, it was funny. I was talking this morning with someone; we’re doing a webinar on mentoring, and we talked about whether AI mentoring tools will create two different systems of mentoring. Those who can easily find mentors like them who are willing to take them under their wing, go golfing and show them the ropes, say white men in law firms, versus people of color and women who find it harder to find role models and mentors, and who are going to end up with the AI. On the other hand, the AI is available 24/7 and can certainly help with skills development. I can practice interviewing a witness at two in the morning by giving the gen AI the personality of the witness I want to interview. I can do that, but I can’t do that the night before a deposition with my legal mentor, because I may not be able to get into his or her office for another two weeks, and my deposition’s coming up this week. So I see the scalability, the ease of access and the ability to personalize as useful, but I also worry that everybody needs to get access to the live human interaction as well. It can’t be that some people get the AI and other people get all the goodies.
Patricia [00:41:34] And how do you think generative AI tools should be regulated in academic settings to prevent misuse while still encouraging innovation and learning?
Maura Grossman [00:41:45] Same as I said before. I think the rules of the game have to be clear, and students have to have opportunities to explore, experiment and learn these tools. With the rules of the road, you need to know in which courses it’s okay and in which courses it’s not. Maybe for your paper-writing requirement, it’s permissible for checking your grammar after you have a complete draft, but it’s not permissible for writing your paragraphs, because you could end up with stuff that’s plagiarized. In the end, I think it is a very individual decision. For most faculty I know at Waterloo and at Osgoode, where I teach, it’s been left to faculty to decide; there’s no one rule about how it’s done. Some people have gone back to in-person closed-book exams. Other people have sort of thrown up their hands. I don’t know how schools are going to deal with this. Again, it goes back to figuring out what it is you’re trying to teach people and making sure they have the opportunities to learn that. But the more you bar it and say you absolutely can’t touch this, the more people are going to use it. It’s just human nature.
Patricia [00:43:07] And just one last question before I let you go. As AI technologies continue to evolve, what do you think are the most urgent ethical issues that we as a society need to address in the next few years?
Maura Grossman [00:43:19] One, obviously, is discrimination and lack of inclusiveness in these tools, which are often used against certain groups and in favor of others. I’m very worried about data getting concentrated in the hands of a few big tech companies that are unregulated. I worry about the use of biometrics and surveillance, and lots of us are just sort of throwing up our hands because it certainly is convenient at the airport to just stand in front of a scanner and have it say, “Go ahead.” But I’m not sure I want everybody being able to check where I’m going everywhere. I worry about agents. It’s one thing to say to gen AI, “Tell me about where I might go on my trip to Italy. What are the interesting places? What are the restaurants? Where might I want to stay?” It’s very different to say, “Here’s my credit card. Go make my vacation plans.” And I see us moving from conversational, knowledge-generating bots to agents, and that scares me a little bit, because then you start to get into the world where things have a mind of their own or maybe are misaligned. So maybe the bot decides that bringing some underage children with you and some opium might be a good idea and fun, and adds that to your vacation, and that certainly wouldn’t have been something you had planned. So, keeping control of this stuff. I worry about lethal autonomous weapons; I think that is an issue. And I’m hopeful in health care, for example; I think we can move from reactive health care to more proactive health care, and that we can solve problems we haven’t been able to solve before. But the problem is, up till now, many of these tools have not been used in the most ideal ways. So we have to make sure that we’re getting the benefits and avoiding the risks as much as possible, and leaving that in the hands of a few very wealthy white men in Silicon Valley is probably not a really great idea.
Patricia [00:45:44] Well, thank you so much, Maura. And is there anything you’d like to share, any message you’d like to share with our audience? Maybe advice? Anything at all before you go?
Maura Grossman [00:45:52] People should experiment. I think it’s useful to play around with these tools just to see what they can do and what they can’t. I’m not a huge user of generative AI myself; I like to write my own stuff because I think I write better. But that doesn’t mean there aren’t uses, or that you should stick your head in the sand, because this stuff is coming whether we like it or not, so you might as well learn about its strengths and weaknesses. I think people need to speak up when they see wrongful uses, and I think people need to stay really vigilant about deepfakes, especially at times of elections and other political disagreements. So I think we’re moving into a period that’s a little bit scary, but we have to keep abreast of the technology and talk more, be more transparent about how it’s being used, what are safe uses and what are not safe uses.
Patricia [00:46:54] Thank you so much, Maura, for gracing our podcast, for your time and the valuable insights you shared with us today. And of course, thank you to everyone who joined us on this enlightening episode of The AI Purity Podcast. We hope you’ve enjoyed uncovering the mysteries of AI-generated text and the cutting-edge solutions offered by AI Purity. Stay tuned for more in-depth discussions and exclusive insights into the world of artificial intelligence, text analysis and beyond. Don’t forget to visit our website, that’s www.ai-purity.com, and share this podcast to spread the word about the remarkable possibilities that AI Purity offers. Until next time, keep exploring, keep innovating, and keep unmasking the AI. Goodbye!