robots and the potential dangers of AI

Dangers Of AI You’ve Got To Know Today

Feb 28, 2025 | Uncategorized

AI Purity discusses the dangers of AI and why, now more than ever, as artificial intelligence becomes increasingly ingrained in society, we should start paying attention.

On episode 8 of The AI Purity Podcast, cognitive scientist Dr. Jim Davies spoke about how AI is the fastest-adopted technology, and true enough, in recent years we’ve only seen more AI tools become available and more use cases being integrated into various industries. Whether it’s healthcare, education, or media, artificial intelligence is becoming a prevalent force in today’s society.

There’s no doubt about the benefits of AI, but because AI has been adopted at such a rapid pace, it is equally important to educate its users about the potential dangers and how to use it ethically and responsibly. Whether or not a majority of the population agrees, artificial intelligence is having a huge impact on society, and it’s better to address these issues early so we can begin to find better solutions and ways to utilize this technology.

tech art on a phone’s screensaver

Dangers of AI (Artificial Intelligence)

We can begin to tread a more responsible path with AI once we understand its potential threats and the risks of misuse. Here are some artificial intelligence problems and challenges that we should be aware of.

Spread of Misinformation

If you’re a user of ChatGPT, Claude, or other AI chatbots that generate text, you might want to think twice before taking anything they say at face value. One of the biggest dangers of AI is its ability to spread misinformation. These AI models can often “hallucinate,” or in other words, generate content that cannot be backed up by facts or evidence. These chatbots rarely reveal the sources of their information, leaving users with the responsibility to fact-check on their own. The issue is that many people take ChatGPT and similar tools’ responses as truth, rarely doing their own research in the process. The risk of misinformation spreading therefore becomes a big problem.

An overreliance on tools like these, which don’t always generate correct responses, can pose a bigger problem down the line, especially if people start using them to seek health advice or as a proxy for real human connection.

As with any use case for AI, stronger human supervision is needed when generating content with AI. Using ChatGPT and similar tools isn’t inherently bad, but as a user, you have the responsibility to double-check their responses and find secondary sources to back up the information you receive.

Deepfakes

In an article published by Stanford University, “Dangers of Deepfake: What To Watch For,” deepfakes were defined as a form of “hyper-realistic media,” usually video, image, or audio. In layman’s terms, this technology can take a person’s likeness and transform it, allowing content to be created that convincingly impersonates them.

Examples of deepfakes that have spread around the web include the voices of artists and musicians singing songs they’ve never sung, images of women’s faces plastered onto bodies that aren’t their own, and much more.

According to the Stanford article, this technology makes it easy to convincingly impersonate anyone, and in the hands of cybercriminals, it could be leveraged for scams and identity theft. Deepfakes are an especially scary danger of AI because of the chance that they will be used against specific people. If you’re a frequent user of social media or have audio, video, and images of yourself on the web, it would be easy for someone to create deepfakes of you, and since there are hardly any regulations or laws surrounding this technology, it will be hard to get fake videos, images, and audio taken down once they’re out there.

AI Scraping

AI scraping is the process of “scraping” data from websites, social media platforms, or other sources. It is usually done to train AI models: scrapers or crawlers browse web pages to extract text, images, and videos as long as the content is publicly available.
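To make that process concrete, here is a minimal, hypothetical sketch of what a crawler does when it collects publicly available text from a single page. It assumes the Python requests and BeautifulSoup libraries, and the URL is just a placeholder; real training-data crawlers operate at a far larger scale.

```python
# Minimal sketch of a scraper collecting publicly available text.
# Assumes the `requests` and `beautifulsoup4` packages are installed;
# the URL below is a placeholder, not an actual data source.
import requests
from bs4 import BeautifulSoup

def scrape_page_text(url: str) -> str:
    """Download a public web page and return its visible text."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Remove scripts and styles so only human-readable text remains.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return soup.get_text(separator=" ", strip=True)

if __name__ == "__main__":
    text = scrape_page_text("https://example.com")
    print(text[:500])  # in practice, this text would be added to a training corpus
```

A crawler like this can be pointed at thousands of pages at once, which is why the question of whether the owners of that content ever consented becomes so important.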

One might say there’s nothing wrong with this process: if online data is only being used to train AI models, it can’t be that bad, especially if the end goal is to make AI tools better.

However, the problem lies in the consent of the owners of that data. For example, authors risk having their works scraped from the internet by crawlers to train AI models without their permission. The same goes for artists who upload their works online. If an AI model is trained on certain data, it will be able to regurgitate that data, whether in the form of text, audio, video, or imagery, which is akin to plagiarism. This opens up copyright issues and violations where the rightful owners of the data can raise legal disputes.

The risk of AI models being trained on information scraped from all around the web also raises the problem of further bias and misinformation spreading. If an AI is trained on biased material or incorrect data, those flaws will be reinforced in the content it generates.


An article published by The Arts Law Centre of Australia describes artists fighting back against AI scrapers. Artists are pushing back against the rising epidemic of unauthorized data scraping and artistic exploitation by the proprietors of AI tools, arguing that their brands and reputations are harmed when their art styles are copied without credit or compensation.

AI Model Bias & Discrimination

AI models are trained on human data, which means they can also perpetuate the human biases present in our society. As discussed in the section on AI scraping, some AI developers scour the web for content to train their models, but without supervision and the meticulous sifting of data, the bias present in scraped data eventually becomes part of the AI’s algorithm.

The risk is that AI reinforces already existing inequalities in society. If an AI model is trained on biased, racist, and sexist data, it will only widen the gap for marginalized groups such as people of color, women, and members of the LGBTQ community.

On an episode of the AI Purity Podcast, Dr. Vered Shwartz discussed what algorithmic bias is and how to eliminate it. She provided an example of algorithmic bias affecting underrepresented communities, citing Amazon, which used a CV filtering system that discriminated against women. The AI model was trained on data that reflected bias against women, so resumes sent in by women were treated as “out-of-distribution” and were less likely to be selected.
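As a hedged, toy illustration of the mechanism described above (not Amazon’s actual system), the sketch below trains a simple classifier on synthetic, historically biased hiring data and then scores two equally experienced candidates who differ only in group membership. It assumes the numpy and scikit-learn Python libraries.

```python
# Toy illustration of algorithmic bias: a model trained on historically
# biased hiring decisions learns to penalize group membership itself.
# This is synthetic data, not any company's real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
experience = rng.uniform(0, 10, n)          # years of experience
group = rng.integers(0, 2, n)               # 1 = underrepresented group
# Historical labels reflect human bias: equally qualified candidates from
# the underrepresented group were hired far less often.
hired = (experience > 5) & ((group == 0) | (rng.random(n) < 0.3))

model = LogisticRegression().fit(np.column_stack([experience, group]), hired)

# Two candidates with identical experience, differing only in group membership.
candidates = [[7.0, 0], [7.0, 1]]
print(model.predict_proba(candidates)[:, 1])  # the second probability is much lower
```

Nothing in this code tells the model to discriminate; it simply reproduces the pattern baked into its training data, which is exactly why careful curation and auditing of that data matters.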

Potential Job Displacement

There have been a lot of headlines in the news that talk about “AI taking over people’s jobs,” and it is a very real likelihood. Corporations are finding ways to automate systems, thus replacing human jobs, but at what cost? Is automation truly better, and will these cut costs really enhance operational efficiency overall? The most common sectors affected by the potential of AI replacing humans in the workplace include customer service, administrative tasks, content writing, marketing, travel and tourism, and even teaching.

With jobs becoming more automated because of AI, there could be massive layoffs and job displacement that eventually lead to economic disruption. With unemployment rising across a large portion of the population, widespread AI adoption and automation could destabilize job markets and lead to financial insecurity for many people.

Having certain jobs replaced by AI also means we lose human touch and creativity. AI models lack empathy and nuanced problem-solving, something many customers already complain about. Not everyone wants to be met with a robot when dealing with an immediate issue, and there’s nothing quite like human skill to aid in troubleshooting. There are simply complex, human-centered decisions that AI is unable to make, and this will lead to flawed outcomes and even more problems down the line.

Promoting Academic Dishonesty

One of the biggest issues that has arisen since the widespread adoption of generative AI tools like ChatGPT is the prevalence of academic dishonesty. That is the very reason AI Purity was born: to aid both students and educators in the age of AI.

Educators are faced with an influx of students passing off AI-generated work as their own and using AI tools to write their essays and assignments for them. The risks of artificial intelligence for students could have lasting effects. They risk losing the ability to conduct research, write, and think critically for themselves.

AI tools aren’t inherently bad. In fact, they are a great way to work efficiently when used right. AI tools can expedite research because they can summarize long pieces of text, but they’re not meant to do all the work for you.

The misuse of generative AI tools in education poses such a huge threat that institutions are considering creating regulations for their use, and some professors even suggest having their students take exams and write their essays manually with pen and paper.

an orange robot surveilling

The Future of Artificial Intelligence

Why do we need artificial intelligence? Well, for the most part, it does help improve our daily lives by automating tasks and solving complex problems that would otherwise take years to solve.

For example, on an episode of the AI Purity Podcast, Konnex.AI founder David Wild talked about the strides AI has made in healthcare, and in the pharmaceutical industry in particular. According to David Wild, AI helps speed up certain processes, like bringing important drugs to market, making them readily available for those who truly need them.

The future of artificial intelligence, when it is used correctly, looks bright for everyone, but as many experts say, it will take a team effort to get there. That means having more diverse teams responsible for training AI so we don’t risk creating biased algorithms, and it means users carrying the responsibility of using AI technology for good.

The negative effects of artificial intelligence aren’t as widely spoken about, and we need to have these discussions early on so we can mitigate the dangers of AI and make sure we’re building societies that benefit from this tool rather than being exploited by it.

Nidia Dias on visualizing AI project by Google DeepMind

Choose AI Purity

Whether you’re an educator, a student, a web developer, a writer, or a creative, you can benefit from AI Purity’s fast, reliable, and accurate AI text detector.

At AI Purity, our aim is to facilitate the responsible and ethical use of AI tools by providing our users with a platform they can trust to detect traces of AI-generated content.

Don’t let yourself be misled by misinformation, and make sure you always double-check what you’re reading online by using an AI text detector to help verify the origins of the content you consume.
