Generative AI: Debunking These 5 Common Myths and Misconceptions

Martin van Blerk
4 min read · Feb 13, 2024

Artificial intelligence (AI) has been at the forefront of the conversation surrounding digital transformation over the last few years. The year 2023 proved a watershed for the field, as generative AI — technology that produces content based on user prompts — saw immense growth and widespread integration into businesses of all types. Programs like OpenAI’s ChatGPT, which launched in late 2022 and kicked off that breakout year, demonstrate AI’s great potential to mimic human intelligence and make a variety of tasks easier.

This is especially true in the business world, since generative AI can significantly improve employee productivity and satisfaction by handling otherwise mundane tasks such as drafting requests for proposals and procurement documents, and by quickly providing insight in key areas like compliance, supply chain management, and finance. Beyond promising performance enhancements, the technology is scalable, meaning it can be implemented companywide.

Because of its seemingly rapid rise in the zeitgeist, some detractors view generative AI as a threat to modern business practices, employment, and education. Others believe it’s too new and risky to be implemented at such a large scale. Let’s dive further into these and other misconceptions.

Misconception #1: AI will replace human employees.

The notion that generative AI will eventually replace human employees is one of the most common misconceptions about the technology. However, the truth is generative AI is a collaborative tool that requires prompts from humans to create content, provide easier access to data, and troubleshoot problems. It cannot think or produce original material on its own; it’s a complementary technology designed to free workers up to spend more time on strategy and other important aspects of their job.

“Some of the more boring parts of the job may disappear,” notes Oxford economist Carl Benedikt Frey, speaking to Business Insider. “We may begin focusing more on generating the right ideas, asking the right questions, things that are more interesting.”

Generative AI has the ability to dramatically change how work is done, but not at the expense of human workers. Rather, it can be used to help humans learn new skills and improve their productivity. More than half of the 54,000 respondents in PwC’s Global Workforce Hopes and Fears Survey 2023 said they expected AI to positively impact their careers within the next five years.

Misconception #2: It’s too new and risky.

Generative AI has existed for years but is only recently becoming accessible and powerful enough to interest most organizations. Tech companies like Microsoft and Google employ generative AI throughout their respective organizations, while many Fortune 500 business leaders are exploring how to best utilize the technology. In an April 2023 Ernst & Young poll of 254 technology leaders, 80 percent of respondents said they anticipate increasing AI investments within the next year. PwC’s US firm, for instance, plans to spend $1 billion to optimize its AI capabilities, which includes the development of a generative AI factory and the ChatPwC assistant.

Generative AI does pose risks for legal violations and data breaches, but these can be mitigated with a robust AI framework that covers strategy, control, and responsible day-to-day practices. Building these safeguards and practices into generative AI use can help address cybersecurity, performance, and privacy risks.

Misconception #3: It’s only for data professionals.

In the early stages of its development, generative AI was primarily used by data professionals and others with advanced technical expertise. However, the advent of user-friendly platforms in recent years has made the technology accessible to people with varying degrees of technical savvy. While AI experts are still needed to create the sophisticated algorithms behind generative AI models, programs like ChatGPT are approachable for people with minimal AI experience.

With ChatGPT, users simply type in a text prompt to receive an answer or corresponding content. In an office setting, this can be used to write emails, create code, and answer queries. In human resources, professionals are using AI to schedule interviews, assess resumes, and perform employee evaluations. There’s skill involved in writing an effective, specific prompt that produces higher-quality responses, but a little practice and training can get users up to speed.
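To make that concrete, here is a minimal sketch of what a prompt-driven task might look like behind the scenes, assuming the OpenAI Python SDK (the openai package, version 1 or later) and an API key stored in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative placeholders rather than a prescription.

```python
# Minimal sketch: send a text prompt and print the drafted reply.
# Assumes openai>=1.0 is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Draft a short, polite email to a supplier asking for an updated "
    "delivery schedule for our Q3 order."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The same pattern, a plain-language prompt in and generated text out, underlies the email drafting, coding help, and HR use cases above; only the wording of the prompt changes.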

Misconception #4: AI will negatively affect education and learning.

It’s unfortunately true that there have been many instances of students using ChatGPT and other generative AI tools to write papers. However, higher learning institutions are responding by implementing improved plagiarism detection tools and promoting ethical use of the technology. Used ethically and appropriately, generative AI can help students inform and complement their own critical thinking and enhance their overall educational experience.

Misconception #5: AI is unbiased.

While the misconceptions above highlight the problems people have with AI, many people incorrectly believe the technology’s biggest advantage is that it is neutral and free of human bias. This misconception is popular in HR, where AI is used to sort through resumes and find suitable candidates based on keywords and other criteria. The assumption is that AI won’t take into consideration gender, race, or other personal information, whereas some humans may have implicit biases.

However, because AI is constantly learning and evolving, some models can perpetuate biases based on historical data. That means it’s incumbent upon managers and executives to thoroughly review all AI-based employment decisions and implement ethical AI practices to ensure transparency and accountability.
