Why your entire workforce should become AI experts

Overview

AI isn’t just for data scientists anymore. In this episode of Today in Tech, host Keith Shaw talks with Red Hat Distinguished Engineer Mo Duffy about how businesses can empower everyone — from HR to marketing to customer support — to become fluent in AI.

💡 Discover:
* Why AI fluency is becoming a must-have skill like internet literacy
* A four-layer AI skills pyramid: from prompt engineers to AI researchers
* The role of open source in democratizing AI expertise
* How vibe coding and AI assistants are lowering the barrier to building tools
* Real-world success stories and ethical considerations in AI adoption

If you're a leader trying to upskill your workforce, or an employee looking to grow your AI capabilities, this conversation will give you the blueprint for success in an AI-first workplace.

#AIExpertise #MoDuffy #RedHat #GenerativeAI #OpenSourceAI #PromptEngineering #TodayInTech #KeithShaw #EnterpriseAI #AITraining #FutureOfWork #aiinbusiness

Transcript

Keith Shaw: AI is everywhere right now — from the boardroom to every department in the modern business. But while the technology keeps advancing, many companies are now asking: how do we make everyone in the organization an AI expert?

Not just data scientists or the IT group, but marketers, HR teams, support staff — everyone. Is this even realistic? We’re going to dive into this issue on today’s episode of Today in Tech. Hi everybody. Welcome to Today in Tech. I’m Keith Shaw.

Joining me on the show today is Mo Duffy. She is a Distinguished Engineer at Red Hat and a leading voice in open source innovation and AI enablement. Welcome to the show, Mo.

Mo Duffy: Hi Keith, good to be here.

Keith: All right, so let’s jump right into it.

What are some of the main drivers you’re seeing behind the idea of making employees into AI experts? It feels like there are probably a lot of different reasons companies are doing this. What are you seeing in this space?

Mo: I’d say there are two main categories of reasons. The first is that it’s becoming a new form of fluency.

I’m old enough to remember when search engines first became a thing and we had to train the workforce on how they worked — what you could do, what you couldn’t do, basic Boolean logic, things like that.

It feels like AI is becoming a new kind of literacy — ubiquitous, but not everyone has picked it up yet. And this spans across all business roles. The second category is about AI expertise being rare and hard to find.

The market for talent is so competitive that many companies find it more practical to grow that talent in-house. If you can even get one qualified AI expert internally, that person can help evaluate more talent or solutions.

But if you don’t have anyone, how can you assess whether a candidate really knows what they’re doing?

Keith: We’re definitely hearing a lot about the skills gap. Companies are poaching AI experts from each other, which just reinforces the need. So, is it more practical for companies to say, “Let’s train the people we already have”?

Mo: Exactly. Everything is moving so fast.

You might look over and see your competitors adopting AI while you’re sitting back thinking, “We don’t have the budget or expertise.” Even if you just start training your current staff to get closer to that level, it’s better than doing nothing.

And given how fast the field changes — literally weekly — it makes sense to start somewhere and evolve from there.

Keith: Let’s talk about the definition of an “AI expert.” When you use that term, what skill sets are you thinking of? It could be everything from prompt engineering to full-on data science and model development.

Mo: I like to think of it as a pyramid with three or maybe four layers. At the base, you have prompt engineering — just becoming comfortable using AI tools, understanding what different models can do, and incorporating them into your daily work.

It’s like how we all had to learn to use search engines 25 or 30 years ago.

Then you have the middle layer, which is more technical. These are your traditional software developers who build applications and are learning how to integrate AI into what they’re already doing. For example, connecting an app to a hosted model or running a local model to enhance their app’s capabilities.

They may be building agentic workflows, retrieval-augmented generation (RAG) systems, or using APIs to add AI functionality.
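To make that middle layer concrete, here is a minimal sketch of a retrieval-augmented generation (RAG) flow. The retrieval step is a deliberately naive word-overlap ranking, and the `generate` function is a hypothetical stand-in for whatever hosted or local model API an app would actually call — the point is only the shape of the pipeline: retrieve relevant documents, build a prompt around them, then ask the model.

```python
# Minimal RAG sketch: retrieve relevant docs, then hand them to a model.
# `generate` is a placeholder for a real hosted or local model call.

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Combine retrieved context and the user question into one prompt."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

def generate(prompt: str) -> str:
    # Placeholder: a real app would send the prompt to a model API here.
    return f"[model response to {len(prompt)} chars of prompt]"

docs = [
    "Granite models can be downloaded and run locally.",
    "RAG systems retrieve documents before generating an answer.",
    "Agentic workflows chain model calls with tool use.",
]
question = "How do RAG systems generate an answer?"
context = retrieve(question, docs)
print(generate(build_prompt(question, context)))
```

Production RAG systems replace the overlap score with vector embeddings and a real model endpoint, but the retrieve-then-generate structure is the same.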

Above that, you have the model customization experts — data scientists and data engineers who can work with a company’s proprietary data to fine-tune models or create new ones tailored to the business’s needs.

At the very top are the AI researchers — the ones creating new techniques and pushing the boundaries of what’s possible. They’re the ones keeping up with the rapid pace of innovation and helping translate it into practical, consumable technology.

Keith: Where do you think most companies are right now in that pyramid?

Mo: Most are chasing the top, but in reality, a lot of value comes from just moving people up from the base to the middle.

You don’t have to be a researcher to get a lot of benefit from AI. Just learning how to use agents effectively can make a huge difference.

Keith: I’d probably put myself in that base level. I’ve played around with enough tools to be a decent prompt engineer, but maybe the next level for me is learning to work with agents.

Mo: That’s totally fine!

It’s like with Linux — using it provides value, even if you’re not writing your own kernel modules. You don’t need to be building your own AI models to benefit from them. Just understanding how to use and adapt open source tools can take you a long way.

Mo: What’s exciting about this AI wave is that you don’t need deep coding expertise to get value from it. As long as you have strong domain expertise, you can build useful tools. It’s like remodeling your bathroom — you don’t need to be a contractor building houses from scratch.

With the right tools and guidance, anyone can experiment and solve real problems. At Red Hat, we’re seeing departments that traditionally didn’t build anything start creating their own solutions to become more efficient. They’re applying their domain knowledge in new ways.

Keith: That makes sense — Red Hat is a tech company. I imagine your teams jumped into AI quickly. But what about other companies? Are you still seeing resistance?

Mo: Yes, absolutely. And honestly, I think much of the hesitation is valid.

For example, in the open source world, there's a big question around licensing. When a model generates code, how do you license that output? If the model was trained on a mix of code with different licenses, what does that mean for what it produces?

Then there are ethical concerns — where did the training data come from? Was it used with permission? Some people are asking for guardrails, more transparency, and control over these systems.

There’s also the healthy skepticism — some folks are asking, “Is this just another hype cycle like blockchain or NFTs?” But as more people have their “aha” moment with AI, they realize it’s different — there’s real value here.

Keith: What was your own “aha” moment?

Mo: I come from a UX engineering background — not the world’s best programmer. I used to lean on colleagues for help, asking, “How do I run this command again?” One day, I just asked an AI model instead. It made me feel more self-sufficient.

I didn’t have to bother anyone, and I got answers quickly. That autonomy was empowering.

Keith: Let’s talk about open source AI. Are you seeing more companies embrace it as a way to help employees build expertise — rather than just going through corporate training programs?

Mo: Definitely. There’s a lot of excitement around open source–licensed models like LLaMA, Mistral, and Granite.

You can download them, run them locally, and experiment without worrying about token limits or usage costs. While they may not match trillion-parameter models, they’re good enough to learn and build with. Open source frameworks are also evolving quickly. Traditional training programs become outdated fast.

But when you follow open projects, you're always at the cutting edge.

There’s also confusion around the term “open source AI.” It involves three key artifacts:
* Data – What data was used to pre-train or fine-tune the model? Was it properly licensed?
* Code – This includes model architecture, inference logic, and surrounding application code. This is the most understood area, thanks to established software licenses.
* Parameters – These are the weights of the model, essentially a fingerprint of the data. Licensing here is tricky, since the weights are derived from potentially mixed-license datasets.

The Open Source Initiative (OSI) is working on a definition for open source AI (OSAID), but not everyone agrees with it. Some think it's too restrictive; others think it's too loose. The Linux Foundation has a more flexible framework called the Model Openness Framework, which recognizes a spectrum of openness.

Keith: Would you say open models help more with ethical and transparency concerns?

Mo: Yes, I think so. Transparency — just knowing what went into the model — is like nutrition labels on food. Maybe they can’t list every dataset, but even a general description helps you make better decisions.

As creators, we want to know how our work is being used. Supporting open, transparent models is one way to push the industry in the right direction.

Keith: So what gets you most excited about the idea of democratizing AI expertise?

Mo: This is a bit of a dream for me. I came to open source from a UX and design background. I wasn’t a strong coder, but I wanted to contribute. A lot of people had amazing ideas but couldn’t realize them because they didn’t have the coding skills.

With AI, that barrier is coming down. It’s like a new abstraction layer on top of programming. People can describe what they want, use AI to generate something — even if it’s not production-grade — and then take it to developers with a working prototype. That’s empowering.

I’d love to see a renaissance in open source software driven by this accessibility.

Keith: What’s holding people back from reaching that point?

Mo: Fear. Fear of job loss, fear of security issues, hallucinations, legal risks. Not everyone’s had a chance to explore these tools hands-on, so there’s still uncertainty. And depending on the industry — tech vs. law firms or hospitals — the tolerance for risk varies a lot.

That said, I always encourage people to try open source models on their own systems. Download one to your personal laptop and experiment. Gain fluency while waiting for your company to catch up.

Just don’t go rogue on company systems!

Keith: Do you think AI fluency will eventually become a baseline skill, like using the internet?

Mo: Yes, I think it’ll become boring in the best way. Everyone will be fluent in how to use these tools — like we all use search engines or scroll social media.

We won’t all be AI engineers, but interacting with AI will be second nature.

Keith: Should business leaders also gain fluency, beyond just funding AI for the IT department?

Mo: One hundred percent. If I were spending money on a tech investment, I’d want firsthand experience.

Even executives can benefit from summarization tools — just to stay informed and make faster decisions.

Keith: You mentioned “vibe coding” earlier. What are your thoughts on that trend?

Mo: It’s a bit hyped, but there’s value. I define vibe coding as letting go of control and trusting the AI assistant — letting it write the code while you review the results.

It’s useful for quick prototyping or early-stage user testing. The problem comes when people treat vibe-coded apps as production-ready without applying guardrails, testing, or security protocols. You still need expertise and processes in place.

Keith: Should people who’ve never coded try vibe coding? Or should they learn some basics first?

Mo: They should definitely learn the basics. This isn’t the death of expertise. Anyone can buy art supplies, but that doesn’t make them a painter.

Having foundational knowledge makes the results better and more satisfying. So yes — play and explore, but also invest in learning.

Keith: Looking ahead to next year — or five years from now — what does the AI-literate workforce look like?

Mo: I’d hope everyone is at least at the base level of the pyramid — comfortable using tools.

Most developers would be in the middle layer, integrating AI into their work. More importantly, I’d love to see AI used to eliminate toil — the repetitive tasks that drain time. Think of the way meetings can now be auto-transcribed and summarized, with follow-up emails and tasks generated automatically.

That’s where AI should shine.

Let people “eat the marshmallows,” as I like to say — focus on the fun, meaningful parts of their work while AI takes care of the rest.

Keith: (Laughs) We used to call the cereal bits the “Luckys.” You can’t just leave a bowl of Luckys behind!

Mo, thank you again for this fantastic conversation. I’d love to have you back next year to see how things have progressed.

Mo: Absolutely. Looking forward to it!

Keith: All right, that’s all the time we have for today’s episode.

Be sure to like the video, subscribe to the channel, and drop your thoughts in the comments below. Join us every week for new episodes of Today in Tech. I’m Keith Shaw — thanks for watching.