
An AI Reality Check for L&D Leadership

🕑 6 minute read | Jan 30, 2025 | By Bob Gulla, TTA Learning Consultant

We’re hearing about it constantly. AI will save the planet! AI will destroy us! AI will take all of our jobs. AI needs human assistance to function properly. So, which is it? How will AI impact our lives, personally and professionally? Will it hurt? Will it help?

The fact is AI is the biggest technological innovation since the internet, and some might say it is, or will be, the biggest leap in technology of our lifetimes. When the internet showed up back in the ’90s, we certainly weren’t sure about the impact it would have on our lives. Tech-forward progressives had an inkling, but the rest of us took a “wait and see” approach until we could figure it out. Sure enough, it grew to dominate and change our lives in ways we could never have imagined.

AI, like the internet—and throw in the personal computer and the smartphone—is traveling the same route, if a little more accelerated. It’s not as tangible as the internet. It’s not a site you can visit. But, like summer afternoon shadows, AI is beginning to creep over all of us.

AI in L&D

Sam Altman, CEO of OpenAI, the company behind ChatGPT, was asked what the best use case for AI would be moving forward, and he stated quite matter-of-factly that, at least in its current evolution, it is best suited for “learning and mentorship.”

That should be encouraging enough.

Yes, we’re still in the early days. Yes, there are already moments of enlightenment. But to avoid utter confusion, or at least a little disappointment, let’s not get ahead of ourselves. We know that AI can help with L&D, but first ask yourself, “How do we apply this in the most meaningful ways possible, ways that make sense to both Learning Developers and End Users?”

Finding meaningful ways to implement AI in L&D will take time and energy. It’s not going to be faster and cheaper right away. As humans, we need to agree on the best ways to filter AI through our learning processes, our training programs, and our development curricula.

That said, at this point it’s not an option but a necessity. AI and its cousin ML (machine learning, a subset of AI that learns from data to make decisions or predictions) are capable of personalizing content, predicting learning needs, and enhancing data analytics, all with speed and efficiency.

5 Foundational Components of AI in L&D

Now that we’ve taken a step back, and a few deep breaths, we can accept that jumping hastily into AI is a bad idea. So let’s talk about the foundational components of AI that you can and should put into place. These are the things leaders can count on to make a real impact on L&D, whether you’re a tech progressive or a Luddite.

The key here is thinking it through. Don’t get too far ahead of yourself. Before you start erecting a high-rise, you’ve got to lay the cornerstones. Less is definitely more, especially if you’re unsure of the best use to begin with. AI is only as powerful as the person or people using it.

Once you’ve become familiar with the potential of AI, take a step back to determine how to realize that potential. Only then can you start making decisions. Here are a few ways learning leaders can use AI without getting out over their skis.

  1. Personalization of Learning/Designing Customized Learning Paths: The most transformational capability of AI in education, training, and development is its ability to create customized learning paths. It’s already doing this in schools, and the principle is the same in the workplace. By analyzing performance data, identifying weaknesses, and predicting outcomes, AI can tailor learning to the person doing the learning. This personalization is a game-changer: it can pinpoint gaps and create personalized paths, optimizing educational design and development.
  2. Routine Tasks in L&D: Offloading time-consuming administrative responsibilities to AI is miraculous, enabling overburdened support staff to regain valuable hours. AI can automate routine tasks such as scheduling, collecting feedback, tracking progress, and issuing results. With all of this taken care of, your staff can use that time for more strategic undertakings and personal interactions.
  3. Scalable Solutions: Through an ongoing interpretation of results, AI enables the scalability of learning initiatives. This, in turn, allows organizations to efficiently deliver training to a large number of people across different locations. This wholesale distribution can also be localized to encompass different languages and cultural differences. The World Economic Forum says, “AI applications in [L&D] must be designed collaboratively and with equity in focus, addressing disparities across various demographics and ensuring accessibility for all.”
  4. Predictive/Learning Analytics: AI can provide deeper insights into the effectiveness of educational programs by examining patterns in data. This allows Learning Developers to identify which training methods and materials are most effective, creating a process for continuous improvement. AI can also predict future learning requirements, anticipating skills gaps and allowing organizations to plan upcoming initiatives.
  5. Feedback: With feedback, more is more. For humans, providing more feedback could be a full-time job. For machines, not so much. Just as you adjust your AI model to deliver learning components, you can train it to deliver feedback. Feedback, both positive and negative, motivates learners to push forward and encourages a deeper understanding of learning concepts. Related to that, AI can review competency assessments in real-time, eliminating that deadly silent gap that happens while you’re “waiting for results.” Through this real-time analysis, instructors can identify strengths and weaknesses in performance quickly, which allows for instant and targeted strategies.

Words of Caution

While developments in generative AI offer powerful new tools to leverage, L&D leaders must approach this technology with prudence and caution. Here are a few things to consider:

  • Education: Education should prioritize understanding AI’s potential risks as well as how it’s developed. According to the World Economic Forum, “These skills are critical for shaping future talent capable of ethically designing and developing AI tools that benefit economies and societies.”
  • Integration: It’s also important to remember that AI should be integrated into your processes, not layered over them. It won’t do things faster and more economically right away; that only comes once learning leaders figure out how best to integrate it. Basically, AI answers the question, “How do we do this specific function better and/or more efficiently?”
  • Over-Reliance: It’s easy to over-rely on AI for decisions and behaviors. These auto-generated decisions could in fact conflict with available, more contextually accurate information and work against one’s own interests. This can negatively affect outcomes as well as human-to-human cooperation, leading to even poorer results. Just be aware of the risk here.
  • Data Privacy: AI data privacy is a formidable challenge for any company interested in this technology. Sensitive information can easily find its way into large language models (LLMs), both through training data and through prompts at inference time. Once it seeps into an LLM, it can influence the content that is generated and expose that sensitive information.

It’s Still Early Days

Ideally, we will speak less about AI as it gets more fluidly integrated with what we’re doing. But now, it’s a shiny object, a new array of tools that common folks like us in the learning space are just beginning to discover. The fact is, we’ve moved into a boundaryless space, after spending most of our time being firmly penned in.

It reminds me of a news segment I saw recently about “free-range” chickens. Supposedly, the term refers to chickens that are caged for the early part of their lives but allowed to “range free” for the latter part. To be able to call them “free-range,” farmers would open the cage doors and allow them to leave. The weird thing is, they were deathly afraid of roaming. The only home they knew was the cage. So instead of running through the open doors and roaming free, they scampered back into their cages, where they were “comfortable.”

You know where I’m headed with this. Will we admire the openness and freedom of AI from afar, then scurry back into our cubicles? Or will we take full advantage of the boundaryless technology this innovation affords?
