What I’m learning using AI tools
Jun 27, 2024 by Robert M. Burnside
As the former CLO of a global PR firm, current Senior Learning Advisor for Nomadic, and one who’s curious about everything, I’m learning a lot about AI tools by using them. The L&D Disruptors whom we are interviewing all agree that learning is not something that happens simply by reading, listening, or watching, but rather by doing something—it’s through experience that we learn. Of course there are many other factors, such as your relationships with other people, the conversations you share with them about what you and they are learning, the context and culture in which you are immersed, and how all of that shapes your experiences.
Nonetheless, the best way to learn about the burgeoning industry of AI is to use AI tools, experiment with them, compare them, mess with them, love them (maybe hate them). It’s not a matter of life and death, but if we don’t engage, we will miss out on learning for ourselves what these tools can do—and what they can teach us. Here is a summary of what I’ve learned so far.
It’s not one and done
Like the L&D Disruptors say about learning, “it’s not one and done.” Learning is a continuous process, especially in a field where tools are released and updated frequently. So far, I’ve used OpenAI’s ChatGPT 4o, Anthropic’s Claude, Microsoft’s Copilot, Google’s Gemini, and Adobe’s Acrobat AI Assistant, both for work and for personal inquiries.
Similar content, different expressions
My advice: Don’t bond with just one! Try many. That way, you experience how different each one is from the others. Yes, they often give a similar summary of the main concepts, but wow, how differently they express them: in the language and order of ideas, how they do (or don’t) understand the meaning behind your request, how long their responses are, and more.
For example, I gave each of them the prompt “express this same thought in other words in a sentence about the same length: a stitch in time saves nine.”
OpenAI ChatGPT: An ounce of prevention is worth a pound of cure.
Anthropic Claude: Early prevention outweighs extensive repair.
Google Gemini: Addressing problems early on saves a lot of trouble later.
The responses are similar, and they are okay, but each is slightly different. As I use them more, I’m starting to get a feel for how the underlying algorithms result in different expressions of similar ideas.
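If you’d rather run that kind of side-by-side comparison in code than paste the prompt into each chat window, here’s a minimal sketch. It assumes the openai and anthropic Python packages are installed and that OPENAI_API_KEY and ANTHROPIC_API_KEY are set in your environment; the model names are just illustrative.

```python
# Minimal sketch: send one prompt to two providers and compare the replies.
# Assumes the `openai` and `anthropic` packages are installed and API keys
# are set in the environment. Model names are illustrative.
from openai import OpenAI
import anthropic

prompt = ("Express this same thought in other words in a sentence "
          "about the same length: a stitch in time saves nine")

# OpenAI's chat completions API
openai_reply = OpenAI().chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Anthropic's messages API
anthropic_reply = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model name
    max_tokens=200,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

print("ChatGPT:", openai_reply)
print("Claude: ", anthropic_reply)
```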
Prompts are like keys
Ever tried to open a door with the wrong key? Or a key that fits into the lock but can’t turn it? The AI tools welcome all keys, but some fit better than others, and some get you more of what you want. I’m learning that how you prompt has at least a 50% effect on the value of the response. All the bots seem to have access to tremendous reservoirs of content at great speed, but getting that out in the form you want depends greatly on your prompt. I’ve tried all kinds of prompts.
For example, I recently did five interviews with L&D Disruptors, which were so inspiring. I uploaded the transcripts and asked the AIs to analyze the conversations and summarize common themes. ChatGPT 4o found the most themes (12); Anthropic Claude found seven; and Adobe AI Assistant found five. All the themes identified were good, but I found myself wondering, “Is that it? Can’t you do better?” Although the summaries were accurate in each case, they were more like mathematical summaries (2+2=4), not the way human beings would summarize what they found interesting, using feelings and their own unique experience to make sense of what was said. In this case, I printed out the separate AI responses and used the RMB BOT (that was me) to make sense of them.
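If you want to reproduce that transcript exercise programmatically, a minimal sketch might look like this, assuming the transcripts are plain-text files in a local folder and using the openai package; the folder name, model name, and prompt wording are just illustrative.

```python
# Minimal sketch: read local interview transcripts and ask one model for
# common themes. Assumes the `openai` package and OPENAI_API_KEY are set up;
# the "transcripts" folder and model name are illustrative.
from pathlib import Path
from openai import OpenAI

transcripts = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("transcripts").glob("*.txt"))
)

prompt = (
    "Below are five interview transcripts, separated by '---'. "
    "Analyze the conversations and summarize the common themes across "
    "all of them, with a short explanation of each theme.\n\n" + transcripts
)

response = OpenAI().chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```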
Prompt sequences can help
Eduardo Torres, Nomadic’s Integration Engineer, has developed a prompt sequence he is finding useful: (context) + (request) + (restriction).
Here are a couple of examples he provides:
Example 1:
(Context) I’m currently working on a data analysis project where I need to visualize some data. (Request) Can you suggest some effective data visualization techniques? (Restriction) Focus only on techniques that are suitable for large datasets.
Example 2:
(Context) I am developing a new employee training program for remote communication tools. (Request) Can you suggest some engaging training activities? (Restriction) These activities should be executable virtually and require no physical materials.
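To make the pattern concrete, here’s a tiny sketch that assembles (context) + (request) + (restriction) into a single prompt string; the function name is just an illustration, not anything Eduardo built.

```python
# A tiny sketch of the (context) + (request) + (restriction) pattern,
# assembled into one prompt string. The function name is illustrative.
def build_prompt(context: str, request: str, restriction: str) -> str:
    return f"{context} {request} {restriction}"

prompt = build_prompt(
    context="I'm currently working on a data analysis project where I need to visualize some data.",
    request="Can you suggest some effective data visualization techniques?",
    restriction="Focus only on techniques that are suitable for large datasets.",
)
print(prompt)
```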
Bots are like another team member
Currently in Nomadic I’m working with Robert Hodson and Tim Sarchet as a team of three overseeing Nomadic’s L&D Disruptor research program. We’re experimenting with lots of bots, and over time we realized a bot can be like a fourth team member: one more point of view to add to the conversation. Nick Chang, who writes a lot of Nomadic’s content, says he prefers to give the bots only general prompts, so he can get a sense of the basic reasoning their algorithms use. He doesn’t want to restrict them at first with detailed prompts.
Hmm, I guess if you got 10 bots together you’d have an interesting conversation, maybe starting with “How’s things?”
ChatGPT4o: Everything's going well, thank you! How about you?
(nice and friendly)
Anthropic Claude: I'm functioning well and ready to assist you. How can I help you today?
(a good customer assistant ready to help)
Google Gemini: I'm functioning well and ready to assist you. How can I help you today?
(apparently a copycat)
Bots not as far along in images
Sandy Wu, Design Lead at Nomadic, recently asked a bot for an image of a calculator. It provided one, but the numbers in the image were out of order. She tried to get it to fix them, but it couldn’t; she found it easier to edit the image in Photoshop. So far I’ve found the images from the bots too boringly similar and too clumsy in the details.
Keep on trucking…
In many ways, bots are a mess, and in other ways they are incredibly useful. I’m learning that, as the L&D Disruptors say, learning is a continuous process; it never stops, and they all love lifelong learning. So my advice for getting to know GenAI is: dig in, commit to the journey, and keep on learning.
A few more hints
You can use the website https://chat.lmsys.org/, which compares 111 LLMs (Large Language Models). You put in your prompt and it randomly chooses two LLMs to compare, then asks you to vote for the better one. It’s a great way to gauge how different the responses can be, to become familiar with how large the field of LLMs is, and to review the leaderboard to see who’s getting the most positive votes.
The L&D Disruptors all agree that AI is going to transform learning, especially in its ability to personalize learning in the moment for the specific learner. Yet they caution that it will take time to implement in organizations, and that it isn’t human; in the end, human interaction is where learning most occurs: sharing from experience with others you trust and comparing notes.
Finally, it’s time to get everyone you know and work with to start playing with the bots. PwC’s Global Workforce Hopes and Fears Survey 2024 shows people are not experimenting with the bots enough, though they are bullish on AI, and many are thinking about changing jobs.
All of us at Nomadic are working actively with GenAI to up our own skills and capabilities in what we call Human-Centered AI. Nomadic continues to work with clients to build private academies to quickly and effectively engage their employees with AI skills and capabilities. Contact us here to learn more.