What does the CLO really want? The ideal learning data set
By Robert M. Burnside
In learning, as in business, most actionable insights first arise from qualitative data—discussion comments, free-flowing conversations, verbatims, and so on. If organizations aren’t consistently generating and then analyzing that qualitative data, then they’re missing opportunities to innovate.
All data is an abstraction from the texture of life. It simplifies, flattens, and renders infinite complexity comprehensible. But different kinds of data simplify the world in different ways, and they lead to different kinds of insights.
Because of its emphasis on scale, digital learning has long had a strong quantitative bias. Most enterprises capture little more than completion data in their LMSs (learn more in “Why can’t I find stats on e-learning completion rates?”); complex questions about the relative effectiveness of different forms of learning get reduced to quantitatively answerable questions about who’s signing up for and completing the programs on offer.
More significantly, though, learner engagement and understanding are reduced—at best—to quiz scores. And the vast universe of learner-generated knowledge, unique best practices, employee concerns, and business opportunities becomes invisible within learning programs—the environments best suited to sharing and discussing it.
One solution to the problem is to analyze quantitative data more creatively. That’s an important step, but it’s never enough, because analyzing qualitative data is the best way to learn new things about the world. Every unexpected thing a learner writes or says represents a potential opportunity, not only to improve the learning experience, but to improve the enterprise as a whole.
One example: Nomadic created an end-to-end Communications College for one of the world’s largest food and beverage companies, designed to enhance digital media competency and collaboration across the Comms organization. Learners from the CR (Consumer Relations) function—the people answering calls and social posts from (mostly) angry consumers—worked together with corporate comms and PR specialists to solve urgent business problems. After analyzing the discussion threads in some of the learning programs, the client company realized that its CR professionals could play a far more strategic listening and early-warning role than they previously had. Why? Because they were the people gathering some of the most urgent qualitative data—data that couldn’t be replicated with automated social listening software.
In most programs we run, we find that the most valuable, actionable insights come from the analysis of discussion comments. That’s not to say, of course, that quantitative analysis is unimportant (learn more in “The CIO’s Ideal Learning Data Set”); it’s usually a necessary complement to qualitative data. For all its depth and texture, qualitative data usually lacks the dimension of scale. When we generate an insight based on a learner’s discussion comment, for instance, we can use quantitative data—engagement metrics, quiz scores, polls, surveys, and the like—to test and validate that insight.
It’s the interplay between the qualitative and the quantitative that paints the most complete picture of any enterprise. So there are two simple questions we should all ask before launching a new digital learning initiative: Will the program generate actionable qualitative and quantitative data? And are we ready to leverage that data to drive positive change?
Learn how we gather qualitative data at Nomadic - Book a demo today!