Adaptive learning offers a variety of benefits to the higher education community. Long-term cost savings, increased accessibility, data gathering, and personalized instruction are driving factors for those who support adaptive learning. But expense, accessibility, and adaptability become relatively moot points for any product or service without demonstrable ROI. And with adaptive learning tools, the best measure of ROI is student outcomes.
For a long time, ROI-type measurements eluded proponents of adaptive learning because no one had yet extracted sufficiently meaningful data to show that student outcomes had empirically improved. Some data suggested that students in online courses performed as well as or better than their peers in traditional courses, but par-for-the-course doesn’t impress much when you’re looking for real returns.
Data gets bigger, better
As the technology improved, so did the data. New analyses indicated a stronger correlation between adaptive learning and student outcomes, but much of the data still relied on murky measurements and weak correlations. It was still too easy for skeptics to poke holes in the outcomes case because the technology couldn’t yet measure actual learning, only engagement.
That changed in the wake of the big data boom. Detailed analytics that measured learning instead of engagement emerged, and the metrics that adaptive learning advocates had suspected all along existed finally started to make their way to the surface. One study from the Open Learning Initiative in 2008 showed that students in an accelerated, adapted statistics course scored six times better than their counterparts in traditional courses, and in half the time.
Another study showed a 27% pass-rate increase, a 9% retention increase, and a 10% rise in final grade average. Yet another indicated a nearly 50% increase in students who received an A or a B using adaptive technology rather than traditional methods. There’s more data like this, and it’s likely there will be more to come.
Trust the data
What’s surprising isn’t the improved student outcomes; it’s the ongoing skepticism that adaptive learning has any significant impact on those outcomes. That adaptive learning would improve the educational experience seems almost intuitive, but even if it weren’t, there is still plenty of data showing drastic increases in passing students, student retention, and student scores.
Analytical data is extremely useful in both predicting and understanding behavior, and understanding is the first step toward improving any outcome. Just because we don’t have all the data doesn’t mean that we can’t use the data we do have.
All of this brings us to an important point: trust the data that’s available. The science offers valuable insight into the learning experience from the student’s perspective, which gives educators a better understanding of their students.
The good news is that the data is getting better. For example, the combination of participation and learning data generates a more nuanced view of student learning than anything previously available. Acrobatiq combines information about student activity with a proprietary statistical model that generates a learning estimate for each student and every learning outcome.
That means educators can
- see what students did or did not learn;
- quantify how well students have learned each skill;
- find meaningful patterns in student learning behaviors; and
- measure the effectiveness of their instructional and design choices.
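Acrobatiq’s model is proprietary, so the details aren’t public, but the general idea of turning raw activity logs into a per-skill learning estimate can be sketched simply. The code below is a hypothetical illustration, not Acrobatiq’s method: it estimates mastery of each skill from a student’s attempt history, weighting recent attempts more heavily so the estimate tracks learning over time. All names and the decay parameter are assumptions for the sketch.

```python
def learning_estimate(attempts, decay=0.7):
    """Estimate mastery of one skill from a chronological list of
    attempt results (True = correct). Recent attempts count more,
    via an exponentially decaying weight, so improvement over time
    raises the estimate. (Illustrative model, not Acrobatiq's.)"""
    if not attempts:
        return 0.0  # no evidence yet
    estimate, weight_sum, weight = 0.0, 0.0, 1.0
    for result in reversed(attempts):  # most recent attempt first
        estimate += weight * (1.0 if result else 0.0)
        weight_sum += weight
        weight *= decay
    return estimate / weight_sum

def student_report(activity):
    """activity: dict mapping skill name -> chronological attempt
    results. Returns a per-skill mastery estimate in [0, 1]."""
    return {skill: round(learning_estimate(results), 2)
            for skill, results in activity.items()}

# A student who improved on one skill and struggled on another:
report = student_report({
    "hypothesis_testing": [False, False, True, True, True],
    "confidence_intervals": [True, False, False],
})
# The first skill scores high (recent attempts correct),
# the second low (recent attempts incorrect).
```

Even a toy model like this shows why learning estimates are more informative than raw engagement counts: two students with identical activity levels can receive very different mastery estimates depending on how their performance trends.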
The outcomes case is the only case
In a way, the outcomes case is the only case for adaptive learning. If the whole point of education is to give others access to learning, knowledge, and understanding, then the outcome really is the most important measurement of success.
For years we’ve judged high schools and universities by their graduation and dropout rates, so it stands to reason that we would judge adaptive learning technology by the student outcomes it helps to produce. And that data regularly suggests a correlation between adaptive learning technology and improved student outcomes that is too significant to ignore.
J.G.C. Wise is a freelance writer specializing in higher education and healthcare practices.