When educational leaders (and leaders in many other professional domains) are looking for evidence-based research about what innovations are actually effective, they are likely to come across the work of SRI International before long. SRI (originally known as Stanford Research Institute at its founding in 1946) is an independent nonprofit that conducts sponsored research covering basic science, policy, and product implementation in technology, computer science, biosciences, and education.
SRI’s education division includes the Center for Technology in Learning, which has participated in or led many of the influential impact studies related to education technology and adaptive learning. And many of those were led by the center’s director, Barbara Means, who holds a Ph.D. in educational psychology and is the author or co-author of several books, including Learning Online: What Research Tells Us About Whether, When and How, from Routledge.
Means is an expert in the often-misunderstood design of impact studies, particularly how they can be used to understand interventions with a technology component and to improve access to higher learning. We were eager to ask her about those subjects, and she generously gave her time for the following interview, which has been condensed and edited for clarity.
Are impact studies becoming more important as the use of education technology grows?
Yes. Educators are getting more sophisticated in looking for the learning impacts of the tools they’re thinking about using. The blending of technology and human instructors is a major trend, and it’s going to be the major way learning happens. Given the large number of choices, we need some basis that people can use in deciding which technology tools to explore.
What are university leaders looking for in impact studies?
Higher education institutions are trying out different approaches before making larger decisions. They’re starting to institute processes that take learning impacts into account and continually measure them.
Some of the things they’re concerned with are cost savings. The idea of moving to open educational resources or other low-cost technology-based resources is attractive.
Being able to teach more students effectively is another. If the technology can take on the mechanical aspects of grading so an instructor can handle more students without losing learning effectiveness, that’s attractive.
Another thing that stands out is the desire to catalyze better instruction by the people teaching the course. Higher education leaders and innovative faculty have a sense that in a large lecture course you’re not reaching all students and want methods that are more student-centered. They see technology as a partner with the instructor to do that.
Are the existing impact studies giving any definite answers about what interventions work?
It’s important to recognize that the studies are about interventions and not about products. There are well-designed studies that are giving answers about interventions that work, but the interventions almost always involve more than just a particular technology product. You really need to look at all the supports for changing instructor practices and the nature of the outcome measures in the study. When we do that, we are seeing things that have positive results for student learning.
One is what the network of colleges working with the Carnegie Foundation for the Advancement of Teaching is doing. It’s been replacing the traditional developmental math sequence with a new approach that involves blended learning including courseware.
Is it possible for university leaders to skip to the end? If I’m designing a new online degree program, can I look at the impact studies and go shopping for the interventions that work?
No. You really need to start with understanding the outcomes you’re trying to promote. Are you trying to prepare people for a profession? For the next course in the series? Are you trying to pass muster with the accrediting agency? You have to be clear about your goals before you can design something that is likely to get you there.
The answer is not having a list of effective products or even effective interventions, because you’re likely to have to adapt to local circumstances. What we need to do is get clear about processes for designing instructional experiences and for keeping track of their impacts and improving them over time.
University leaders should really focus on building the right organizational practices that emphasize learning impacts and lead to improvement over time.
What other common misunderstandings are there about impact studies?
We find very often a tendency to compare course pass rates and declare victory, or, conversely, declare defeat because the rate is 4 percentage points lower. What often doesn’t happen is a consideration of what changed in those cohorts. Perhaps the admissions process changed, or the instructors have different grading policies. Maybe the alternative higher education options in the state got more or less expensive, so you have a more or less able group of students entering the course.
Part of what we’re trying to do in our work with the Gates Foundation is educate institutions about the need to get an apples-to-apples comparison so they can make sense of the data. We found that the characteristics of students, even in the same course, vary significantly from term to term. You really have to control for those characteristics.
How do you control for that? It doesn’t sound easy.
If you have an achievement or pre-test measure for the students being compared, that gets you a long way. It requires combining your course outcome data with other kinds of data the institution already has on students. If you build it into the process, it’s not that hard to do. People are just not aware of the need to do it.
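The kind of adjustment Means describes can be sketched in a few lines of code. The example below is a hypothetical illustration with synthetic data, not anything from SRI's studies: the variable names, effect sizes, and enrollment rule are all assumptions. It contrasts a naive difference in mean pass-course outcomes with an estimate that controls for a pre-test measure using ordinary least squares.

```python
import numpy as np

# Synthetic cohort: 400 students with a prior-achievement (pre-test) score.
rng = np.random.default_rng(0)
n = 400
pretest = rng.normal(70, 10, n)

# Stronger students tend to opt into the new course (self-selection),
# which is exactly what makes a raw comparison misleading.
treated = (pretest + rng.normal(0, 5, n) > 72).astype(float)

# True data-generating process: outcome driven mostly by prior achievement,
# plus a modest 3-point effect of the intervention.
outcome = 0.8 * pretest + 3.0 * treated + rng.normal(0, 5, n)

# Naive comparison: difference in mean outcomes, ignoring who enrolled.
naive_effect = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Adjusted comparison: least squares with the pre-test as a covariate.
X = np.column_stack([np.ones(n), pretest, treated])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted_effect = coef[2]  # coefficient on the treatment indicator

print(f"naive estimate:    {naive_effect:.2f}")
print(f"adjusted estimate: {adjusted_effect:.2f}")
```

Because the more able students chose the new course in this simulation, the naive comparison attributes their prior advantage to the intervention; the regression-adjusted estimate lands much closer to the true 3-point effect.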
What are some other principles for doing impact studies well?
You need to have the same outcome measure on both the students experiencing the new intervention and on the business-as-usual students. The easiest thing is to look at grades. The problem is that many things go into grades — class participation, the number of assignments the students choose to do. The assignments may be different in the different terms. So grades, although they’re put on a common metric, are not really the same outcome measures.
Ideally, you actually have some assessment of learning in both the intervention courses and in the comparison courses. That happens sometimes if it’s an occupationally oriented course and there is a professional accrediting exam all students take. It also happens where the department has a common final examination that all introductory biology or introductory psychology students take. In those cases, you’ve got the common outcome measure.
From what you said earlier about what university leaders are looking for, it sounds like it’s not just about which technology helps with immediate needs but about how technology can encourage a look at the fundamentals.
Taking a systemic approach in thinking about an intervention or an instructional system is useful, because it does evoke that kind of thinking: Wait a minute. What are the outcomes I really care about? What combination of experiences are going to help get the student to that place? How can I know as the course is progressing whether my students are making progress in this direction? And what can I do with that information?
As people start thinking about those things, it really does have profound implications for the way they teach.