One common criticism of adaptive learning software is that it relies too heavily on machine-scored formative and summative assessment types like multiple choice, true/false, or labeling activities. Detractors point out that these assessment types tend to measure what students have memorized rather than what students can do. What's more, they can be boring for students to complete, and they lack the deep insights into learning performance that more authentic assessment types, such as project-based activities, can provide.
So what if we could add these more challenging and robust assessment types into the adaptive learning system? What if the analytic model that underlies the adaptive experience could recognize and incorporate scores from both machine-graded and human-graded activities? Well, now we can.
Recently, Acrobatiq introduced rubric-based assignments as a new feature of Smart Author — Acrobatiq's cloud-based adaptive learning curriculum authoring environment. Course creators can now develop and assign activities that are graded by an instructor rather than by the system. What's more, human-graded activity scores can be combined with machine-graded scores to produce a much more nuanced view of what a learner knows.
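To make the idea of combining the two score streams concrete, here is a minimal sketch of one way such a blend could work. This is purely illustrative: Acrobatiq's actual analytic model is not described in this post, and the function name, weighting scheme, and 60/40 split below are assumptions, not the platform's real algorithm.

```python
# Hypothetical sketch: blending machine-graded and human-graded scores
# into a single mastery estimate. The weights are illustrative only and
# do not reflect Acrobatiq's actual analytics.

def blended_mastery(machine_scores, human_scores, human_weight=0.6):
    """Average each score stream (0.0-1.0), then blend them.

    Human-graded work is weighted more heavily here on the assumption
    that authentic assessments carry richer evidence of mastery.
    """
    machine_avg = sum(machine_scores) / len(machine_scores)
    human_avg = sum(human_scores) / len(human_scores)
    return human_weight * human_avg + (1 - human_weight) * machine_avg

# Two auto-graded quizzes (0.9, 0.7) plus one instructor-graded paper (0.5):
estimate = blended_mastery([0.9, 0.7], [0.5])  # 0.6*0.5 + 0.4*0.8 = 0.62
```

A real adaptive model would be considerably more sophisticated (tracking individual learning objectives over time), but the core idea is the same: both kinds of evidence feed one estimate of what the learner knows.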
How does it work?
This feature allows students to be creators: they might write a paper, make a video or create an image or presentation. They can then submit that artifact to their instructors for grading.
Within the Smart Author interface, the feature includes two parts: a rubric (for those doing the grading) and a hand-in assignment page (for the students to complete). The rubric is created by the course author and includes fields to insert instructions on how to complete the assignment, criteria for grading assignments, and a section for providing feedback. The rubric feature is extremely flexible and can be applied to almost any type of assignment. Content authors decide how many criteria there will be on a given rubric and how those criteria will be weighted.
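The authoring flow above — a set of criteria, each with an author-chosen weight — can be sketched as a simple weighted score. The criterion names, weights, and 0-to-1 rating scale below are hypothetical, not Acrobatiq's actual data model.

```python
# A minimal sketch of weighted rubric scoring. Criterion names, weights,
# and the rating scale are hypothetical examples, not Smart Author's
# real schema.

def rubric_score(criteria, ratings):
    """Combine per-criterion ratings (0.0-1.0) into one weighted score.

    criteria: list of (name, weight) pairs chosen by the course author.
    ratings:  dict mapping each criterion name to the grader's rating.
    """
    total_weight = sum(weight for _, weight in criteria)
    weighted = sum(weight * ratings[name] for name, weight in criteria)
    return weighted / total_weight  # normalized back to 0.0-1.0

criteria = [("Content", 3), ("Grammar", 1), ("Terminology", 1)]
ratings = {"Content": 0.8, "Grammar": 1.0, "Terminology": 0.5}
score = rubric_score(criteria, ratings)  # (3*0.8 + 1*1.0 + 1*0.5) / 5 = 0.78
```

Because the criteria list is just data, the same structure works for a paper, a video, or a presentation — which is what makes a rubric like this so flexible.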
The hand-in assignment page where students experience the assignment includes instructions for completing the assignment and a space for students to upload their work. Currently, the Acrobatiq platform accepts any kind of file the student can upload, including documents, presentations, images and URLs.
Once the activity has been assigned to students, instructors access the project through a grading interface that mirrors the rubric. As the instructor reviews students’ work, they complete the rubric row by row, evaluating the students on each skill and learning outcome. This interface also allows instructors to leave custom feedback as well as standard feedback.
It should be noted that human-graded assignments have been happening already in blended classes that use the Acrobatiq courseware. (See, for example, the case studies of our current users.) Some classes that use the courseware to handle online instruction have been using class time for presentations, projects and group work that is graded outside Acrobatiq’s platform.
This update has simply allowed instructors to bring the grading for that work into the platform, informing our analytics about the work students are doing on those projects and allowing us to capture a rich data stream being generated outside of the platform.
Making human-graded assignments easier for all the humans involved
While the assignments will be richer for students and the quality of the instructor feedback is now more nuanced, there is one drawback from a student perspective: receiving a grade is less immediate. Assessments are being scored manually, and that takes time.
So, for those learners who are impatient to receive their grades, a grading tracker has been created. Once a project is turned in, the students can watch the tracker to see instructors’ progress on their work. When grading is completed, students can access their grades and the instructor’s feedback.
The drawback from the instructor perspective? The human-graded rubric is more labor-intensive than automatically-graded quizzes. Instructors are now grading assignments one at a time (and their progress is now being monitored by students hungry for feedback).
To make the jobs of the graders a little easier, we’ve added a feature that will let those instructors add common feedback for mistakes repeatedly made by students.
Suppose a science course assigned students a paper on motion systems. A few papers into grading, the instructor sees that her students are consistently using the word “piezoelectric” incorrectly. Rather than type her feedback over and over each time she sees that error, she can add the correct meaning and usage of piezoelectric to a list of common feedback. The next time a student misuses the word, she can simply select that feedback and apply it to the assignment.
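The "piezoelectric" workflow above amounts to saving a comment once and reapplying it on demand. A minimal sketch, with hypothetical names (the platform's real interface and storage are not described in this post):

```python
# Sketch of a reusable common-feedback list, as in the "piezoelectric"
# example above. All names and structures here are hypothetical.

common_feedback = {}

def save_common_feedback(key, text):
    """Store a comment once so it can be reapplied to later submissions."""
    common_feedback[key] = text

def apply_feedback(submission_comments, key):
    """Attach a saved comment to a submission instead of retyping it."""
    submission_comments.append(common_feedback[key])

save_common_feedback(
    "piezoelectric",
    "'Piezoelectric' describes materials that generate an electric charge "
    "under mechanical stress; it does not mean 'electrically powered'.",
)

comments = []  # feedback on one student's paper
apply_feedback(comments, "piezoelectric")
```

The design choice is the same one the feature makes: feedback is authored once, keyed by the mistake it addresses, and selected rather than retyped for each student who makes it.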
Another feature that will make life easier for educators: one rubric can be reused across different assignments in a project. That way, the science instructor in the example above can apply the motion-systems rubric to every paper throughout the course, with grammar, content and the correct use of words like "piezoelectric" weighted the same way each time.
Putting educators at the center of adaptive courseware
One misconception about adaptive learning is that it removes human instructors from the equation entirely. While one of many use cases for adaptive learning is self-paced learning, the majority of use cases still rely heavily upon expert-guided instruction.
Education leaders, innovative faculty and instructional designers who are trying to institute adaptive learning platforms can point to human-graded rubrics as further evidence that expert-guided instruction is still a critical component of the learning process.
Rubric-based grading offers the best of both worlds: adaptive courseware that responds to students' needs remotely, while educators can see exactly how students are performing and grade their work in a nuanced and flexible manner.
Related reading: How a Skills Graph is the First Step Toward Competency-Based Education