What Matters Most When You’re Evaluating Edtech Tools

Look past the marketing and focus instead on the student—and teacher—experience and built-in engagement features.


Because of pandemic-induced distance learning, planning lessons and staying connected with students now requires an increasingly sophisticated understanding of the benefits and potential pitfalls of edtech resources. As a veteran teacher and instructional coach with years of experience in edtech, I’ve seen it over and over again: Teachers’ inboxes and social media news feeds are inundated with advertised tech products, and it’s hard to know what will enhance the learning experience for our students, whether they use the tool in class or remotely.

Educators need to consider a host of factors when choosing edtech tools and resources that will support both their students and their own teaching.

EFFICACY

Educators know to look for a research base behind the resources they use in their classrooms, but Nell K. Duke, professor of education at the University of Michigan, warns that conclusions drawn from research are only as sound as the research itself.

A teacher choosing a learning software program likely does not have the time to thoroughly investigate the research that each organization cites. To quickly gauge the impact that a tool can have on your students’ learning, examine testimonials from students and teachers who have used it. The quotes shared in sales presentations and ribboned across product pages often express excitement for features like earning points, watching entertaining videos, or raising a state test score, so look closely for indications that students have actually learned something valuable.

Look for testimonials in which a student enthusiastically describes a new concept they learned, elaborates on how their perspective on an issue changed, or mentions what they are reading about or what interesting problems they are learning to solve.

THE STUDENT EXPERIENCE

Can you demo the tool as a student would? If so, explore the experience from the perspective of three different student personas (e.g., an English language learner, an advanced reader, and a student with ADHD).

When you look at the product from the perspective of a student persona, you might notice that there is no audio feature or captioning to help striving readers or language learners independently access complex texts. Or, when you play the audio, you might find that the voice is robotic and unengaging. Or maybe a human voice reads with expression, but every text is read by someone with the same regional dialect or accent, which is problematic when teaching through an equity and inclusion lens.

Also ask yourself, “How will visually impaired students experience the tool?” and “Will students with mobility issues who cannot use a mouse be able to use this tool?”

When assessing from your students’ perspectives, determine whether the tool you are investigating will meet their varying needs and, if so, what protocols, modifications, or settings you will need to put in place.

INTRINSIC MOTIVATION

While small rewards can motivate students to complete basic tasks, using them to incentivize meaningful learning activities can have the opposite effect, undermining students’ drive to engage deeply. Yet many edtech tools rely on features like points, badges, and even competition among peers to extrinsically motivate students to stay engaged.

Celebrating students’ progress with quantifiable measures can be helpful, but the progress celebrated has to be meaningful to the student. When exploring edtech tools, ask yourself:

Do students set the goals?

Who guides them to set ones that are measurable and achievable?

Do they get to choose when they see status updates, or do dings and cheers confront them uninvited?

Are these rewards a minor feature, outshined by the stimulating and relevant content the student is actually engaging with, or are they the primary factor in motivating the student to participate?

Extrinsic rewards can help some students initially engage, but then the stimulation offered by the content and learning activities should take over. When testing a program, consider the balance between extrinsic rewards and opportunities for students to cultivate intrinsic motivation to learn.

ZONE OF PROXIMAL DEVELOPMENT

There is perhaps no concept more referenced by publishers and program designers of student learning technology than Vygotsky’s Zone of Proximal Development (ZPD). The zone refers to the optimal learning space where a student engages with tasks beyond what they have the skill to accomplish alone but within reach with the help of a more knowledgeable guide, one who supports their productive struggle with “imitative” models (like worked examples and guiding questions).

But the ZPD, suggests Annemarie Sullivan Palincsar, professor of education studies at the University of Michigan, is “probably one of the most used and least understood constructs to appear in contemporary educational literature.” Edtech programs may claim to provide an optimal ZPD for each student user, but the actual experience can be quite similar from one student to the next.

In one adaptive literacy software program I have worked with, the “knowledgeable guide” turned out to be simply an audio recording of the more challenging text that a student was meant to comprehend. The topic’s complex vocabulary and the necessary background context were never explained. As a visiting literacy specialist, I observed students receiving low scores on their “right-fit” passage questions within the software, indicating either that their precise ZPD had not been determined appropriately to begin with or that the direction and support provided were weak.

It’s important to know if a program is actually serving as a skilled guide for students working in their true ZPD or simply providing general scaffolds or assisted instruction. If it is offering the latter, then teachers can proceed to provide the former.

THE TEACHER EXPERIENCE

Aware of educators’ valid concern that edtech aims to replace teachers in the classroom, most companies are quick to assert that their digital tools are no substitute for teachers. Their marketing copy suggests that the program helps teachers do their jobs better or that it functions as something along the lines of “the best assistant a teacher could ever have.”

I have observed thousands of students using adaptive literacy software in which they were meant to progress through increasingly difficult reading passages and comprehension questions. The program did not inform students of their current levels or when they were ready for the next one; they depended on the teacher to share this crucial information with them. In theory, the teacher was to monitor report data from the program to celebrate students’ progress and inform small group instruction. But the reports were vague, listing only percentage scores on activities that teachers could not see or experience themselves. And there were no alerts to inform teachers that their students had mastered a level and could be promoted.

Without knowing how, why, or when to change levels, teachers discovered at the end of the school year that dozens of students had been consigned to repetitive low-level work they had mastered months before. The personalized adaptive software, in these cases, did not serve as an adequate teacher’s assistant and wasted valuable learning time.

In some cases, a program’s flaws can be addressed with ongoing training and coaching, so it’s important to know from the start whether the tool is so complex that expert training is needed for teachers to use it effectively.

Before introducing digital learning tools to students, determine exactly how the technology will help you—the most important resource for your students—to do your part well.

By Shveta Miller