Are Your Students Getting Their Proper Practice Nutrition?
For a long time I have believed in the engaging power of technology and the transformative role it could play in the classroom. Especially with the advent of AI, more and more people are imagining how technology could transform learning and unlock the massive potential in every student. And yet… learning loss, adequate yearly progress, or whatever term we want to use for standardized test performance continues to dominate district strategic agendas. Kids aren't demonstrating progress, they say. This is in spite of the huge number of engaging tools at our disposal and record spending on provisioning them to schools.
How can that be?
Laurence Holt would argue that it is in part due to the 5% effect: the idea that only 5% of students actually use those practice or assessment products as intended in order to experience the expected academic outcomes. Perhaps it's an issue of access for some students, and more a lack of intrinsic motivation for others. Either way, I'd argue that it's definitely not because of a lack of engaging options out there. Instead, I believe many of the solutions available lack what I would call educational nutritional value. Kids are clicking through them, but nothing is being retained.
Platforms like Kahoot, Quizizz, and many others do a fantastic job of getting kids to participate through gamification. Walk into any class mid-trivia game and you will hear celebration, excitement, anticipation, anguish: all hallmarks of an engaged class. Compare it to an average day in another classroom and it's easy to get intoxicated by the potential of classroom technology.
AND YET…
If that participation is via a mechanic like a multiple-choice question, I strongly believe most kids aren't practicing thoughtfully. Assessment writers will argue that a well-crafted multiple-choice question absolutely demands critical thinking to answer correctly. Sure. There certainly are those great students who always participate thoughtfully, but they are a minority. A good chunk of the rest are just bubbling away educated guesses. And it's not their fault either. The experience of answering a multiple-choice question sucks. We force kids to do them all day. In the long run, it can feel soul crushing, and students can be forgiven for taking the easiest way out.
These educated-guessing strategies are incentivized by many existing solutions, where more points can be earned for answering more quickly than your classmates. Kids know that the Kahoot clock ticking above the question means they are losing ground on their peers in the class competition. If you've taught, you know how eager many students are to 'finish' practice assignments as soon as possible. The MCQ allows students to do that with extreme efficiency. We've made it too easy for kids to bubble their way out of thinking, and then we spend so much time analyzing what may well be invalid data. That, I think, is part of the reason students don't show adequate progress despite all this use.
“If not the multiple-choice question, then what?!”
You would be right to ask. So much of the educational ecosystem is built on it, and for good reason: the data is easy to consume and analyze, it's quicker than most other forms of assessment so you can do it more often, and it's more accessible for students who struggle with writing. I'm not proposing overhauling this system overnight, but advancements with AI and LLMs have started to paint a picture of a possible alternative. LLMs are already getting pretty effective at grading short-answer responses, per Gautam Thapar of Enlighten.ai (also a Teaching Lab fellow).
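To make that concrete, here is a minimal sketch of what LLM-based short-answer grading can look like. This is not Enlighten.ai's implementation; it assumes the openai Python client, an illustrative model name, and a placeholder rubric and prompt.

```python
# A generic sketch of LLM-assisted short-answer grading (not Enlighten.ai's
# actual implementation). Assumes the `openai` Python client; the model name,
# rubric, and prompt wording are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

def grade_short_answer(question: str, rubric: str, student_answer: str) -> str:
    """Ask the model to score a short answer against a simple rubric."""
    prompt = (
        f"Question: {question}\n"
        f"Rubric: {rubric}\n"
        f"Student answer: {student_answer}\n\n"
        "Score the answer 0-2 against the rubric and give one sentence of "
        "feedback. Respond as: SCORE: <n> | FEEDBACK: <sentence>"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,        # keep the grading as consistent as possible
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage:
# print(grade_short_answer(
#     "Why are not all mutations harmful?",
#     "Full credit mentions that some mutations are neutral or beneficial.",
#     "Some mutations don't change the protein, and a few actually help.",
# ))
```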
In grad school, I was lucky enough to take a class from Dr. Dan Schwartz and Dr. Kristen Blair, who researched the potential educational impact of teachable agents: the concept that by teaching a digital student, students could demonstrate what they know and receive feedback as they watched that digital student apply what it had been taught. One day, during a contemplative moment in the middle of my generous paternity leave (thank you Nearpod!), I got to thinking: how might LLMs transform the way a teachable agent could engage and interact with students? The idea kept spinning and spinning, and I kept pitching it to more and more friends.
That takes us to today. I applied to the Teaching Lab Studio in order to try to build the idea out into an actual product. I was so grateful for the model Teaching Lab followed, where they recruited ed tech industry veterans and offered a salary and health insurance (which I desperately needed post-paternity leave) on top of a budget to pursue the idea. In this first quarter, I spent time and money building out a teaching environment similar in objective to Betty's Brain, but with a few key differences:
Instead of starting from scratch, the teachable agent would hold a very specific misconception
We could curate the resources students would use to teach it
We would allow students to share those resources with the agent via chat
What we came up with was a dual-pane environment like the one you see above. Our best initial use cases tended to be in Social Studies and Science, where students are often asked to reason with evidence and it can be hard to assess that skill. In Science classrooms, this exercise is sometimes called Claim-Evidence-Reasoning (CER). In Social Studies classrooms, teachers ask students to source evidence from primary and secondary sources to support arguments. Teachers in both subjects recognized this practice environment as a means to assess this type of thinking, perhaps better than a multiple-choice question could.
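To give a flavor of the design, here is a minimal sketch of how a teachable agent holding a deliberate misconception might be set up, assuming an OpenAI-style chat API. The agent name, prompt wording, model choice, and acceptance behavior are illustrative only, not the actual product code.

```python
# A minimal sketch of a misconception-holding teachable agent, assuming an
# OpenAI-style chat API. Prompt wording and model name are illustrative only.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are Toby, a student who believes ALL mutations are bad.
Hold this misconception until the student shares evidence from the provided
resources that contradicts it. When a piece of evidence genuinely challenges
your belief, acknowledge it, explain what you learned, and ask a follow-up
question. Never correct yourself without evidence from the student."""

def reply_to_student(history: list[dict], student_message: str) -> str:
    """Append the student's message (which may quote a curated resource)
    and return the agent's next turn."""
    history.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    agent_turn = response.choices[0].message.content
    history.append({"role": "assistant", "content": agent_turn})
    return agent_turn

# Example usage:
# chat: list[dict] = []
# print(reply_to_student(chat, "Actually, this article says some mutations "
#                              "are neutral and some even help an organism."))
```

In a setup like this, the curated resources would live in the other pane and the student would share evidence through the chat; the system prompt is what keeps the agent anchored to its misconception until that evidence arrives.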
When we felt confident in students' ability to make it through the user flow independently, we set out to do some user testing. We contracted AlleyCorp Nord to facilitate 8 user research sessions with 13-to-15-year-old students and test whether our teaching environment was both engaging and academically impactful (aka nutritious). Each student in the user research sessions was given two practice sets:
Practice Set 1: Traits and mutations. Agent Toby believes that all mutations are bad.
Milestone 1: Student identifies the misconception in the paragraph provided.
Milestone 2: Student has one piece of evidence accepted.
Milestone 3: Student has a second piece of evidence accepted.
Practice Set 2: Lewis and Clark. Agent Reece believes that Lewis and Clark only had positive interactions with Native Americans on their journey.
Milestones: same as Practice Set 1.
All 8 students were given some brief instructions before being asked to complete either the Science or Social Studies practice module and, if time allowed and they wished, the one they did not choose first. After both were completed or 30 minutes had passed, they answered follow-up questions about the experience for 15 minutes.
What we found in the data and interview responses was very encouraging. All 8 students were able to successfully complete the milestones for the practice module. 7 of 8 students elected to do the additional practice opportunity even though it wasn't required. Students demonstrated a learning curve in the amount of time required to find the second piece of evidence, and that improvement carried through to the second activity. 7 of 8 students could articulate the educational use cases for such an exercise, and 7 of 8 found the experience engaging. Looking at the data above, which depicts the number of minutes it took to have each piece of evidence accepted, one could make a case for a minor learning curve.
Direct quotes from student interviews:
“It’s a fun way to help [...] students learn and it’s fun to help the chatbot fix their errors”
“It helped because I kept getting a deeper understanding on a question and if I missed something I can go back see what I missed”
“I felt good and I feel like I got what reece kept asking for.. It gave me a chance to keep looking and find more information even though I didn’t know I could find more”
“I thought that I learned something as well as them even though they are chatbots. I think that the way they asked for more feedback is helpful to me. It makes both of us learn”
When we showed the transcripts of the student chats to 5 educators, all 5 said they contained evidence of student critical thinking and that the student output would help them assess student thinking better than a multiple-choice question on the same topic would. Of course, we don't yet know if this type of practice will have an actual academic impact beyond this environment, nor do we know if this practice environment can truly scale to subjects and topics beyond those tested. Based on this small and informal user research study, though, we think this exercise holds promise as both engaging and educationally nutritious, and our goal is to further validate it in schools in the fall, getting more rigorous with our efficacy documentation as the product evolves.
We are also currently building into the platform the ability for teachers to create versions of the practice environment for their own classrooms, where they would articulate the topic, the resources to teach with, and the role and success criteria the agent holds. Check it out and make an account at studybuds.org. Toward that end, we are recruiting teachers to do paid curriculum design and testing with this prototype. If you are interested in trying out this platform in your classroom, please send me an email at [email protected]. I'm looking forward to where this next quarter takes us and will be sure to update y'all with progress along the way. If you made it to the end of this, thank you!