Overview - Wechsler Individual Achievement Test (WIAT). Create a solution that allows a practitioner to submit a child's essay to be graded automatically within the system while following detailed grading criteria. This design won the international hackathon; there was a trophy, and the design has since moved into the patent workflow at Pearson.
My Role - Engage senior research directors, engineering teams and stakeholders to gain a comprehensive understanding of how it might work. Verify pain points with end users and clinical practitioners. Design a solution that instills confidence in the automated results and builds an understanding of the criteria being met consistently across all essays.
Challenge - Initially, we had four days to create a working prototype that allowed an essay to be submitted to a third party, graded by their algorithm against the scoring criteria, and returned with results to the platform.
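At a high level, that submission flow is a simple request/response exchange with the third-party scorer. Below is a minimal sketch of the idea; the endpoint URL, field names, and response shape are hypothetical stand-ins, since the actual scoring API was internal to the project.

```ts
// Sketch of the essay-submission flow. All names here are hypothetical;
// the real third-party scoring API was not public.

interface EssaySubmission {
  studentId: string;
  transcribedText: string; // the practitioner's verbatim transcription
}

interface ScoringResult {
  status: "scored" | "unscorable"; // "unscorable" triggers hand scoring
  criteriaScores?: Record<string, number>; // score per grading criterion
}

async function submitForScoring(essay: EssaySubmission): Promise<ScoringResult> {
  const response = await fetch("https://scoring.example.com/v1/essays", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(essay),
  });
  if (!response.ok) {
    throw new Error(`Scoring service returned ${response.status}`);
  }
  return (await response.json()) as ScoringResult;
}
```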
Approach - The essay itself is a timed task given to a wide range of students with varying levels of motor skills. The essay is handwritten, although the testing battery is administered from the iPad. After the iPad is synced with the website platform, the practitioner transcribes the essay verbatim before submitting it for automated scoring. The results are then returned for review.
Discovery - The algorithm was the wildcard. We didn't know how robust it would be or what the turnaround time would be during high-traffic times of the year. We needed to notify the user wherever they were within the site when the results were returned. In addition, we had to account for an error bounce-back if the submitted essay was gibberish and the system couldn't grade it, so a hand-scoring component served as a safeguard for that edge case.
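One way to model that return path is a watcher that polls for results, notifies the user wherever they are in the site, and routes unscorable essays to the hand-scoring safeguard. The sketch below assumes hypothetical names, and polling stands in for whatever transport the platform actually used.

```ts
// Sketch of the asynchronous result flow: poll for results, surface a
// notification to the user, and fall back to hand scoring when the
// algorithm cannot grade the submission. Names are hypothetical.

type ResultState =
  | { kind: "pending" }
  | { kind: "scored"; scores: Record<string, number> }
  | { kind: "unscorable" }; // e.g. gibberish input the algorithm rejects

async function watchForResults(
  essayId: string,
  fetchState: (id: string) => Promise<ResultState>,
  notify: (message: string) => void,
  intervalMs = 30_000,
): Promise<void> {
  for (;;) {
    const state = await fetchState(essayId);
    if (state.kind === "scored") {
      notify("Automated scores are ready for review.");
      return;
    }
    if (state.kind === "unscorable") {
      // Safeguard: route the essay to the hand-scoring workflow.
      notify("The essay could not be scored automatically. Hand scoring is available.");
      return;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```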
Vision - Stay true to the hackathon product. Executives saw the value of a consistent, automated grading mechanism over the subjectivity of a teacher or practitioner having to follow a time-consuming, tedious 40-page manual.
Requirements - Allow the user to understand how the essay is being graded without inundating them with technical engineering speak. Give them the opportunity to hand score if the system fails to return results.
Framework - Work closely with engineering and pair with the third-party team to discover the nuances of the algorithm. Stay consistent with the iPad interactions when detailing the process of collecting the essay, and make sure the input UI works within the website's experience and feedback patterns.
Design - Increased the fidelity of the wireframes. I believe we whiteboarded for the hackathon, then worked closely with the engineering director to ensure a consistent experience while adding new functionality and innovation.
Refinement - Negotiated with the product owner, stakeholders, and senior research director to confirm that the look, feel, and experience belonged within the platform and that the solution was scalable as the product evolved.
Impact - End users were happy: although transcribing the essay was a bit tedious, they saw the time savings and the consistency. The company deemed the design worthy of a patent.
There was a trophy. Not only did we win, we designed and built the solution across multiple platforms.
The test results needed to fit consistently within the existing platform, as did the screen for the initial data entry of the essay itself.
A messaging design had to be created to give the end user a clear understanding of when results would be returned to the system, and it had to be scalable.
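One way to keep that messaging scalable is a single mapping from scoring state to user-facing copy, so new states slot in without redesigning each screen. A hypothetical sketch:

```ts
// Hypothetical mapping from scoring state to user-facing copy. Keeping
// the copy in one place lets the messaging scale as new states are added.
const statusMessages: Record<string, string> = {
  pending: "Your essay has been submitted. Scores typically return within a few minutes.",
  scored: "Automated scores are ready for review.",
  unscorable: "The essay could not be scored automatically. You can hand score it instead.",
};

function messageFor(state: string): string {
  return statusMessages[state] ?? "Status unknown. Please check back shortly.";
}
```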