Top Hat · EdTech · Systems Thinking · User Research · Design System

Building a better question experience

Redesigning the future of questions at Top Hat to improve user experience, reduce technical debt, and unlock opportunities in STEM.

Role Product Designer
Timeline 2023-2024
Team 1 PM, 8 Engineers, 2 Designers
Platform Web Application
Question Builder interface showing the redesigned question creation experience


Top Hat is an EdTech company dedicated to empowering educators by facilitating engaging learning experiences. The app’s questions feature enables educators to connect with students, promote active learning, and rapidly adjust materials to meet student needs.

New features and content types were essential for instructors actively looking to drive real and new education outcomes. However, the existing question infrastructure had accumulated significant technical debt, making it difficult to innovate and maintain consistency across the platform. By the time I joined in September 2021, our team was facing three key issues:

Dev Velocity

Lack of a flexible, scalable foundation made repeat work the norm.

New feature work required 6+ months of dev time, and new question types took 9+ months to create.

Tech debt rendered QA process completely unsustainable.

User Experience

The gradual degradation of questions led to a poor-quality, inconsistent user experience.

The question offering began to lag behind competitors, with no capacity to catch up.

The team was inundated with bugs, many of which could not be fixed.

Opportunities

Top Hat was missing valuable opportunities in STEM.

The existing foundation could not support the flexible question constructions STEM disciplines required.

This prevented us from transitioning our national title offering to e-Texts.

So we crafted a plan

I worked alongside Senior Design Manager Greg Rogers to lead the redesign. Our team hypothesized that by reworking the foundation of questions into a flexible, scalable system...

We could:

  • Improve the experience for users & developers
  • Unlock rapid development of new features
  • Empower educators to create better questions

Analyzing customer feedback and working closely with our internal Textbook Authoring team via biweekly brainstorming meetings allowed us to workshop initial design concepts and helped to inform the scope and design for our MVP.

Before diving into designs, I pulled together an in-depth audit of the existing questions framework. We found that existing question forms were inconsistent, with no universal components, causing questions to render differently across the app.

Audit of the existing questions framework showing inconsistencies


"How can we simplify this?"

Through our investigation we determined that the lifecycle of a question in Top Hat could be distilled into two main sections: authoring & rendering.

We set out to create a single experience for each section: a universal Question Builder where educators could craft questions in a "what you see is what you get" editor, and a Question Renderer built from reusable components to display questions consistently across the app.

Initial design concepts for the Question Builder and Renderer


A flexible universal Question Builder

To support the kinds of expressive questions we needed, the Builder had to be flexible. Nothing like this existed in the market, so we took inspiration from products like Notion and Confluence, where a blank canvas coupled with interchangeable blocks allows for endless customization.

This block-based editing system allowed us to leave behind the concept of distinct “question types”, opting instead for response blocks that encapsulated functionality. Now all questions could be created inside a single editor, simplifying the process of adding new “question types”. We extended the block idea to features like images, allowing placement anywhere within a question construction.

In the new foundation, any feature added to the universal Question Builder would automatically be available for any question construction, seamlessly integrated across the entire platform, ensuring consistency and scalability.
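The block model described above can be sketched roughly as follows. This is a minimal illustration only; every type and field name here is hypothetical, not Top Hat's actual schema:

```typescript
// Hypothetical sketch of a block-based question model: a question is an
// ordered list of interchangeable blocks rather than a fixed "question type".
type Block =
  | { kind: "text"; content: string }
  | { kind: "image"; url: string; alt: string }
  | { kind: "multipleChoiceResponse"; options: string[]; correctIndices: number[] }
  | { kind: "numericResponse"; answer: number; tolerance: number };

interface Question {
  id: string;
  blocks: Block[]; // any block can appear anywhere in the construction
}

// Adding a new "question type" means adding a response block variant,
// not building a new editor: the Builder already knows how to host blocks.
const question: Question = {
  id: "q1",
  blocks: [
    { kind: "text", content: "What is the boiling point of water at sea level (°C)?" },
    { kind: "numericResponse", answer: 100, tolerance: 0.5 },
  ],
};
```

Because every feature is just another block variant, anything added to the union is immediately composable with everything else on the canvas.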

The flexible Question Builder interface in action


How much flexibility is too much?

While flexibility was ideal for STEM, user testing quickly revealed that users were overwhelmed by a completely blank canvas.

To bridge the gap, we introduced fully editable “templates” which could be dropped onto the canvas to provide a guide while maintaining the Builder’s flexible nature.
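Architecturally, a template in this model is nothing special: just a prebuilt, fully editable arrangement of blocks. A sketch of the idea, with hypothetical names rather than Top Hat's actual code:

```typescript
// Hypothetical sketch: a "template" is simply a factory for a prebuilt,
// fully editable block arrangement dropped onto the blank canvas.
type Block =
  | { kind: "text"; content: string }
  | { kind: "multipleChoiceResponse"; options: string[] };

const multipleChoiceTemplate = (): Block[] => [
  { kind: "text", content: "Type your question here…" },
  { kind: "multipleChoiceResponse", options: ["Option A", "Option B"] },
];

// Dropping a template seeds the canvas; users can then edit, rearrange,
// or delete any block, so the Builder's flexibility is preserved.
const canvas: Block[] = [...multipleChoiceTemplate()];
```

Since templates produce ordinary blocks, they add guidance without introducing a second authoring mode.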

Question templates providing guided flexibility


Creating consistency across the app

Once a question was authored, the job of our Question Renderer was to create a consistent question experience from this point on.

The Renderer was designed as a single component composed of multiple smaller systems that work together to react to the question’s context and present the relevant information. This ensured questions looked great and worked smoothly anywhere in the app.
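The idea of one renderer reacting to context can be sketched like this; names are hypothetical and the logic is a heavy simplification of the real component system:

```typescript
// Hypothetical sketch of a universal renderer: one entry point that reacts
// to the question's context and dispatches each block to a shared renderer.
type RenderContext = { mode: "preview" | "assigned" | "review" };

type Block =
  | { kind: "text"; content: string }
  | { kind: "numericResponse"; answer: number };

function renderBlock(block: Block, ctx: RenderContext): string {
  switch (block.kind) {
    case "text":
      return block.content;
    case "numericResponse":
      // In review mode the correct answer is revealed; otherwise show an input.
      return ctx.mode === "review" ? `Answer: ${block.answer}` : "[numeric input]";
  }
}

function renderQuestion(blocks: Block[], ctx: RenderContext): string {
  // Every surface in the app calls the same function, so a question
  // looks and behaves identically wherever it appears.
  return blocks.map((b) => renderBlock(b, ctx)).join("\n");
}
```

Centralizing rendering this way is what makes consistency automatic: a fix or feature in one sub-renderer propagates to every place questions appear.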

Question Renderer creating consistent experiences across the platform


Challenging our assumptions

Our team knew the Question Builder was a novel concept. We were taking a risk in our assumption that the Builder would meet the needs of users creating both complex and simple question constructions.

To de-risk our assumptions we decided to conduct a usability study. Through this research we hoped to:

  • Measure the learning curve to adopt our Question Builder
  • Determine whether that curve was reasonable
  • Find opportunities to improve the experience and optimize onboarding

Check out the full Question Builder UXR Summary.

User testing sessions for the Question Builder


Pinpointing areas for improvement

The data told us that once users got it, they got it. Question creation speed and success increased drastically across the board on the second attempt.

Unfortunately, the learning curve was too steep and would likely prevent users from returning outside of a structured study. We needed to find a way to help users learn the Builder more quickly.

By analyzing the data, we homed in on three strong signals telling us what tripped participants up during the initial experience.

Setting consistency

83% of users could not find or understand the correctness toggle

67% of users wanted us to bring back the old correctness checkbox

50% of users wanted us to “just show” the correctness toggle

Looking for feedback

83% of users were looking for more feedback and guidance to learn how to use the builder

Numeric input clarity

67% of users were confused by the lack of a label on the numeric input

67% of users were unclear what would be in the numeric advanced menu (cog icon)

We immediately began ideating and iterating on these key areas. We prioritized updates to the correctness toggle and added a new onboarding experience to help first-time users understand how to use the feature.

New onboarding experience for the Question Builder


Maintaining quality and consistency through a multi-year project

Few things are worse than giant, bloated design files. Greg and I were adamant from the start that, regardless of the project’s length and complexity, drowning in dense documentation and clunky files would not be our fate.

Together, we crafted and implemented three strategies that were instrumental in maintaining quality and communication throughout the project.

1
Archiving completed work

Messy design explorations stayed in their own individual files. Once we agreed on a design decision we added it to our main file via a new branch and archived the exploration file.

We maintained one main spec file using pages to separate each new feature. We wrote specs as needed and deleted them using branches once completed.

2
Writing our own JIRA tickets

This kept most written documentation out of our spec files and avoided duplicate work for PMs who would normally translate specs into tickets. It also gave us control over visual requirements and expectations.

3
Design finalization column in JIRA

If a ticket contained any visual or UX changes it had to go into Design Review before moving to done. This improved quality, reduced time running team QA, and allowed design to provide timely feedback to devs.

Design process and documentation workflow


So, what was the outcome?

While the project is still ongoing we’ve already had some major wins, both internally and externally.

As of March 2024, 70-90+% of all multiple choice and numeric questions are created with the new builder

The new Question Foundation has reduced dev time to create new question types by 67%, requiring only 2 months to add the numeric question type

Our team was able to use the Question Renderer as a foundation to run rapid experiments with AI generated questions

Note: as of March 2024, multiple choice and numeric questions are available with additional question types to come.

Key learnings

What worked well

  • Close collaboration with engineering from day one
  • Iterative testing helped us catch issues early
  • Block-based approach provided the flexibility we needed
  • Strong design system maintained consistency throughout

What I learned

  • Multi-year projects require clear documentation
  • User research should be continuous, not just at the start
  • Flexibility should be balanced with guided experiences

This project was made possible by all of the incredibly talented members, both past and present, of the Questions & Gradebook team.

Huge thank you to Greg Rogers for being a fantastic mentor and challenging me to think outside the box, to Product Managers Elia, Ana, and Adrian for fearlessly guiding our team, and to Rahul, Sean and all of the talented devs who always advocated for quality and accessibility. I could not have asked for better partners.