Getting Childcare Right
WEEKDAYS, INC.
Role: UX Researcher
Parameters:
The Client:
This was a project for a client called Weekdays, Inc., a tech start-up creating a platform to address problems in childcare through technology. In the current childcare market, costs to parents are high while pay to childcare workers is low. Weekdays is working to cut out middleman facility costs by empowering childcare providers to start and run their own in-home daycare businesses in their own neighborhoods.
The Project:
This project was a 2-week sprint culminating in a presentation to our client, Ben Zulauf, CTO and software developer at Weekdays. I was the researcher on a team of three that also included an information architect/visual designer and an interaction designer.
The Weekdays platform has two sides: an internal-facing tool for the childcare provider and an external-facing tool for the parents. The client specifically tasked us with the internal-facing provider tool, and due to our 2-week timeline, we collectively decided on a native iOS app.
Purpose
I wanted to tell the story of the user while understanding the complexities and edge cases I learned about along the way. I also wanted to tell the story to the client while anticipating his company needs.
My goals were:

1. Understanding the user and telling her story

To understand the user, I conducted 4 user interviews in 1 day, each lasting 20 to 40 minutes; three were over the phone and one was in person.
The participants were all former childcare providers, specifically former nannies and daycare workers. All participants used smartphones daily and were highly comfortable with digital devices.
I created a total of 173 data points from my interview notes and engaged my team in affinity mapping.

From the 173 data points, we discovered 23 trends.

I further synthesized these trends into 4 major pain points and 4 goals.
I documented my synthesis on my team's shared folder so design decisions could be cited to documented data. Download my affinity mapping synthesis (PDF, 157KB).

Problem
Childcare providers need a way to manage their time and manage parents. They want to spend their time caring for children and need time for their own self-care; they do not want to spend it on administrative tasks or on managing unclear expectations from parents.
Solution
How might we provide childcare providers a way to manage their time and manage communications with parents?
Hypothesis
We believe that by creating a Weekdays app for childcare providers, we will help childcare providers gain more scheduling stability, increase efficient use of time, and reduce burnout.
We will know this to be true when we see:
1. A reduction in childcare providers leaving the profession; and
2. An increase in childcare providers using the app.
“I never had a free moment. I had to be all eyes and all ears even when the children were occupied.”
Interview Participant 4
Through all the data synthesis, a proto-persona was starting to take form.

Meet Sally
36 years old | high tech empathy | former full-time nanny
- Mother to a 3-year-old daughter
- Looking to earn some income
- Loves caring for children
- Needs help managing time
- Needs help managing communications with parents

How can Weekdays help?
- Reduce time on admin tasks
- Make contacting parents fast and easy
- Create features for reliable time management
2. Getting the whole picture: stakeholder interviews and edge cases
But the story doesn’t stop at the user. To get the whole picture, I talked to stakeholders as well, including parents and our client as an internal stakeholder.
I interviewed two parents. Although we were not tasked with designing the external parent facing portion of the app, I felt it was important to understand parents as stakeholders in this process.
The main takeaway from the parents was that their need was about trust. They put an immense amount of trust in the person watching their child.

When I asked how to build trust, the parent answered:

When I asked about how to communicate, they didn’t care about the mechanism. It was more about building a relationship around trust.

Furthermore, I talked to the client twice during the sprint to learn his perspective as an internal stakeholder. During our second meeting, the client revealed that he was discovering many of his potential users were actually not as comfortable with technology as he thought. These nondigital natives may not be just an edge case, but target users.
We were too far along in our design process to create a new proto-persona, but I put our digital prototype in front of two grandparents who fit the demographics of the edge case and gleaned whatever insight I could from their experience.
Grandparents: Edge Cases
Although the grandparents I tested were edge cases and not consistent with Sally, our proto-persona, they provided valuable insight.
I sat with the grandfather for 12 minutes and with the grandmother for 10, and watched them struggle with the prototype. I did not tell them how to complete the task; rather, I asked them, “What would you do if this were real life, and you were watching a child?”
Here is how they responded.


Although the prototype would work well for Sally, it most certainly did not meet the needs of the nondigital native users from the edge cases.
3. Proving the design solves a problem: usability testing

We did five rounds of testing with 22 participants. Each round tested a revised iteration of the prototype.
Eight of the participants were connected to childcare as parents, childcare providers, former childcare providers, teachers, or grandparents.
When deciding what metrics I would use to evaluate our prototype in usability testing, I thought carefully about what I wanted to measure. I did research and chose to go with the ISO (International Organization for Standardization) recommendations of effectiveness, efficiency, and satisfaction.

Within these broad areas, I needed to decide the exact metrics I would look for to prove the success of the prototype. In making this decision, I took into account:
- Which metrics had the highest likelihood of producing insightful data
- How much help I could get from my teammates in conducting the studies
- How much time it would take for each test participant
Balancing these priorities, I chose to measure:
- Completion
- Errors
- Time
- SUS (System Usability Scale)
- SEQ (Single Ease Question)
Download my metrics notes to see my methodology research (PDF, 92KB).
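As an aside, the per-round summaries reported below (completion rate, mean errors, mean time on task) reduce to simple aggregation over the raw test records. A minimal sketch, with entirely hypothetical field names and sample values (not the actual study spreadsheet):

```python
# Hypothetical per-participant test records; field names and values are
# illustrative only, not the real study data.
records = [
    {"round": 1, "completed": True, "errors": 2, "seconds": 80.0},
    {"round": 1, "completed": True, "errors": 1, "seconds": 74.6},
    {"round": 5, "completed": True, "errors": 1, "seconds": 56.0},
    {"round": 5, "completed": True, "errors": 0, "seconds": 54.8},
]

def round_summary(records, rnd):
    """Summarize effectiveness (completion, errors) and efficiency (time) for one round."""
    rows = [r for r in records if r["round"] == rnd]
    n = len(rows)
    return {
        "completion_rate": sum(r["completed"] for r in rows) / n,
        "mean_errors": sum(r["errors"] for r in rows) / n,
        "mean_seconds": sum(r["seconds"] for r in rows) / n,
    }

print(round_summary(records, 1))
print(round_summary(records, 5))
```

Keeping the raw data in this tabular shape is what made it easy to recompute the round-over-round trends after each iteration.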
Effectiveness
Testing for completion and errors

Out of 22 participants, 20 successfully completed the task of adding an activity.
Unsurprisingly, the two participants who did not complete the task were the edge cases, the grandparents.
Errors went down from a Round 1 average of 1.75 to a Round 5 average of 0.6.
Again, the grandparent edge cases are represented as the anomalous spike in errors in the center of the graph.

The high completion rate coupled with the decreasing number of errors over time proved the prototype was effective. Moreover, the effectiveness of the prototype increased over time with each new iteration.
Efficiency
Testing for time to complete task

Completion time went down from a Round 1 average of 77.3 seconds to a Round 5 average of 55.4 seconds.
Interestingly, the parents who tested the prototype had some of the fastest completion times in the study. Perhaps this is a foreshadowing of the efficiency of the future parent-facing app?
The downward trend in the number of seconds to complete the task proved the prototype was efficient. Furthermore, the efficiency of the prototype increased over time with each new iteration.
Satisfaction
Testing using SUS and SEQ scores
SUS scores went up from a Round 1 average of 80.9 to a Round 5 average of 96.5.
Predictably, we see the grandparent edge cases at the lowest point dipping below 50.

In the presentation to the client, I made sure to explain the SUS scale and dispel its inherent 0-to-100 confusion. I did research to understand the SUS scale and how to properly interpret the score. For example, anything below 50 is terrible, the benchmark for passable is actually 68, and good scores start at about 80.3.
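For readers unfamiliar with how those 0-to-100 numbers arise: a SUS score is computed from ten 5-point Likert items, where odd-numbered (positively worded) items contribute response − 1, even-numbered (negatively worded) items contribute 5 − response, and the sum is scaled by 2.5. A minimal sketch of that standard scoring:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions (0-40) are scaled by 2.5 onto 0-100.
    """
    assert len(responses) == 10, "SUS uses exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Best possible answers: strongly agree (5) on positive items,
# strongly disagree (1) on negative items.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

The alternating item polarity is also why a raw SUS number can't be read as a percentage, which is exactly the confusion the benchmarks above (50 terrible, 68 passable, ~80.3 good) help dispel.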

SEQ scores went up from a Round 1 average of 4.4 to a Round 5 average of 4.8.
Again, we see the grandparents in the dip in the graph, but what would account for the second dip?
Thanks to copious documentation and data collection during usability testing, I could point to the exact reason for this fall in satisfaction—a feature location change.
Testers specifically and consistently struggled with the "Add" button positioned in the upper right part of the screen.

Interaction Design by Shoshanna Thomas-McCue
Why use two separate metrics to measure satisfaction? Wouldn't the SUS be enough? I was casting a wide net to see if the SEQ would provide any additional insight that the SUS did not. It paid off, because the SUS scores did not give as clear an indication about the "Add" button.

With metrics trending upward for satisfaction and downward for errors and time, Sally would likely find the optimal path in the app efficient and effective, with high satisfaction.
Overall, the metrics and data supported the prototype design with each iteration building upon user insight and therefore improving the prototype's user experience.

We presented our project to the client at the end of our sprint and suggested next steps include building out the calendaring and messaging features to further address Sally’s pain points, needs, and goals.
The client asked what direction he should take for future research. I explained that parents and the edge cases of nondigital natives were two paths he could take, each needing robust research in its own right and worthy of a separate project.
4. Documenting the process and why
I created a research plan which I updated and changed throughout the research process depending on need.
Download my research plan (PDF, 71KB).
I also combined all my analysis in a research findings report for the client. In deciding how to present the report, I kept in mind I wanted to increase the chance of someone wanting to read it and make it high enough quality to show to potential investors.
Download my research findings report (PDF, 2MB).
I organized my process into generative research and evaluative research and collected documentation along the way.
Generative
- Interview scripts
- Interview notes
- Client meeting notes
- Research synthesis

Evaluative
- Testing script
- Testing notes (qualitative)
- Raw data (quantitative)
- Graphs
Why was documentation important?
In the fast pace of the 2-week sprint, the team did not have the luxury of verbally explaining and discussing every item or question. We relied heavily on our shared drive to update each other on new information and ideas. For example, I recognized that raw data, including interview notes and spreadsheets, would not be helpful to my interaction designer. What she needed from me was a synthesis of that data, in an easy-to-read, organized format.
I also recognized that if I didn’t document my data synthesis, when I left the project I would take all that knowledge with me. If the client needed it at a later date, he would have to reinvest time and resources for another researcher to re-synthesize the raw data.
Also, although the client did not specifically request a findings report as a deliverable, I anticipated that having a clear and readable report would be helpful, especially if he needed numbers to present to potential investors.
What did I learn?
I love the complexity of internal vs. external facing systems. Not only did I need to learn about the childcare provider as the user, but I had to take into consideration their consumer, the parents.
The design process, including research, is messy and nonlinear.
Example 1:
Originally, the client was going to share the research that Weekdays had already done, including three established personas. However, due to legal reasons, we did not have access to that information, so I had to adapt. I learned that even in a short time frame, I could provide my team with robust research to drive and inform their design decisions.
Example 2:
Timelines are not always linear, and I had to make judgment calls throughout the process by balancing priorities. For example, when we received information about the edge cases, we were already one week into the 2-week sprint. We had already established our proto-persona through research, and there wasn’t time to establish a second persona. But the edge-case information our client provided was important enough that I decided to get what insights we could during the usability testing phase.