Build Experiments Based on Your Hypothesized Assumptions
by Saad Kamal, Product Manager for AWH
We all have assumptions, and that’s ok. Your assumptions have likely saved you from unpleasant circumstances in the past. My correct assumption that my wife wouldn’t want an Instant Pot for our anniversary probably did me more good than harm, but my assumptions have hurt me too. My constant assumption that I’ll be miserable at a social gathering has been proved wrong, again and again.
In the same way, your assumptions have either promoted or limited your actions to some degree. As we mature, we mentally catalog and codify our sensory inputs (and the chemical responses to those inputs) to form experiential patterns of outcome, and that’s how assumptions are born. The funny thing is that an experience doesn’t need to be a direct one to form your assumption biases. Something you read, heard, or saw happening to someone else can form an assumed prediction about the experience. That has great value in the natural world. If my friendly, neighboring tribe told me they get pillaged by a third tribe from across the river every dry season, I’d be a little wary of that band across the river.
Was it David Hume who said that all true human knowledge is derived solely from experience? One interpretation people echo is, ‘I can imagine, even empathize with, being hit by a bus, but I don’t truly have the knowledge of being hit by a bus until I’m actually hit by a bus.’ That’s a high bar for us as it relates to our topic of experimenting with the validity of our assumptions. I reference this because, if you’re like me, there’s a mental block when you start to validate or invalidate assumed knowledge based on others’ observed experience: you may never really know the truth of the experience unless you go through it yourself. Even then, it’ll likely never be the same exact experience, because your personal chaos, chemistry, and conditioning are not the same as anyone else’s.
There is no clean answer here. Rather, there’s a mix of social conditions, economic factors, our biochemical risk-to-benefit triggers, ego, and experiences, to name a few pieces of the equation. What is the natural world telling me when I’m afraid to pursue potential truths? Maybe that truths are pretty rare, and I can’t cope with knowing that? But we also know the world isn’t black and white. Something that works for condition set A may be totally incompatible with condition set B. We also know that our knowledge isn’t absolute. We learn countless things on a daily basis that question our understanding of human predictability.
That being said, there are still learnings to be gained from perceived and/or observed ancillary experiences. If what Hume said is true, there may be greater value in learnings than in perceived knowledge, since perceived knowledge isn’t the truth anyway.
We have known and unknown assumptions and biases, so we can’t truly predict outcomes as well as we’d like to. What we can do, however, is run experiments to discern how hot (or how cold) our known, predicted assumptions are in the real world with real people.
In the last post, we talked a little about building a hypothesis, because we have questions. From those hypotheses, we can set up experiments that either validate or invalidate (and refine) our assumed reasons behind a phenomenon. I want to emphasize here that we’re not looking for a ‘pass’ or ‘fail’ from our experiments to validate or invalidate our hypothesis. Nor are we looking for truths. We’re looking to learn; that’s why we experiment.
The simple answer to ‘why run experiments?’ is that we don’t actually know how someone will react to something new in front of them. Furthermore, time, budgets, and resources are limited, and we want to build something that actually has value in our problem space.
Experiments are a leaner path to understanding than fully built functionality. This isn’t to say that we should stop experimenting once our initial hypothesis produces enough insight for us to make the call to move forward with one path or another. Rather, that insight points us in the right direction so we can optimize the use of the limited resources in front of us.
Preparing to experiment
We experiment because we want answers to our assumptions. We have assumptions because we’ve talked to our potential target audience about the problem space our product is trying to solve.
A well-structured experiment starts with the audience in mind first.
You’ve come across personas before, and they come in all shapes and sizes. For those who are new to the wacky world of personas, in over-simplified terms, a persona is a top layer of understanding of a type of user/customer/audience. Think of it as a profile of a person who has the rolled-up characteristics of actual people who fit that classification. I think of my personas as actual people.
There are countless templates for how to build a persona, and persona building deserves its own dedicated post. For our purposes here, I’ll use the classification that has worked for me. For me, a persona consists of the following:
- Name: The actual name of the persona (e.g., Linda Taylor)
- Picture: I need a face for the persona; it helps everyone on the team empathize with the people we build things for.
- Demographics: Age, Location, Education, Income…
- Background: A paragraph or two about the persona’s background against the product or problem space.
- Hopes/Desires: Specifically, around the problem space and then around the ‘jobs’ that need to be done.
- Pain points/Frustrations: Again, around the problem space and then around the jobs that this persona needs to do in that problem space.
- Frictions to Change: Most people have the same frictions (time, energy, money), but not necessarily for the same reasons. The way I define friction is, ‘If my product achieves all of this person’s hopes and desires and alleviates all of their pain points and frustrations, what still gets in the way of them adopting the product solution?’
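One way to keep this template consistent across a team is to encode it as a lightweight data structure. This is just an illustrative sketch of the classification above; the field names and the example values (Linda Taylor, her demographics, and so on) are hypothetical, not taken from any real persona.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A lightweight persona record mirroring the template above."""
    name: str                # actual name, e.g. "Linda Taylor"
    picture: str             # path or URL to a face photo
    demographics: dict       # age, location, education, income...
    background: str          # a paragraph or two against the problem space
    hopes: list = field(default_factory=list)        # desires around the jobs to be done
    pain_points: list = field(default_factory=list)  # frustrations in the problem space
    frictions: list = field(default_factory=list)    # time, energy, money...

# Hypothetical example persona
linda = Persona(
    name="Linda Taylor",
    picture="personas/linda.jpg",
    demographics={"age": 34, "location": "Columbus, OH", "income": "$65k"},
    background="Plans one international trip per year on a fixed budget.",
    hopes=["Know the full cost of a trip up front"],
    pain_points=["Surprise visa fees at the airport"],
    frictions=["Time to research fees per destination"],
)
```

A structured record like this makes the commonalities across interviewees easy to aggregate and compare, which is exactly what the next step below relies on.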
From the aggregation of commonalities that compose your persona’s hopes, frustrations, and frictions, you have questions: the hows and whys behind what you heard. Let’s say, for example, you’re working on a product that hopes to solve the problem of budgeting for a vacation. Your interviews bring to light that a common pain point is that people have a hard time accounting for visa fees for foreign travel. You can ask, ‘How can Jim (persona) account for the ancillary fees for getting a visa in Ecuador based on his Philippine passport?’ And thus, the spark for the experiment begins.
From the questions that spawn out of your persona’s desires, pain points, and frictions, the next step is to come up with some hypotheses as to why those are on your list and whether they can be explained. I put ‘explained’ in quotes because what you’re really doing is asking more questions rather than explaining the phenomena. I’m partial to the “What if” hypothesis; you can read about that in the last post. What if Jim knew how much his visa fee would cost for his international trip before arriving at the airport?
Now that we have a hypothesis in hand, we can start setting up experiments whose results feed back into our assumed hypothesis. This is where we can start thinking about implementation approaches that will alleviate Jim’s woes.
1) Would emailing Jim a blanket list of all visitor visa fees for non-residents of the destination country holding Philippine nationality provide Jim more assurance when he budgeted for his vacation?
2) Could allowing Jim to input his total desired budget ahead of planning for a vacation, accounting for visa fees and only returning results within his dollar limit help Jim better account for visa fees?
3) Could listing the expected visa fee when Jim selects his arrival destination while looking for flights allow Jim to budget better for his vacation?
All good questions, and all still assumptions, which are the foundation of potential experiments we can run.
There are countless ways to tackle testing your experiments, but the most important thing to understand is that you need willing participants to take part in them: the base of your targeted audience, those who match the persona of your experiment.
What to expect
Whether you set up A/B tests, multivariate tests, in-person usability and experience tests, or any other method will depend heavily on the resources you have on hand and what you’re actually experimenting with in the first place.
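To show how cheap the evaluation side of an A/B test can be, here is a minimal sketch of comparing two variants with a two-proportion z-test. The scenario and every number in it are made up: pretend variant A is the current flight search and variant B lists the expected visa fee at destination selection, as in option 3 above.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 120/1000 users completed a booking without the
# visa-fee listing, 152/1000 with it.
z = two_proportion_z(conv_a=120, n_a=1000, conv_b=152, n_b=1000)
print(round(z, 2))  # ≈ 2.09; |z| > 1.96 is roughly significant at the 5% level
```

The point isn’t the statistics; it’s that defining up front what counts as a meaningful difference is part of defining what ‘done’ means for the experiment.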
Regardless of how you test your experiment, there are some common considerations to keep in mind as you go through the process:
- One size rarely fits all. You will likely never find a solution that works for all your personas. You will likely never even find a solution that works on all actual users who are classified within a single persona. That’s ok, use your judgment for making the best call based on the results of your experiments and your product needs.
- Timebox your experiments. Experiments inherently have no definite end. Whether it’s time-based, or metric-based, it is up to you to be disciplined enough to define what ‘done’ means for the purposes of your experiments.
- Experiments beget experiments. It’s likely, even probable, that your experiments won’t answer the initial questions you started with. That’s ok. What you’ll likely notice instead is that you’ve opened up a Pandora’s box of question marks. Cool. If that’s the case, you’re gaining a deeper understanding of the problem.
- Just because the findings of a small experiment have made their way through to production, doesn’t mean that you can’t gain more insight from the broader audience. Keep learning from your implementations, keep asking your target audience and let that guide you to your next set of experiments.
- The second you think you know the answer, challenge yourself to test those assumptions. Be your ego’s biggest critic. You may be right this one time, but you’re not going to be right all the time.
- Value learning over knowing. This isn’t confined to the world of product experimentation; it applies to the entire journey of your product’s life. Don’t assume; the people who use your product will surprise you in both ugly and beautiful ways.
Not sure where to go from here? Do you have personas defined or hypotheses assumed? Do you have assumptions that need testing? Feel free to reach out and connect to set up a time to chat.