
How to Generate and Validate Product Hypotheses

What is a product hypothesis?

A hypothesis is a testable statement that predicts the relationship between two or more variables. In product development, we generate hypotheses to validate assumptions about customer behavior, market needs, or the potential impact of product changes. These experimental efforts help us refine the user experience and get closer to finding a product-market fit.

Product hypotheses are a key element of data-driven product development and decision-making. Testing them enables us to solve problems more efficiently and remove our own biases from the solutions we put forward.

Here’s an example: ‘If we improve the page load speed on our website (variable 1), then we will increase the number of signups by 15% (variable 2).’ So if we improve the page load speed and the number of signups increases, our hypothesis is proven. If the number does not increase significantly (or at all), our hypothesis is disproven.
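To make the pass/fail check concrete, here’s a minimal sketch of evaluating that hypothesis once the data is in (the signup counts and time periods below are made up for illustration):

```python
# A sketch of checking the signup hypothesis above. The 15% threshold
# comes from the hypothesis statement; the counts are hypothetical.

def lift(before: int, after: int) -> float:
    """Relative change in a metric, e.g. 0.15 == +15%."""
    return (after - before) / before

signups_before = 1000   # signups in the period before the speed improvement
signups_after = 1180    # signups in the comparable period after

observed = lift(signups_before, signups_after)
print(f"observed lift: {observed:.1%}")
print("hypothesis supported:", observed >= 0.15)
```

With these numbers the observed lift is 18%, above the predicted 15%, so the hypothesis would count as proven.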

In general, product managers are constantly creating and testing hypotheses. But in the context of new product development, hypothesis generation/testing occurs during the validation stage, right after idea screening.

Now before we go any further, let’s get one thing straight: What’s the difference between an idea and a hypothesis?

Idea vs hypothesis

Innovation expert Michael Schrage makes this distinction between hypotheses and ideas – unlike an idea, a hypothesis comes with built-in accountability. “But what’s the accountability for a good idea?” Schrage asks. “The fact that a lot of people think it’s a good idea? That’s a popularity contest.” So, not only should a hypothesis be tested, but by its very nature, it can be tested.

At Railsware, we’ve built our product development services on the careful selection, prioritization, and validation of ideas. Here’s how we distinguish between ideas and hypotheses:

Idea: A creative suggestion about how we might exploit a gap in the market, add value to an existing product, or bring attention to our product. Crucially, an idea is just a thought. It can form the basis of a hypothesis but it is not necessarily expected to be proven or disproven.

  • We should get an interview with the CEO of our company published on TechCrunch.
  • Why don’t we redesign our website?
  • The Coupler.io team should create video tutorials on how to export data from different apps, and publish them on YouTube.
  • Why not add a new ‘email templates’ feature to our Mailtrap product?

Hypothesis: A way of framing an idea or assumption so that it is testable, specific, and aligns with our wider product/team/organizational goals.

Examples: 

  • If we add a new ‘email templates’ feature to Mailtrap, we’ll see an increase in active usage of our email-sending API.
  • Creating relevant video tutorials and uploading them to YouTube will lead to an increase in Coupler.io signups.
  • If we publish an interview with our CEO on TechCrunch, 500 people will visit our website and 10 of them will install our product.

Now, it’s worth mentioning that not all hypotheses require testing. Sometimes, the process of creating hypotheses is just an exercise in critical thinking. And the simple act of analyzing your statement tells you whether you should run an experiment or not. Remember: testing isn’t mandatory, but your hypotheses should always be inherently testable.

Let’s consider the TechCrunch article example again. In that hypothesis, we expect 500 readers to visit our product website, with a 2% conversion rate of those unique visitors to product users, i.e. 10 people. But is that marginal increase worth all the effort? Conducting an interview with our CEO, creating the content, and collaborating with the TechCrunch content team – all of these tasks take time (and money) to execute. And by formulating that hypothesis, we can clearly see that in this case, the costs (effort) outweigh the benefits. So, no need to test it.

In a similar vein, a hypothesis statement can be a tool to prioritize your activities based on impact. We typically use the following criteria:

  • The quality of impact
  • The size of the impact
  • The probability of impact

This lets us organize our efforts according to their potential outcomes – not the coolness of the idea, its popularity among the team, etc.
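As a rough illustration of that prioritization, here’s a sketch that scores hypotheses on the three criteria. The scales, weights, and scores are assumed for the example, not an official formula:

```python
# Rank hypotheses by a simple expected-impact score: quality and size
# on a 1-5 scale, probability as 0-1. All values here are illustrative.

hypotheses = [
    # (statement, quality 1-5, size 1-5, probability 0-1)
    ("Add email templates to Mailtrap", 4, 4, 0.6),
    ("Publish CEO interview on TechCrunch", 2, 1, 0.3),
    ("Create Coupler.io video tutorials", 3, 3, 0.5),
]

def impact_score(quality: int, size: int, probability: float) -> float:
    """Multiply the three criteria into one comparable number."""
    return quality * size * probability

ranked = sorted(hypotheses, key=lambda h: impact_score(*h[1:]), reverse=True)
for statement, *criteria in ranked:
    print(f"{impact_score(*criteria):5.2f}  {statement}")
```

The point is simply that each hypothesis gets a comparable number, so the ordering reflects expected outcomes rather than the idea’s popularity.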

Now that we’ve established what a product hypothesis is, let’s discuss how to create one.

Start with a problem statement

Before you jump into product hypothesis generation, we highly recommend formulating a problem statement. This is a short, concise description of the issue you are trying to solve. It helps teams stay on track as they formalize the hypothesis and design the product experiments. It can also be shared with stakeholders to ensure that everyone is on the same page.

The statement can be worded however you like, as long as it’s actionable, specific, and based on data-driven insights or research. It should clearly outline the problem or opportunity you want to address.

Here’s an example: Our bounce rate is high (more than 90%) and we are struggling to convert website visitors into actual users. How might we improve site performance to boost our conversion rate?

How to generate product hypotheses

Now let’s explore some common, everyday scenarios that lead to product hypothesis generation. For our teams here at Railsware, it’s when:

  • There’s a problem with an unclear root cause, e.g. a sudden drop in one part of the onboarding funnel. We identify these issues by checking our product metrics or reviewing customer complaints.
  • We are running ideation sessions on how to reach our goals (increase MRR, increase the number of users invited to an account, etc.)
  • We are exploring growth opportunities, e.g. changing a pricing plan, making product improvements, breaking into a new market.
  • We receive customer feedback. For example, some users have complained about difficulties setting up a workspace within the product. So, we build a hypothesis on how to help them with the setup.

BRIDGeS framework for ideation

When we are tackling a complex problem or looking for ways to grow the product, our teams use BRIDGeS – a robust decision-making and ideation framework. BRIDGeS makes our product discovery sessions more efficient. It lets us dive deep into the context of our problem so that we can develop targeted solutions worthy of testing.

Between two and eight stakeholders take part in a BRIDGeS session. The ideation sessions are usually led by a product manager and can include other subject matter experts such as developers, designers, data analysts, or marketing specialists. You can use a virtual whiteboard such as FigJam or Miro (see our Figma template) to record each colored note.

In the first half of a BRIDGeS session, participants examine the Benefits, Risks, Issues, and Goals of their subject in the ‘Problem Space.’ A subject is anything that is being described or dealt with; for instance, Coupler.io’s growth opportunities. Benefits are the value that a future solution can bring, Risks are potential issues they might face, Issues are their existing problems, and Goals are what the subject hopes to gain from the future solution. Each descriptor should have a designated color.

After we have broken down the problem using each of these descriptors, we move into the Solution Space. This is where we develop solution variations based on all of the benefits/risks/issues identified in the Problem Space (see the Uber case study for an in-depth example).

In the Solution Space, we start prioritizing those solutions and deciding which ones are worthy of further exploration outside of the framework – via product hypothesis formulation and testing, for example. At the very least, after the session, we will have a list of epics and nested tasks ready to add to our product roadmap.

How to write a product hypothesis statement

Across organizations, product hypothesis statements might vary in their subject, tone, and precise wording. But some elements never change. As we mentioned earlier, a hypothesis statement must always have two or more variables and a connecting factor.

1. Identify variables

Since these components form the bulk of a hypothesis statement, let’s start with a brief definition.

First of all, variables in a hypothesis statement can be split into two camps: dependent and independent. Without getting too theoretical, we can describe the independent variable as the cause, and the dependent variable as the effect. So in the Mailtrap example we mentioned earlier, the ‘add email templates feature’ is the cause, i.e. the element we want to manipulate. Meanwhile, ‘increased usage of email sending API’ is the effect, i.e. the element we will observe.

Independent variables can be any change you plan to make to your product. For example, tweaking some landing page copy, adding a chatbot to the homepage, or enhancing the search bar filter functionality.

Dependent variables are usually metrics. Here are a few that we often test in product development:

  • Number of sign-ups
  • Number of purchases
  • Activation rate (activation signals differ from product to product)
  • Number of specific plans purchased
  • Feature usage (API activation, for example)
  • Number of active users

Bear in mind that your concept or desired change can be measured with different metrics. Make sure that your variables are well-defined, and be deliberate in how you measure your concepts so that there’s no room for misinterpretation or ambiguity.

For example, in the hypothesis ‘Users drop off because they find it hard to set up a project,’ the variables are poorly defined. Phrases like ‘drop off’ and ‘hard to set up’ are too vague. A much better way of saying it would be: ‘If project automation rules are pre-defined (an email sequence to the responsible person, scheduled ticket creation), we’ll see a decrease in churn.’ In this example, it’s clear which dependent variable has been chosen and why.
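One way to keep variables explicit is to write each hypothesis down in a small, uniform structure. This sketch uses illustrative field names, not a prescribed template:

```python
# A sketch of recording a hypothesis with its variables spelled out,
# so nothing stays vague. Field names here are illustrative.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    independent_variable: str   # the change we make (the cause)
    dependent_variable: str     # the metric we observe (the effect)
    expected_effect: str        # predicted direction of the change

churn_hypothesis = Hypothesis(
    independent_variable="pre-defined project automation rules",
    dependent_variable="monthly churn rate",
    expected_effect="decrease",
)
print(churn_hypothesis)
```

Forcing every hypothesis through the same fields makes vague phrasing like ‘users drop off’ stand out immediately.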

And remember, when product managers focus on delighting users and building something of value, it’s easier to market and monetize it. That’s why at Railsware, our product hypotheses often focus on how to increase the usage of a feature or product. If users love our product(s) and know how to leverage its benefits, we can spend less time worrying about how to improve conversion rates or actively grow our revenue, and more time enhancing the user experience and nurturing our audience.

2. Make the connection

The relationship between variables should be clear and logical. If it’s not, then it doesn’t matter how well-chosen your variables are – your test results won’t be reliable.

To demonstrate this point, let’s explore a previous example again: page load speed and signups.

Through prior research, you might already know that conversion rates are 3x higher for sites that load in 1 second compared to sites that take 5 seconds to load. Since there appears to be a strong connection between load speed and signups in general, you might want to see if this is also true for your product.

Here are some common pitfalls to avoid when defining the relationship between two or more variables:

Relationship is weak. Let’s say you hypothesize that an increase in website traffic will lead to an increase in sign-ups. This is a weak connection since website visitors aren’t necessarily motivated to use your product; there are more steps involved. A better example is ‘If we change the CTA on the pricing page, then the number of signups will increase.’ This connection is much stronger and more direct.

Relationship is far-fetched. This often happens when one of the variables is founded on a vanity metric. For example, increasing the number of social media subscribers will lead to an increase in sign-ups. However, there’s no particular reason why a social media follower would be interested in using your product. Oftentimes, it’s simply your social media content that appeals to them (and your audience isn’t interested in the product itself).

Variables are co-dependent. Variables should always be isolated from one another. Let’s say we removed the option “Register with Google” from our app. In this case, we can expect fewer users with Google Workspace accounts to register. Obviously, that’s because there’s a direct dependency between the variables (no registration with Google → no users with Google Workspace accounts).

3. Set validation criteria

First, build some confirmation criteria into your statement. Think in terms of percentages (e.g. increase/decrease by 5%) and choose a relevant product metric to track, e.g. activation rate if your hypothesis relates to onboarding. Consider that you don’t always have to hit the bullseye for your hypothesis to be considered valid. Perhaps a 3% increase is just as acceptable as a 5% one. And it still proves that a connection between your variables exists.

Secondly, you should also make sure that your hypothesis statement is realistic . Let’s say you have a hypothesis that ‘If we show users a banner with our new feature, then feature usage will increase by 10%.’ A few questions to ask yourself are: Is 10% a reasonable increase, based on your current feature usage data? Do you have the resources to create the tests (experimenting with multiple variations, distributing on different channels: in-app, emails, blog posts)?

Null hypothesis and alternative hypothesis

In statistical research, there are two ways of stating a hypothesis: null or alternative. But this scientific method has its place in hypothesis-driven development too…

Alternative hypothesis: A statement that you intend to prove as being true by running an experiment and analyzing the results. Hint: it’s the same as the other hypothesis examples we’ve described so far.

Example: If we change the landing page copy, then the number of signups will increase.

Null hypothesis: A statement you want to disprove by running an experiment and analyzing the results. It predicts that your new feature or change to the user experience will not have the desired effect.

Example: The number of signups will not increase if we make a change to the landing page copy.

What’s the point? Well, let’s consider the phrase ‘innocent until proven guilty’ as a version of a null hypothesis. We don’t assume that there is any relationship between the ‘defendant’ and the ‘crime’ until we have proof. So, we run a test, gather data, and analyze our findings — which gives us enough proof to reject the null hypothesis and validate the alternative. All of this helps us to have more confidence in our results.
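For readers who want the statistical version of ‘enough proof to reject the null hypothesis,’ here’s a minimal two-proportion z-test using only Python’s standard library. The conversion counts are hypothetical:

```python
# Two-sided z-test for H0: "the two conversion rates are equal".
# A small p-value is evidence against the null hypothesis.

from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: old landing copy converted 100/2000 visitors,
# new copy converted 140/2000.
z, p = two_proportion_z_test(100, 2000, 140, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
print("reject null hypothesis:", p < 0.05)
```

With these made-up numbers the p-value falls below 0.05, so we would reject the null and treat the copy change as having a real effect.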

Now that you have generated your hypotheses, and created statements, it’s time to prepare your list for testing.

Prioritizing hypotheses for testing

Not all hypotheses are created equal. Some will be essential to your immediate goal of growing the product e.g. adding a new data destination for Coupler.io. Others will be based on nice-to-haves or small fixes e.g. updating graphics on the website homepage.

Prioritization helps us focus on the most impactful solutions as we are building a product roadmap or narrowing down the backlog. To determine which hypotheses are the most critical, we use the MoSCoW framework. It allows us to assign a level of urgency and importance to each product hypothesis so we can filter the best 3-5 for testing.

MoSCoW is an acronym for Must-have, Should-have, Could-have, and Won’t-have. Here’s a breakdown:

  • Must-have – hypotheses that must be tested, because they are strongly linked to our immediate project goals.
  • Should-have – hypotheses that are closely related to our immediate project goals, but aren’t the top priority.
  • Could-have – hypotheses based on nice-to-haves that can wait until later for testing.
  • Won’t-have – low-priority hypotheses that we won’t test for now, though we may revisit them later when we have more time.
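A quick sketch of how that filtering might look in practice. The hypotheses and their MoSCoW labels are invented for the example and would normally be assigned by the team beforehand:

```python
# Filter out won't-haves, order the rest by MoSCoW priority, and
# shortlist the top entries for testing. All items are illustrative.

backlog = [
    ("Add a new data destination for Coupler.io", "must"),
    ("Update graphics on the website homepage", "could"),
    ("Rework the onboarding email sequence", "should"),
    ("Translate the docs into French", "wont"),
    ("Improve the search bar filters", "should"),
]

priority = {"must": 0, "should": 1, "could": 2, "wont": 3}

testable = [h for h in backlog if h[1] != "wont"]
testable.sort(key=lambda h: priority[h[1]])
shortlist = testable[:3]  # the best 3-5 go forward for testing

for statement, label in shortlist:
    print(f"{label}: {statement}")
```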

How to test product hypotheses

Once you have selected a hypothesis, it’s time to test it. This will involve running one or more product experiments in order to check the validity of your claim.

The tricky part is deciding what type of experiment to run, and how many. Ultimately, this all depends on the subject of your hypothesis – whether it’s a simple copy change or a whole new feature. For instance, it’s not necessary to create a clickable prototype for a landing page redesign. In that case, a user-wide update would do.

On that note, here are some of the approaches we take to hypothesis testing at Railsware:

A/B testing

A/B or split testing involves creating two or more different versions of a webpage/feature/functionality and collecting information about how users respond to them.

Let’s say you wanted to validate a hypothesis about the placement of a search bar on your application homepage. You could design an A/B test that shows two different versions of that search bar’s placement to your users (who have been split equally into two camps: a control group and a variant group). Then, you would choose the best option based on user data. A/B tests are suitable for testing responses to user experience changes, especially if you have more than one solution to test.
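In code, the comparison at the end of such a test might look like this toy sketch (the counts are made up, and a real decision should also check statistical significance rather than just picking the higher rate):

```python
# Compare conversion rates for the two search bar placements and
# pick the better-performing group. Numbers are hypothetical.

groups = {
    "control (top placement)": {"users": 5000, "conversions": 240},
    "variant (centered placement)": {"users": 5000, "conversions": 285},
}

rates = {name: g["conversions"] / g["users"] for name, g in groups.items()}
winner = max(rates, key=rates.get)

for name, rate in rates.items():
    print(f"{name}: {rate:.2%}")
print("winner:", winner)
```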

Prototyping

When it comes to testing a new product design, prototyping is the method of choice for many Lean startups and organizations. It’s a cost-effective way of collecting feedback from users, fast, and it’s possible to create prototypes of individual features too. You may take this approach to hypothesis testing if you are working on rolling out a significant new change, e.g. adding a brand-new feature, redesigning some aspect of the user flow, etc. To control costs at this point in the new product development process, choose the right tools — think Figma for clickable walkthroughs or no-code platforms like Bubble.

Deliveroo feature prototype example

Let’s look at how feature prototyping worked for the food delivery app Deliveroo, when their product team wanted to ‘explore personalized recommendations, better filtering and improved search’ in 2018. To begin, they created a prototype of the customer discovery feature using the web design application Framer.

One of the most important aspects of this feature prototype was that it contained live data — real restaurants, real locations. For test users, this made the hypothetical feature feel more authentic. They were seeing listings and recommendations for real restaurants in their area, which helped immerse them in the user experience, and generate more honest and specific feedback. Deliveroo was then able to implement this feedback in subsequent iterations.

Asking your users

Interviewing customers is an excellent way to validate product hypotheses. It’s a form of qualitative testing that, in our experience, produces better insights than user surveys or general user research. Sessions are typically run by product managers and involve asking in-depth interview questions to one customer at a time. They can be conducted in person or online (through a virtual call center, for instance) and last anywhere from 30 minutes to 1 hour.

Although CustDev interviews may require more effort to execute than other tests (the process of finding participants, devising questions, organizing interviews, and honing interview skills can be time-consuming), it’s still a highly rewarding approach. You can quickly validate assumptions by asking customers about their pain points, concerns, habits, processes they follow, and analyzing how your solution fits into all of that.

Wizard of Oz

The Wizard of Oz approach is suitable for gauging user interest in new features or functionalities. It’s done by creating a prototype of a fake or future feature and monitoring how your customers or test users interact with it.

For example, you might have a hypothesis that your number of active users will increase by 15% if you introduce a new feature. So, you design a new bare-bones page or simple button that invites users to access it. But when they click on the button, a pop-up appears with a message such as ‘coming soon.’

By measuring the frequency of those clicks, you could learn a lot about the demand for this new feature/functionality. However, while these tests can deliver fast results, they carry the risk of backfiring. Some customers may find fake features misleading, making them less likely to engage with your product in the future.

User-wide updates

One of the speediest ways to test your hypothesis is by rolling out an update for all users. It can take less time and effort to set up than other tests (depending on how big of an update it is). But due to the risk involved, you should stick to only performing these kinds of tests on small-scale hypotheses. Our teams only take this approach when we are almost certain that our hypothesis is valid.

For example, we once had an assumption that the name of one of Mailtrap’s entities was the root cause of a low activation rate. Being an active Mailtrap customer meant that you were regularly sending test emails to a place called ‘Demo Inbox.’ We hypothesized that the name was confusing (the word ‘demo’ implied it was not the main inbox) and this was preventing new users from engaging with their accounts. So, we updated the page, changed the name to ‘My Inbox’ and added some ‘to-do’ steps for new users. We saw an increase in our activation rate almost immediately, validating our hypothesis.

Feature flags

Creating feature flags involves only releasing a new feature to a particular subset or small percentage of users. These features come with a built-in kill switch: a piece of code that can be executed or skipped, depending on who’s interacting with your product.

Since you are only showing this new feature to a selected group, feature flags are an especially low-risk method of testing your product hypothesis (compared to Wizard of Oz, for example, where you have much less control). However, they are also a little bit more complex to execute than the others — you will need to have an actual coded product for starters, as well as some technical knowledge, in order to add the modifiers (only when…) to your new coded feature.
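A minimal sketch of such a modifier, assuming a simple hash-based percentage rollout rather than any specific feature-flag library (function and feature names are illustrative):

```python
# Release a new feature to ~N% of users, deterministically per user ID:
# the same user always lands in the same bucket across sessions.

import hashlib

def flag_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    """Hash user+feature into a 0-99 bucket and compare to the rollout size."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Show the hypothetical new feature to ~10% of users; everyone else
# takes the existing code path (the built-in "kill switch").
if flag_enabled("email-templates", user_id="user-42", rollout_percent=10):
    print("render new email templates UI")
else:
    print("render current UI")
```

Setting `rollout_percent` to 0 acts as the kill switch; raising it gradually widens the experiment.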

Let’s revisit the landing page copy example again, this time in the context of testing.

So, for the hypothesis ‘If we change the landing page copy, then the number of signups will increase,’ there are several options for experimentation. We could share the copy with a small sample of our users, or even release a user-wide update. But A/B testing is probably the best fit for this task. Depending on our budget and goal, we could test several different pieces of copy, such as:

  • The current landing page copy
  • Copy that we paid a marketing agency 10 grand for
  • Generic copy we wrote ourselves, or removing most of the original copy – just to see how making even a small change might affect our numbers.

Remember, every hypothesis test must have a reasonable endpoint. The exact length of the test will depend on the type of feature/functionality you are testing, the size of your user base, and how much data you need to gather. Just make sure that the experiment running time matches the hypothesis scope. For instance, there is no need to spend 8 weeks experimenting with a piece of landing page copy. That timeline is more appropriate for say, a Wizard of Oz feature.
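To pick a reasonable endpoint, you can work backwards from the sample size the test needs. Here’s a rough sketch using the standard normal-approximation formula at 95% confidence and 80% power; the baseline and target conversion rates are illustrative:

```python
# Approximate visitors needed per variant to detect a lift from a 5%
# to a 6% conversion rate. z_alpha=1.96 (95% confidence, two-sided),
# z_beta=0.84 (80% power); all rates here are made-up examples.

from math import ceil

def sample_size_per_variant(p1: float, p2: float,
                            z_alpha: float = 1.96,
                            z_beta: float = 0.84) -> int:
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)
    return ceil(numerator / (p2 - p1) ** 2)

n = sample_size_per_variant(0.05, 0.06)
print(f"~{n} visitors per variant")
# Divide by your daily traffic per variant to get a rough test duration.
```

Smaller expected lifts blow the required sample up quickly, which is exactly why an 8-week experiment on a copy tweak is usually overkill for a high-traffic page but a tiny lift on a low-traffic one may never be detectable at all.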

Recording hypotheses statements and test results

Finally, it’s time to talk about where you will write down and keep track of your hypotheses. Creating a single source of truth will enable you to track all aspects of hypothesis generation and testing with ease.

At Railsware, our product managers create a document for each individual hypothesis, using tools such as Coda or Google Sheets. In that document, we record the hypothesis statement, as well as our plans, process, results, screenshots, product metrics, and assumptions.

We share this document with our team and stakeholders, to ensure transparency and invite feedback. It’s also a resource we can refer back to when we are discussing a new hypothesis — a place where we can quickly access information relating to a previous test.

Understanding test results and taking action

The other half of validating product hypotheses involves evaluating data and drawing reasonable conclusions based on what you find. We do so by analyzing our chosen product metric(s) and deciding whether there is enough data available to make a solid decision. If not, we may extend the test’s duration or run another one. Otherwise, we move forward. An experimental feature becomes a real feature, a chatbot gets implemented on the customer support page, and so on.

Something to keep in mind: the integrity of your data is tied to how well the test was executed, so here are a few points to consider when you are testing and analyzing results:

Gather and analyze data carefully. Ensure that your data is clean and up-to-date when running quantitative tests and tracking responses via analytics dashboards. If you are doing customer interviews, make sure to record the meetings (with consent) so that your notes will be as accurate as possible.

Conduct the right amount of product experiments. It can take more than one test to determine whether your hypothesis is valid or invalid. However, don’t waste too much time experimenting in the hopes of getting the result you want. Know when to accept the evidence and move on.

Choose the right audience segment. Don’t cast your net too wide. Be specific about who you want to collect data from prior to running the test. Otherwise, your test results will be misleading and you won’t learn anything new.

Watch out for bias. Avoid confirmation bias at all costs. Don’t make the mistake of including irrelevant data just because it bolsters your results. For example, if you are gathering data about how users are interacting with your product Monday-Friday, don’t include weekend data just because doing so would alter the data and ‘validate’ your hypothesis.

  • Not all failed hypotheses should be treated as losses. Even if you didn’t get the outcome you were hoping for, you may still have improved your product. Let’s say you implemented SSO authentication for premium users, but unfortunately, your free users didn’t end up switching to premium plans. In this case, you still added value to the product by streamlining the login process for paying users.
  • Yes, taking a hypothesis-driven approach to product development is important. But remember, you don’t have to test everything. Use common sense first. For example, if your website copy is confusing and doesn’t portray the value of the product, then you should still strive to replace it with better copy – regardless of how this affects your numbers in the short term.

Wrapping Up

The process of generating and validating product hypotheses is actually pretty straightforward once you’ve got the hang of it. All you need is a valid question or problem, a testable statement, and a method of validation. Sure, hypothesis-driven development requires more of a time commitment than just ‘giving it a go.’ But ultimately, it will help you tune the product to the wants and needs of your customers.

If you share our data-driven approach to product development and engineering, check out our services page to learn more about how we work with our clients!

How to Generate and Validate Product Hypotheses


Every product owner knows that it takes effort to build something that'll cater to user needs. You'll have to make many tough calls if you wish to grow the company and evolve the product so it delivers more value. But how do you decide what to change in the product, your marketing strategy, or the overall direction to succeed? And how do you make a product that truly resonates with your target audience?

There are many unknowns in business, so many fundamental decisions start from a simple "what if?". But they can't be based on guesses, as you need some proof to fill in the blanks reasonably.

Because there's no universal recipe for successfully building a product, teams collect data, do research, study the dynamics, and generate hypotheses according to the given facts. They then take corresponding actions to find out whether they were right or wrong, make conclusions, and most likely restart the process again.

On this page, we thoroughly inspect product hypotheses. We'll go over what they are, how to create hypothesis statements and validate them, and what goes after this step.

What Is a Hypothesis in Product Management?

A hypothesis in product development and product management is a statement or assumption about the product, planned feature, market, or customer (e.g., their needs, behavior, or expectations) that you can put to the test, evaluate, and base your further decisions on. This may, for instance, concern upcoming product changes as well as the impact they could have.

A hypothesis implies that there is limited knowledge. Hence, the teams need to undergo testing activities to validate their ideas and confirm whether they are true or false.

What Is a Product Hypothesis?

Hypotheses guide the product development process and may point at important findings to help build a better product that'll serve user needs. In essence, teams create hypothesis statements in an attempt to improve the offering, boost engagement, increase revenue, find product-market fit quicker, or for other business-related reasons.

It's sort of like an experiment with trial and error, yet it is data-driven and should be unbiased. This means that teams don't make assumptions out of the blue. Instead, they turn to the collected data, conducted market research, and factual information, which helps avoid completely missing the mark. The obtained results are then carefully analyzed and may influence decision-making.

Such experiments backed by data and analysis are an integral aspect of successful product development and allow startups or businesses to dodge costly startup mistakes.

When do teams create hypothesis statements and validate them? To some extent, hypothesis testing is an ongoing process to work on constantly. It may occur during various product development life cycle stages, from early phases like initiation to late ones like scaling.

In any event, the key here is learning how to generate hypothesis statements and validate them effectively. We'll go over this in more detail later on.

Idea vs. Hypothesis Compared

You might be wondering whether ideas and hypotheses are the same thing. Well, there are a few distinctions.

What's the difference between an idea and a hypothesis?

An idea is simply a suggested proposal. Say, a teammate comes up with something you can bring to life during a brainstorming session or pitches in a suggestion like "How about we shorten the checkout process?". You can jot down such ideas and then consider working on them if they'll truly make a difference and improve the product, strategy, or result in other business benefits. Ideas may thus be used as the hypothesis foundation when you decide to prove a concept.

A hypothesis is the next step, when an idea gets wrapped with specifics to become an assumption that may be tested. As such, you can refine the idea by adding details to it. The previously mentioned idea can be worded into a product hypothesis statement like: "The cart abandonment rate is high, and many users flee at checkout. But if we shorten the checkout process by cutting down the number of steps to only two and get rid of four excessive fields, we'll simplify the user journey, boost satisfaction, and may get up to 15% more completed orders".

A hypothesis is something you can test in an attempt to reach a certain goal. Testing isn't obligatory in this scenario, of course, but the idea may be tested if you weigh the pros and cons and decide that the required effort is worth a try. We'll explain how to create hypothesis statements next.


How to Generate a Hypothesis for a Product

The last thing a product team wants is to invest time and effort into something that brings no visible results, falls short of customer expectations, or doesn't meet real needs. To increase the chances of a successful outcome and product-led growth, teams may need to revisit their product development approach and optimize one of its starting points: learning to make reasonable product hypotheses.

If the procedure is structured, it can support you during stages like the discovery phase and raise the odds of reaching your product goals. So what does the process look like?

How hypothesis generation and validation works

  • It all starts with identifying an existing problem . Is there a product area in decline, a visible trend, or a market gap? Are users often complaining about something in their feedback? Or is there something you want to change (say, to increase profit or engagement, optimize a process, expand to a new market, or reach your OKRs and KPIs faster)?
  • Teams then formulate a hypothesis . They put the statement into concise wording that describes what they expect to achieve. Importantly, it has to be relevant, actionable, backed by data, and free of generalizations.
  • Next, they test the hypothesis by running experiments to validate it (for instance, via A/B or multivariate testing, prototyping, or feedback collection).
  • Then, the results of the test are analyzed . Did one element or page version outperform the other? Depending on what you're testing, you can look into various product performance metrics (such as click rate, bounce rate, or the number of sign-ups) to assess whether your prediction was correct.
  • Finally, the team draws conclusions that lead to data-driven decisions: for example, making the corresponding changes or rolling back a step.
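The steps above can be sketched in code. Here is a minimal, illustrative Python record for a hypothesis; the class name, fields, and the checkout numbers are all made up for illustration, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One product hypothesis, captured with the pieces the steps above call for."""
    problem: str      # the observed problem or opportunity
    prediction: str   # the concise, testable statement
    metric: str       # what will measure success
    baseline: float   # current value of the metric
    target: float     # value that would support the hypothesis

    def evaluate(self, observed: float) -> str:
        """Compare an experiment's observed metric against the target."""
        if observed >= self.target:
            return "supported"
        if observed > self.baseline:
            return "inconclusive"  # moved, but short of the target
        return "not supported"

checkout = Hypothesis(
    problem="High cart abandonment at checkout",
    prediction="Cutting checkout to two steps will lift completed orders by 15%",
    metric="completed orders per 1,000 sessions",
    baseline=100.0,
    target=115.0,
)
print(checkout.evaluate(118.0))  # → supported
```

Writing the hypothesis down in this shape forces the team to state the metric and the target before the experiment runs, which is exactly what keeps the later analysis honest.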

How Else Can You Generate Product Hypotheses?

Such processes imply sharing ideas once a problem is spotted, digging deep into the facts, and studying the possible risks, goals, benefits, and outcomes. You can apply various MVP tools (like FigJam, Notion, or Miro) designed to simplify brainstorming sessions, systemize pitched suggestions, and keep everyone organized without losing any ideas.

Predictive product analysis can also be integrated into this process: leveraging data and insights to anticipate market trends and consumer preferences improves decision-making and product development strategy. It makes innovation more proactive and informed, ensuring products are not only relevant but resonate with the target audience, ultimately increasing their chances of success in the market.

Besides, you can settle on one of the many frameworks that facilitate decision-making , ideation, or feature prioritization . Such frameworks are best suited when you need to test your assumptions and structure the validation process. Here are a few common ones if you're looking for a systematic approach:

  • Business Model Canvas (establishes the foundation of the business model and helps answer vital questions about your value proposition, the right customer segment, and the ways to make revenue);
  • Lean Startup framework (uses a diagram-like format for capturing major processes and is handy for testing hypotheses about how much value a product brings, or assumptions about personas, the problem, growth, etc.);
  • Design Thinking Process (centers on iterative learning and an in-depth understanding of customer needs and pain points, which can be formulated into hypotheses followed by simple prototypes and tests).


How to Make a Hypothesis Statement for a Product

Once you've identified the addressable problem or opportunity and broken down the issue in focus, you need to formulate the hypotheses and associated tasks. The process works the same way if you want to show that a change will have no effect (a null hypothesis).

If you're unsure how to write a hypothesis statement, let's explore the essential steps that'll set you on the right track.

Making a Product Hypothesis Statement

Step 1: Identify the Variable Components

Product hypotheses differ from case to case, so begin by pinpointing the major variables, i.e., the cause and effect . You'll need to outline what you think should happen if a change or action is implemented.

Put simply, the "cause" is what you're planning to change, and the "effect" is what will indicate whether the change is bringing the expected results. Returning to the earlier example, the ineffective checkout process is the cause, while the increased percentage of completed orders is the metric that shows the effect.

Make sure to also note such vital points as:

  • what the problem and solution are;
  • what the benefits or expected impact of a successful outcome are;
  • which user group is affected;
  • what the risks are;
  • what kind of experiments can help test the hypothesis;
  • what can measure whether you were right or wrong.

Step 2: Ensure the Connection Is Specific and Logical

Mind that generic connections lacking specifics will get you nowhere. So if you're thinking about how to word a hypothesis statement, make sure the cause and effect include clear reasons and a logical dependency .

Think about the precise causal link showing why A affects B. In our checkout example, it could be: fewer steps and the removal of excessive fields will speed up the process, help avoid confusion, irritate users less, and lead to more completed orders. That's much more explicit than simply stating that the checkout needs to change to get more completed orders.

Step 3: Decide on the Data You'll Collect

Certainly, multiple things can be used to measure the effect. You need to choose the metrics and validation criteria that best show whether you're moving in the right direction.

If you want hypothesis statements that don't waste your time, avoid vagueness and be as specific as you can when selecting what will measure and assess the results of your hypothesis test. The criteria must be measurable and tied to the hypotheses . This can be a realistic percentage or number (say, you expect a 15% increase in completed orders, or half as many cart abandonments during the checkout phase).

Once again, if you're not realistic, you might end up misinterpreting the results. Remember that sometimes an increase as small as 2% can make a huge difference, so why make 50% the target if it isn't achievable in the first place?
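As a quick arithmetic sketch of a measurable criterion, here is how relative uplift against a target could be computed (the function name and the order counts are illustrative):

```python
def relative_uplift(before: float, after: float) -> float:
    """Percentage change of a metric between two periods."""
    return (after - before) / before * 100

# Completed orders before and after the checkout change (illustrative numbers)
uplift = relative_uplift(before=1200, after=1380)
print(f"{uplift:.1f}% uplift")  # → 15.0% uplift
print("target met" if uplift >= 15 else "target missed")
```

Expressing the target this way makes "success" a yes/no question rather than a matter of opinion.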

Step 4: Settle on the Sequence

It's quite common to end up with multiple product hypotheses. Some are more important than others, of course, and some will require more effort and input.

Therefore, just as with the features on your product development roadmap , prioritize your hypotheses according to their impact and importance. Then, group and order them, especially if the results of some hypotheses influence others on your list.

Product Hypothesis Examples

To demonstrate how to formulate your assumptions clearly, here are several more examples in addition to the hypothesis statement given above:

  • Adding a wishlist feature to the cart, with the option to send a gift hint to friends via email, will increase the likelihood of a sale and bring in additional sign-ups.
  • Placing a limited-time promo code banner on the home page will increase the number of sales in March.
  • Moving the call-to-action element up the landing page and changing the button text will double the click-through rate.
  • By highlighting a new way to use the product, we'll target a niche customer segment (i.e., single parents under 30) and acquire 5% more leads.


How to Validate Hypothesis Statements: The Process Explained

There are multiple options for validating hypothesis statements. To get meaningful results, you have to design the right experiment for your hypothesis, and you'll need participants who represent your target audience segments (otherwise, your results might not be accurate).

What can serve as the experiment? Experiments take many different forms, and you'll need to choose the one that best fits your hypothesis goals (and your available resources, of course). The same goes for how long the test should run: say, two months, or as little as two weeks. Here are several options to get you started.

Experiments for product hypothesis validation

Feedback and User Testing

Talking to users, potential customers, or members of your own online startup community is one way to test your hypotheses. You can use surveys, questionnaires, or more extensive interviews to validate hypothesis statements and find out what people think. This approach involves your existing or potential users and might require extra time, but it can bring many insights.

Conduct A/B or Multivariate Tests

One experiment you can run involves making more than one version of an element or page to see which option resonates with users more. For example, you can test a call-to-action block with different wording, or play around with colors, imagery, and other visuals.

To run such split experiments, you can use tools like VWO, which let you easily construct alternative designs and split what your users see (e.g., one half of the users sees version one, while the other half sees version two). You can track various metrics and use heatmaps, click maps, and screen recordings to learn more about user response and behavior. Mind, though, that the key to such tests is to get as many users as you can and give the tests time. Don't jump to conclusions too soon or with too few participants.
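Tools like VWO report statistical significance for you, but the underlying check can be sketched by hand. Below is an illustrative two-proportion z-test (a common textbook approach using the normal approximation, not any particular tool's implementation) on made-up signup counts:

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates
    (pooled two-proportion z-test, normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    cdf = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF via erf
    return 2 * (1 - cdf)

# Version A: 100 signups out of 1,000 visitors; Version B: 150 out of 1,000
p = two_proportion_p_value(100, 1000, 150, 1000)
print(f"p = {p:.4f}")
print("significant at 0.05" if p < 0.05 else "keep the test running")
```

The point of the sketch is the intuition: small samples produce a large standard error, so an apparently big difference can still fail the test, which is exactly why you shouldn't stop an A/B test early.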

Build Prototypes and Fake Doors

Demos and clickable prototypes can be a great way to save time and money on costly feature or product development, and a prototype also lets you refine the design. Beyond that, they can serve as experiments for validating hypotheses, collecting data, and gathering feedback.

For instance, if you have a new feature in mind and want to ensure there is interest, you can use an MVP type such as a fake door : make a short demo recording of the feature and place it on your landing page to track interest or count how many people sign up.

Usability Testing

Similarly, you can run experiments to observe how users interact with the feature, page, product, etc. Usually, such experiments are held on prototype testing platforms with a focus group representing your target visitors. By showing a prototype or early version of the design to users, you can view how people use the solution, where they face problems, or what they don't understand. This may be very helpful if you have hypotheses regarding redesigns and user experience improvements before you move on from prototype to MVP development.

You can even take it a few steps further and build a bare-bones version of the feature that people can really interact with, while you're the one behind the curtain making it happen. There are many MVP examples of companies applying Wizard of Oz or concierge MVPs to validate their hypotheses.

Or you can develop some functionality but release it to only a limited number of users. This is known as a feature flag ; it can produce very specific results but is effort-intensive.
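A minimal sketch of such a gated rollout, assuming a deterministic hash-based bucketing scheme (the flag name, user IDs, and function are illustrative, not a real feature-flag service's API):

```python
import hashlib

def flag_enabled(user_id: str, flag_name: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into 0-99: the same user always gets
    the same answer for the same flag, so the exposed group stays stable."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Expose the new checkout to roughly 10% of users
exposed = [u for u in (f"user-{i}" for i in range(1000))
           if flag_enabled(u, "two-step-checkout", 10)]
print(f"{len(exposed)} of 1000 users see the new checkout")
```

Hashing on the flag name plus the user ID (rather than the ID alone) keeps different experiments' exposure groups independent of each other.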


What Comes After Hypothesis Validation?

Analysis is what you move on to once you've run the experiment. This is the time to review the collected data, metrics, and feedback to validate (or invalidate) the hypothesis.

You have to evaluate the experiment's results to determine whether your product hypotheses were valid or not. For example, if you were testing two versions of an element design, color scheme, or copy, look into which one performed best.

It is crucial to be certain that you have enough data to draw conclusions, and that it's accurate and unbiased . If you don't, that may be a sign your experiment needs to run for additional time, be altered, or be repeated. You won't want to make a major decision based on uncertain or misleading results, right?

What happens after hypothesis validation

  • If the hypothesis was supported , proceed to making the corresponding changes (such as implementing a new feature, changing the design, or rephrasing your copy). Remember that your aim was to learn and iterate.
  • If your hypothesis was proven false , treat it as a valuable learning experience. The main goal is to learn from the results and adjust your processes accordingly. Dig deep to find out what went wrong, and look for patterns or factors that may have skewed the results. If all signs show that you were wrong, accept this outcome as fact and move on; it will help you formulate better product hypotheses next time. Don't be too judgemental, though: a failed experiment may only mean that you need to revise the current hypothesis, or create a new one based on these results, and run the process once more.

On another note, make sure to record your hypotheses and experiment results . Some companies use CRMs to jot down the key findings, while others use something as simple as Google Docs. Either way, this can be your single source of truth that can help you avoid running the same experiments or allow you to compare results over time.
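A single source of truth doesn't have to be fancy. Here is an illustrative sketch that appends each experiment as a JSON line to a plain file (the file path and field names are assumptions, not a standard schema):

```python
import json
from datetime import date

def log_experiment(path: str, record: dict) -> None:
    """Append one experiment record as a JSON line, building a minimal
    append-only log of past hypotheses and their results."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_experiment("experiments.jsonl", {
    "date": date.today().isoformat(),
    "hypothesis": "Two-step checkout lifts completed orders by 15%",
    "experiment": "A/B test, 2 weeks, 50/50 split",
    "result": "supported",
    "notes": "12.8% uplift observed; below target but meaningful",
})
```

An append-only format makes it easy to grep for past experiments before running a new one, which is precisely the duplicate-work problem the log is meant to solve.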


Final Thoughts on Product Hypotheses

The hypothesis-driven approach in product development is a great way to avoid uncalled-for risks and pricey mistakes. You can back up your assumptions with facts, observe your target audience's reactions, and be more certain that this move will deliver value.

However, this only makes sense if the validation of hypothesis statements is backed by relevant data that lets you determine whether the hypothesis is valid. That way, you can be certain you're developing and testing hypotheses to accelerate your product management and avoid decisions based on guesswork.

Certainly, a failed experiment may bring just as much knowledge as one that succeeds. Teams have to learn from their mistakes, build up their hypothesis generation and testing skills, and make improvements based on the results of their experiments. This is an ongoing process, of course, as no product can grow without iteration and improvement.

If you're planning to build a product, or are building one now, Upsilon can lend you a helping hand. Our team has years of experience providing product development services for growth-stage startups and building MVPs for early-stage businesses , so you can use our expertise to dodge many mistakes. Don't be shy to contact us to discuss your needs!



How to create product design hypotheses: a step-by-step guide

(or, how to take down a rampaging hippo in one move).

Ivan Schneiders

UX Collective

So, you've decided to get your product team running Lean. You might even have a degree in behavioural science and understand the scientific method as if you were suckled on Karl Popper's breast, but your team… not so much. Worse still, the minute you start talking about induction and deduction, or null and alternate hypotheses, everyone's eyes glaze over, and you realise that if this were an experiment you'd be rejecting the hypothesis that 'Lean will help me make better products' (and imagine trying to explain that you'd actually be accepting the null hypothesis). Let's not do that. I'm going to simply walk you through the what, why and how of hypothesis-driven design with a step-by-step guide.

(Cheat code: There’s a one-minute guide at the bottom of this page).

What is a product design hypothesis?

Well, the first thing to accept is that no matter how much research you do, your product is just a theoretical solution to a human need or want that you hope will result in business success. The hypothesis is your guess at why a particular solution will succeed. Once you succeed, and people are buying and using your product it’s no longer a hypothesis. It’s a fact.

Where do hypotheses come from?

I'm lazy, so when it comes to why we need good hypotheses, the answer is: crap in means crap out, because alchemy is not a thing. We won't learn anything useful if our hypotheses are not insightful and well-formed. A lot of people who take on the Lean approach seem to think they should just get in a room, 'ideate', then throw it out to customers and see what sticks. Do that and chances are we're going to waste a lot of time and money. Good hypotheses come from good observations . If you want to be more than a one-hit wonder and develop an ongoing process of product development and innovation, you need to find significant problems to solve. This is what observation is for (and that can include the observation based on experience that we often call intuition), and why research is essential. Once you understand the space you're working in, ideation can begin, and it can be systematic and rigorous.

Starting vision

Imagine, for example, that you have discovered that a lot of people suffer back pain as a result of sitting at an office desk, and you want to make a product that solves this issue. Once you have formed a good customer problem statement (turning your intuitions or observations into a rationale) and prioritised your project by mapping its risks , your observations and business goals need to be summed up in a clearly articulated vision of the future. Once you know the general direction you're going, you can move into solving problems.

It’s okay to use your imagination

Don’t let being scientific mean that your team turns into a group of soulless empiricists who may have learned how to apply a Vulcan nerve pinch but couldn’t design their way out of a paper bag.

Imagination and intuition are essential and have a very important role to play. Use them to diverge, and create as many possible solution ideas as possible. These are your hypotheses.

The actual Step-by-Step Guide starts here…

Step 1: imagine the change you want, and write it down.

What will the world be like for users of your product or service once they have it? This is your outcome: your grand design, your vision of a future in which your product or service is a huge success and people's lives are transformed. For example:

People who use our product no longer suffer back pain. They regard the product as a necessity and a delight. In fact, significant numbers of customers order more than one and frequently request the expansion of our product range. They love the product so much they basically sell them for us, and in the first year of release, we have sold 300,000 units and have over 30 distributors nationally.

This is your outcome statement. It provides a point of focus moving forward and should direct your investigation in general as well as directing your choice of success metrics for each and every experiment. A well-formed vision will cover the product’s desirability, viability and feasibility.

Step 2: Why the status quo is the status quo

Now that you know what the world will be like after you succeed you need to ask, ‘What’s preventing the outcome being achieved?’ That is, ‘why isn’t it already how you want it?’

If these causes were removed your outcome would already be a reality. These are the root of your hypotheses. There should be a few of these and they are often multi-layered. There are two parts to this, one is causes and the other is blockers. A cause might be that muscles seize when not moved, a blocker however can be behavioural or situational, like, ‘I’m too lazy to do regular exercise’ ;) This is important because we have to work on things we can affect. The problem we are solving might be that the poorest people in our community have no savings, one cause is low income, and that could easily reduce the ideation to increasing income. However, it turns out that often this segment spends a meaningful amount on lottery tickets. By going beyond simple causes and looking at blockers we will broaden our potential for solutions. And remember,

Behavior is the medium of design — Robert Fabricant

So, back to our back pain problem:

  • People have bad posture when they sit at a computer, which causes strain on spinal discs, muscles and ligaments
    a) They sit too close to the screen
    b) They sit too far from the screen
    c) They sit at the wrong height relative to the screen
  • People sit for too long, causing ligaments and muscles to tighten
    a) They don't move their muscles enough
      i. They lose track of time
      ii. They don't have a clock nearby
      iii. They're too focussed to check the time
      iv. They are too busy to get up and walk around
      v. They aren't sufficiently motivated to move until it's too late
  • People don't use ergonomic chairs because they are expensive and think they are ugly

You get the idea. Write as many as you can think of; obviously it's preferable if these are based on actual research, but that's not essential.

Step 3: Dream like a scientist, ideate like a lunatic

Using your imagination effectively doesn't mean smoking weed and waiting for the muse to magically confer upon you the perfect solution. Conversely, you don't have to try to be a genius; experimentation will do that work for us and make us look like Einsteins.

Consider each of the obstacles listed above one at a time. What new ways of doing things might we try in order to remove these obstacles? What are the alternatives to the way things are currently done?

Let's take cause number 2. People sit for too long, causing ligaments and muscles to tighten
  a) They don't move their muscles enough
  b) They aren't sufficiently motivated to move until it's too late
  c) They forget to move regularly
  … as many as you can find

and ideate some solutions…

  • Make the chair remind them to move. After sitting in the seat for 30 minutes make the chair vibrate enough to irritate them until they get out of the seat
  • Make the chair massage the muscles that cause back pain. Add massage pads to the chair that activate on an appropriate schedule and provide the muscle activation equivalent to getting out of the chair

It's very easy to start ideating when the problems you're solving for are specific and granular enough to engage with directly. Broad and abstract goals are very hard to ideate on because the parameters are too vague. For me, it's like being confronted with a blank canvas: it's all a bit overwhelming, and it's very easy to freeze up creatively. More importantly, you're now a hair's breadth from having a rock-solid hypothesis to test.

Step 4: Writing hypotheses

I'm sure all your ideas are amazing, just like mine ;). However, it is statistically possible that some are more or less amazing than others. So at this stage, let's agree that it's still hypothetical whether or not our ideas will successfully remove the obstacles to our desired outcome. Which brings us to the next step: writing hypotheses.

Take all your ideas and turn them into testable hypotheses. Do this by rewriting each idea as a prediction that claims the causes proposed in Step 2 will be overcome, and furthermore that a change will occur to the metrics you outlined in Step 1 (your outcome).

For example: Massage pads built into a chair that trigger on a schedule and provide the muscle activation equivalent to getting out of the chair will prevent muscle spasm that causes back pain.

which would be translated as I believe that by adding massage pads to an office chair which provide the muscle activation equivalent to getting out of a chair every 30 minutes we will reduce back pain among office workers because the user’s muscles will be sufficiently activated.

(technically this is a prediction. It’ll help you design your test. Note also that it is specific, ‘every 30 minutes’, as frequency is obviously a variable and you don’t want to give up because it turns out the core idea is good but the frequency needed was actually 29 minutes).

The structure of your hypothesis

I BELIEVE THAT <my feature/product/solution>

WILL <direction of change> <thing that will change>

FOR <target user>

BECAUSE <reason for change>

Technically, the first two lines are a prediction and only the last part (reason for change) is the underlying hypothesis, but it's practical for us to combine them. In product development, it's fair to say you could leave out the <reason for change>, because you only want to know that it worked and may not care how or why. However, if you approach it in this way and your first experiment doesn't succeed, you won't know why it didn't work and will be far less effective in getting to the product that succeeds. Including the reason also improves your experiment design, because we need to isolate the variables quite explicitly.

Our high level product design hypothesis might read,

I believe that office chairs with massage pads on the lumbar and thoracic spine that activate every 30 minutes will significantly reduce back pain for people whose injury is caused by excessive immobility because it will provide sufficient oxygen and nutrients (blood flow) to the muscles most vulnerable to injury.

Great work, we’ve got our first testable hypothesis. Do this for the rest of the ideas and when you’re done, don’t try to prove these to be true. Do the opposite.
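If it helps to make the template mechanical, here is a small, illustrative helper that fills in the I-believe-that structure (the function name and its fields are my own, not part of the original framework):

```python
def hypothesis_statement(solution: str, change: str, target_user: str, reason: str) -> str:
    """Fill in the I-believe-that template from the section above."""
    return (f"I believe that {solution} will {change} "
            f"for {target_user} because {reason}.")

print(hypothesis_statement(
    solution="office chairs with massage pads that activate every 30 minutes",
    change="significantly reduce back pain",
    target_user="people whose injury is caused by excessive immobility",
    reason="it will provide sufficient blood flow to the most vulnerable muscles",
))
```

Forcing every idea through the same four slots is a cheap way to spot which ones are missing a target user or a testable reason.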

Step 5: Testing the right thing and testing the thing right

By now you’re probably a bit excited about my massage chair, I know I am. I can already see towering office buildings full of people moaning with pleasure as my massaging office chair edifies their working life. I know my experiment is going to prove me right. This is the best chair ever!

But you're probably going to need to convince a bunch of stakeholders that their opinions are wrong and you are right. This approach is key to transforming the mindset of everyone in your organisation into one that understands the difference between opinion and justified belief. The HIPPO (Highest Paid Person's Opinion) won't know what hit him or her.

I’m not sure if I trust myself to design a fair test. Experimentation is not only about finding out if people agree with me, even if they’re customers, it’s also about understanding why something works. That way it becomes repeatable, it becomes valuable knowledge. Knowledge helps us avoid bad decisions, like investing millions of dollars in a chair that promises to fix back pain and make people more relaxed only to discover that it doesn’t actually fix back pain any better than what is currently being used. So we really do need to be rigorous in our testing, and one of the most important steps on that road is to make sure you know what you think you’re changing, and this means disproving the current belief or the status quo. To do that you need to know what the status quo is and make sure it’s measurable.

Our hypothesis is wonderful and promises to be a great alternative to the status quo of back pain, stress and general unhappiness. Let’s prove we can do better than whatever’s happening now. If you’re so confident your hypothesis is true then this shouldn’t be a problem at all.

Rewrite the hypothesis as follows: Activating the muscles and ligaments with massage pads in an office chair will not decrease the prevalence of back pain in the target group.

or if there is a competitor that people believe works your null hypothesis might read,

Activating the muscles and ligaments with massage pads in an office chair will not decrease the prevalence of back pain in the target group more than the current method of informing people they need to stand up every 30 minutes.

The reason is that this directs the design of the experiment to focus on whether or not the solution has a measurable impact. It means you'll need a measurement of the status quo, and this is the metric you are hoping will change when you test your new chair. If it doesn't change, you need to go back, alter your original hypothesis, and try again. It's as simple as that. If you can't measure the status quo, your hypothesis is technically invalid because it's not testable. If this is the case, and the impact of a bad investment is large, you will need to go back to either step 3 or 4 and write a new hypothesis.
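One simple, rigorous way to let the data try to reject a null like this is a permutation test: shuffle the group labels many times and see how often chance alone produces a reduction as large as the one observed. The sketch below uses made-up pain scores for the two chairs; it illustrates the method, not a real study:

```python
import random

def permutation_p_value(control, treatment, n_perms=10_000, seed=0):
    """One-sided permutation test: how often does a random relabelling of the
    pooled scores produce a reduction at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = sum(control) / len(control) - sum(treatment) / len(treatment)
    pooled = list(control) + list(treatment)
    hits = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)
        perm_c = pooled[:len(control)]
        perm_t = pooled[len(control):]
        diff = sum(perm_c) / len(perm_c) - sum(perm_t) / len(perm_t)
        if diff >= observed:
            hits += 1
    return hits / n_perms

# Self-reported pain scores (0-10 scale), illustrative data only
standard_chair = [7, 6, 8, 7, 5, 6, 7, 8]
massage_chair  = [4, 3, 5, 4, 2, 4, 3, 5]
p = permutation_p_value(standard_chair, massage_chair)
print("reject the null" if p < 0.05 else "cannot reject the null")
```

A small p-value means the observed reduction is unlikely under the null, which is exactly the "measurable impact" the rewritten hypothesis demands.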

Warning: Avoid the trap of the interesting but not impactful

As product innovators, we usually just care that it works, not necessarily that it's more effective than our competitors', so we usually test against not having our product at all. More importantly, though, don't get trapped into trying to discover things that are interesting but not impactful on the design decisions themselves, wasting time chasing incremental improvements in accuracy.

The role of all research (including experimentation) is to reduce the risk of bad decisions. Remember, speed-to-market is a critical factor in product success. If we waste time answering irrelevant questions or trying to get perfect answers, we are likely to increase the risk to our product's success by being too slow.

Amazing work, we’re ready to start designing our experiments.

Damn, that looks like a lot of experiments

Remember when I told you I was lazy? It’s still true. We don’t have to test every single idea, and considering resources have limits for most of us, it’s important to reduce the risk of failure by prioritising. I created Uncertainty Mapping to prioritise my experiments.

The 1-minute guide

  • OUTCOME: Describe the ideal end state. What is the perfect world you’re aiming to create by bringing your product or service into existence? How will you know (what will you literally see) when this is achieved?
  • OBSTACLES: What are the things that research shows, or you believe, are preventing the outcome being achieved? That is, why isn’t the world already how you want it?
  • ALTERNATIVES: What new ways of doing things might we try in order to overcome these obstacles?
  • HYPOTHESES: For each alternative under #3, make a prediction that claims a change will occur to the metrics you outlined in #1 (your outcome).
  • NULL HYPOTHESIS: Write the counter-argument to your first hypothesis. If your hypothesis claimed that something will happen, replace the word ‘will’ with ‘won’t’. Design your experiment to disprove this statement.
  • PRIORITISE: Apply the uncertainty mapping tool to your hypotheses to prioritise them.

7. EXPERIMENT DESIGN: Let’s talk about this another time, I’m sure I’ve pushed my luck getting you to read this far (cheatcode users excepted).

Thanks for reading, hope it’s useful.

(after several years and thousands of reads I’ve made some small edits to fix some grammar and hopefully improve clarity here and there. Thanks to all the readers, clappers and followers, it really means a lot)

Written by Ivan Schneiders

Product design, behavioural science and other things


Product hypothesis: a guide to creating meaningful hypotheses

13 December, 2023

Tope Longe

Growth Manager

Data-driven development is no different from a scientific experiment. You repeatedly form hypotheses, test them, and either implement or reject them based on the results. It’s a proven system that leads to better apps and happier users.

Let’s get started.

What is a product hypothesis?

A product hypothesis is an educated guess about how a change to a product will impact important metrics like revenue or user engagement. It's a testable statement that needs to be validated to determine its accuracy.

The most common format for product hypotheses is “If… then…”:

“If we increase the font size on our homepage, then more customers will convert.”

“If we reduce form fields from 5 to 3, then more users will complete the signup process.”

At UXCam, we believe in a data-driven approach to developing product features. Hypotheses provide an effective way to structure development and measure results so you can make informed decisions about how your product evolves over time.

Take PlaceMakers, for example.


PlaceMakers faced challenges with their app during the COVID-19 pandemic. Due to supply chain shortages, stock levels were not being updated in real-time, causing customers to add unavailable products to their baskets. The team added a “Constrained Product” label, but this caused sales to plummet.

The team then turned to UXCam’s session replays and heatmaps to investigate, and hypothesized that their messaging for constrained products was too strong. The team redesigned the messaging with a more positive approach, and sales didn’t just recover—they doubled.

Types of product hypothesis

1. Counter-hypothesis

A counter-hypothesis is an alternative proposition that challenges the initial hypothesis. It’s used to test the robustness of the original hypothesis and make sure that the product development process considers all possible scenarios. 

For instance, if the original hypothesis is “Reducing the sign-up steps from 3 to 1 will increase sign-ups by 25% for new visitors after 1,000 visits to the sign-up page,” a counter-hypothesis could be “Reducing the sign-up steps will not significantly affect the sign-up rate.”

2. Alternative hypothesis

An alternative hypothesis predicts an effect in the population. It’s the opposite of the null hypothesis, which states there’s no effect. 

For example, if the null hypothesis is “improving the page load speed on our mobile app will not affect the number of sign-ups,” the alternative hypothesis could be “improving the page load speed on our mobile app will increase the number of sign-ups by 15%.”

3. Second-order hypothesis

Second-order hypotheses are derived from the initial hypothesis and provide more specific predictions. 

For instance, if the initial hypothesis is “Improving the page load speed on our mobile app will increase the number of sign-ups,” a second-order hypothesis could be “Improving the page load speed on our mobile app will increase the number of sign-ups by 15% among first-time users.”

Why is a product hypothesis important?

Guided product development

A product hypothesis serves as a guiding light in the product development process. In the case of PlaceMakers, the product owner’s hypothesis that users would benefit from knowing the availability of items upfront before adding them to the basket helped their team focus on the most critical aspects of the product. It ensured that their efforts were directed towards features and improvements that have the potential to deliver the most value. 

Improved efficiency

Product hypotheses enable teams to solve problems more efficiently and remove biases from the solutions they put forward. By testing the hypothesis, PlaceMakers aimed to improve efficiency by addressing the issue of stock levels not being updated in real-time and customers adding unavailable products to their baskets.

Risk mitigation

By validating assumptions before building the product, teams can significantly reduce the risk of failure. This is particularly important in today’s fast-paced, highly competitive business environment, where the cost of failure can be high.

Validating assumptions through the hypothesis helped mitigate the risk of failure for PlaceMakers, as they were able to identify and solve the issue within a three-day period.

Data-driven decision-making

Product hypotheses are a key element of data-driven product development and decision-making. They provide a solid foundation for making informed, data-driven decisions, which can lead to more effective and successful product development strategies. 

The use of UXCam's Session Replay and Heatmaps features provided valuable data for data-driven decision-making, allowing PlaceMakers to quickly identify the problem and revise their messaging approach, leading to a doubling of sales.

How to create a great product hypothesis

Map important user flows

Identify any bottlenecks

Look for interesting behavior patterns

Turn patterns into hypotheses

Step 1 - Map important user flows

A good product hypothesis starts with an understanding of how users move around your product—what paths they take, what features they use, how often they return, etc. Before you can begin hypothesizing, it’s important to map out key user flows and journey maps that will help inform your hypothesis.

To do that, you’ll need to use a monitoring tool like UXCam.

UXCam integrates with your app through a lightweight SDK and automatically tracks every user interaction using tagless autocapture. That leads to tons of data on user behavior that you can use to form hypotheses.

At this stage, there are two specific visualizations that are especially helpful:

Funnels: Funnels are great for identifying drop-off points and understanding which steps in a process, transition or journey lead to success.

In other words, you’re using these two tools to define key in-app flows and to measure the effectiveness of these flows (in that order).


Average time to conversion in highlights bar.

Step 2 - Identify any bottlenecks

Once you’ve set up monitoring and have started collecting data, you’ll start looking for bottlenecks—points along a key app flow that are tripping users up. At every stage in a funnel there are going to be drop-offs, but too many drop-offs can be a sign of a problem.

UXCam makes it easy to spot drop-offs by displaying them visually in every funnel. While there’s no benchmark for when you should be concerned, anything above a 10% drop-off could mean that further investigation is needed.
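Flagging steps against that 10% rule of thumb is simple arithmetic once you have the step counts from a funnel report. A quick sketch with an entirely hypothetical funnel:

```python
# Hypothetical step counts exported from a funnel report.
funnel = [
    ("Open app", 10_000),
    ("View product", 7_800),
    ("Add to cart", 3_900),
    ("Checkout", 3_600),
    ("Purchase", 2_100),
]

# Flag transitions whose drop-off exceeds the 10% rule of thumb.
for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_users / users
    flag = "  <- investigate" if drop_off > 0.10 else ""
    print(f"{step} -> {next_step}: {drop_off:.0%} drop-off{flag}")
```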

How do you investigate? By zooming in.

Step 3 - Look for interesting behavior patterns

At this stage, you’ve noticed a concerning trend and are zooming in on individual user experiences to humanize the trend and add important context.

The best way to do this is with session replay tools and event analytics. With a tool like UXCam, you can segment app data to isolate sessions that fit the trend. You can then investigate real user sessions by watching videos of their experience or by looking into their event logs. This helps you see exactly what caused the behavior you’re investigating.

For example, let’s say you notice that 20% of users who add an item to their cart leave the app about 5 minutes later. You can use session replay to look for the behavioral patterns that lead up to users leaving—such as how long they linger on a certain page or if they get stuck in the checkout process.

Step 4 - Turn patterns into hypotheses

Once you’ve checked out a number of user sessions, you can start to craft a product hypothesis.

This usually takes the form of an “If… then…” statement, like:

“If we optimize the checkout process for mobile users, then more customers will complete their purchase.”

These hypotheses can be tested using A/B testing and other user research tools to help you understand if your changes are having an impact on user behavior.

The product hypothesis approach emphasizes formulating clear and testable statements when developing a product. A well-defined hypothesis can guide the product development process, align stakeholders, and minimize uncertainty.

UXCam arms product teams with all the tools they need to form meaningful hypotheses that drive development in a positive direction. Put your app’s data to work and start optimizing today— sign up for a free account .


How to Generate and Test Hypotheses in Product Development


In today’s market, product-based companies are in great need of strong and experienced product managers. Unfortunately, quality education in this area is lacking, causing many professionals to step into the role and make mistakes. Without reliable tools for building and developing products, specialists often rely on their intuition and assumptions rather than real data. That is why we want to tell you about the hypotheses underlying product development.

What is a product hypothesis?

How does a product hypothesis differ from an idea? A product hypothesis is a statement that proposes a connection between two or more variables and, crucially, is testable. When creating a product, we generate hypotheses to validate assumptions about customer behavior, market needs, or the potential impact of product changes. These hypotheses aid us in identifying product-market fit and enhancing the user experience, and also:

  • Decrease potential risks and uncertainties
  • Streamline decision-making by reducing biases and guesswork
  • Emphasize the principle of continuous learning and development, which is highly valued

Learning to construct and test hypotheses is crucial if you value data-driven development. Hypothesis testing is a primary method for collecting data and enables unbiased decision-making about product development by the product team.

Examples of Hypotheses

Crafting hypotheses becomes intuitive once you discern them from mere opinions or ideas. Their distinction lies in their testability and clarity about expected outcomes.

For instance, consider the statement, “We should optimize our Jira app’s dashboard loading time.” This is merely an idea because it has one variable (optimizing loading time) and lacks clarity on the expected outcome. However, with a slight tweak, it becomes a testable hypothesis.

“By reducing the dashboard loading time of our Jira app by 5% (variable 1), user engagement will increase by 15% (variable 2).” Now, if, upon implementation, user engagement rises by 15%, the hypothesis is validated. If not, it’s disproven.

Here are some more hypotheses tailored to a Jira app development scenario:

  • “Introducing a ‘project status’ widget to our Jira cloud app will lead to a 10% increase in monthly active users.”
  • “By providing in-app video tutorials, we’ll see a 20% uptick in premium feature subscriptions.”
  • “Releasing a developer interview about our latest Jira integration on our blog will drive an additional 1000 visits, of which 50 will result in app installations.”

It’s essential to remember that every hypothesis doesn’t necessarily warrant testing. Sometimes, the mere act of formulating hypotheses can sharpen your analytical skills. It’s vital to weigh the benefits of testing a hypothesis against the resources it would consume.

For instance, the developer interview hypothesis might not be worth pursuing if the anticipated 50 app installations don’t cover the time and resources spent on the interview.
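That trade-off is just expected value versus cost. A toy sketch of the developer-interview example, where every figure is an assumption made up for illustration:

```python
# All figures are assumptions for the sake of the example.
expected_installs = 50
value_per_install = 4.0   # assumed average revenue per install, USD
hours_spent = 12          # assumed time to produce and publish the interview
hourly_cost = 60.0        # assumed blended team rate, USD

expected_benefit = expected_installs * value_per_install
cost = hours_spent * hourly_cost

print(f"Benefit ${expected_benefit:.0f} vs cost ${cost:.0f}")
if expected_benefit < cost:
    print("Skip the test: the expected benefit does not cover the cost.")
```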

Utilize hypotheses to prioritize development tasks based on the following:

  • Quality of impact.
  • Magnitude of impact.
  • Likelihood of achieving the impact.

By doing so, your team can focus on actions that promise the highest return rather than getting swayed by the novelty or popularity of an idea.

When to Create Hypotheses

Hypotheses form the bedrock of data-driven decision-making, especially in software development and improvement. For product developers, understanding when to create hypotheses can be pivotal in ensuring product success and user satisfaction. Here are the ideal moments:

  • Product Ideation & Features Addition:  When brainstorming new features or contemplating adding functionalities, hypotheses can help prioritize which features will likely have the most significant positive impact on user experience or drive desired KPIs.
  • User Feedback Analysis:  If users consistently raise certain issues or request specific enhancements, it’s time to form hypotheses about potential solutions and their impacts. For instance, “If we introduce a drag-and-drop task manager in our Jira app, will user task completion rates improve?”
  • Performance Optimization:  Whenever there’s a perceived lag or glitch in your product, hypothesize the potential fixes and their effects on user engagement or retention.
  • Expansion into New Markets:  If you’re considering offering your product in a new geographical region or to a different user segment, hypotheses can help predict user behavior and adoption rates in those markets.
  • Marketing and Outreach:  Formulate hypotheses about their expected outcomes before launching marketing campaigns or partnership initiatives. For instance, “Partnering with Atlassian Marketplace influencers will lead to a 20% increase in our Jira app downloads.”
  • Post-Release Analysis:  Monitor user behavior and feedback after launching a new version or feature. If things aren’t proceeding as expected, it’s time to hypothesize why and what can be done to course-correct.
  • Resource Allocation:  When you have limited resources—be it time, human resources, or budget—and need to decide where to invest, hypotheses can guide decisions based on anticipated ROI.
  • UX/UI Redesign:  If considering a major design overhaul, hypotheses about user navigation patterns, engagement hotspots, and potential friction points can be invaluable.

Remember, hypotheses aren’t just about anticipating the results of changes. They are also a tool for proactive problem-solving, guiding research, and ensuring that the development team remains aligned with user needs and company goals. So, every time you’re at a decision crossroads or when intuition alone doesn’t seem sufficient, lean on the power of hypotheses.

Creating Hypotheses for the Products

Hypotheses have a standard structure, requiring at least two variables and a connecting factor.

Step 1: Define Variables

Identify independent (cause) and dependent (effect) variables. For instance, introducing a “feature of adding email templates” is an independent variable, while expecting an “increase in the API use for sending emails” is a dependent one.

Independent variables might be product updates, such as revising landing page text or adding filters on a search panel. Dependent variables are typically measurable metrics: trials, subscriptions, monthly active users, etc.
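One lightweight way to keep the two kinds of variables explicit is a small structured record; the field names and example values below are illustrative, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    independent: str      # the cause: the change we make
    dependent: str        # the effect: the metric we expect to move
    expected_change: str  # how much, and within what period

h = Hypothesis(
    independent="add an email-templates feature",
    dependent="API usage for sending emails",
    expected_change="increase by 10% within 30 days",
)

# Render it in the standard "If... then..." form.
print(f"If we {h.independent}, then {h.dependent} will {h.expected_change}.")
```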

Avoid ambiguous terms in your hypotheses. Instead of saying, “Users churn because setting up is hard,” be specific: “Providing clear setup steps will reduce user churn.”

Remember, when product managers prioritize user needs and value, it simplifies sales and monetization. Formulating hypotheses focusing on enhancing a product’s feature usage is beneficial. When users value a product, improving user experience becomes the primary focus over boosting profits.

Step 2: Linking Variables

The relationship between variables should be clear and logical. If it’s not, regardless of how well-articulated your variables sound, your test results will not be reliable.

Here are some common pitfalls to avoid when defining relationships between two or more variables:

Weak Relationship:  It may seem logical that increased website traffic will result in more registrations, but this is not necessarily true. Website visitors may not be sufficiently motivated to use your product, and registrations typically require greater commitment. A more effective hypothesis would focus on modifying your pricing page’s call-to-action (CTA), which is likely to have a more direct and impactful relationship with increasing registrations.

Made-up Relationship:  It is common to encounter issues when one of the variables depends on a metric that is not indicative. For example, the assumption that “Increasing social media views will grow our Jira app’s user base” may be erroneous. There is no clear reason a social media user would be inclined towards your product. They could be more attracted to your content than your actual product.

Interdependent Variables:  It’s important to keep variables separate from one another. For example, if you remove the “Sign up with Google” option, you’ll likely see fewer users with Google Workspace accounts because the two are directly connected. This is especially important in product development, where accurately defining these relationships is crucial. By ensuring your hypotheses are based on strong connections between variables, you’ll be able to make informed decisions and achieve desired outcomes.

Step 3: Determining Verification Criteria for Hypotheses

Determining the verification criteria is pivotal in validating hypotheses, especially in Jira app development. This step establishes the specific metrics or outcomes you’ll use to evaluate if a hypothesis holds true.

Steps to Determine Verification Criteria

  • Define Measurable Outcomes:  Ensure your outcomes are tangible and can be tracked. Avoid ambiguous criteria like “improve user satisfaction.” Instead, opt for “Increase the average session duration by 2 minutes.”
  • Set Benchmarks:  Before testing your hypothesis, understand your current metrics. For instance, if you currently have a 5% click-through rate (CTR) on a particular product feature, and your hypothesis expects to increase it, you need this baseline for comparison.
  • Specify Time Frame:  All hypotheses should be tested within a specific period. “We expect a 10% increase in new Jira app installations in the next 30 days” is a clear timeframe.

Examples in Jira App Development:

📌 Hypothesis:   “By simplifying the onboarding process in our app, we will increase user activation by 15%.”

  • Verification Criteria:  Track the number of users who complete the onboarding process and compare the rate before and after the changes over a 60-day period.

📌 Hypothesis:   “Introducing a dark mode in our Jira app will reduce the app uninstall rate by 5%.”

  • Verification Criteria:  Monitor and compare uninstall rates for a month before and after introducing the dark mode feature.

📌 Hypothesis:   “Highlighting our app’s integration features on the Jira marketplace will boost our demo requests by 20%.”

  • Verification Criteria:  Measure the number of demo requests received in the 30 days following the changes against the prior 30 days.

Remember, the clearer your verification criteria, the more actionable insights you’ll gather. 
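With criteria that concrete, verification reduces to comparing the observed metric against the baseline and target within the time frame. A minimal sketch using the onboarding example, with all numbers assumed for illustration:

```python
# Hypothetical verification for: "Simplifying the onboarding process
# will increase user activation by 15%" over a 60-day window.
baseline_activation = 0.40   # completion rate before the change
observed_activation = 0.47   # completion rate 60 days after the change
target_lift = 0.15           # relative lift the hypothesis predicts

lift = (observed_activation - baseline_activation) / baseline_activation
validated = lift >= target_lift
print(f"Observed lift: {lift:.1%}; hypothesis {'validated' if validated else 'not validated'}")
```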

Prioritizing Hypotheses in Product Development

Prioritizing hypotheses is crucial in ensuring your development efforts yield the most significant returns. The sheer volume of ideas and potential improvements can be overwhelming when developing apps for Jira. By ranking these hypotheses effectively, you can allocate resources more efficiently and achieve your goals faster. Here’s how you can prioritize your hypotheses:

1. Score Each Hypothesis:

Once you have verification criteria, assign a score (e.g., on a scale of 1 to 10) for each hypothesis based on these criteria. The higher the score, the more priority the hypothesis should get.

2. Rank and Prioritize:

With scores in hand, rank the hypotheses. Those with the highest aggregate scores across all criteria should be at the top of your list.

3. Review Regularly:

The product development landscape is dynamic. As user feedback comes in or business goals shift, revisit and re-prioritize your hypotheses accordingly.

For example:

📌 Hypothesis: “Adding a dark mode to our Jira app will increase nighttime usage by 20%.”

  • Potential Impact: 8 (Many users have requested this feature.)
  • Feasibility: 6 (Requires some redesign but manageable.)
  • Resource Requirement: 5 (Needs designer and developer time.)
  • Risk: 2 (Few users might not like the new design.)
  • Total Score: 21

📌 Hypothesis: “Integrating a voice-command feature will boost productivity by 30%.”

  • Potential Impact: 9 (Could be a game-changer for hands-free task management.)
  • Feasibility: 3 (Voice technology is still nascent and might have bugs.)
  • Resource Requirement: 7 (Needs significant investment in new tech and training.)
  • Risk: 5 (Could frustrate users if not implemented perfectly.)
  • Total Score: 24

📌 Hypothesis: “Improving the onboarding tutorial will reduce drop-offs by 15%.”

  • Potential Impact: 7 (Better onboarding can retain more users.)
  • Feasibility: 8 (We have clear feedback on improvements.)
  • Resource Requirement: 4 (Requires updating existing content.)
  • Risk: 1 (Low risk; it’s just improving existing content.)
  • Total Score: 20

In this example, despite its challenges, the voice-command feature scores highest due to its game-changing potential. In practice, though, teams might tackle the onboarding tutorial first, as it’s more feasible and carries fewer risks. The key is to balance impact and feasibility while always keeping the user’s best interests at heart.
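The scoring and ranking above is easy to automate. A sketch using the example’s scores (note that this scheme simply sums the four criteria as listed, so a higher risk score raises the total rather than lowering it; adjust the weighting to suit your team):

```python
hypotheses = {
    "Dark mode":      {"impact": 8, "feasibility": 6, "resources": 5, "risk": 2},
    "Voice commands": {"impact": 9, "feasibility": 3, "resources": 7, "risk": 5},
    "Onboarding":     {"impact": 7, "feasibility": 8, "resources": 4, "risk": 1},
}

# Rank by aggregate score, highest first, as in the worked example.
ranked = sorted(hypotheses.items(), key=lambda kv: sum(kv[1].values()), reverse=True)
for name, scores in ranked:
    print(f"{name}: total {sum(scores.values())}")
```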

Hypothesis Testing Process

Hypothesis testing can be an intriguing process, especially when determining the best methodology for each test. For example, the Jira software environment, known for tracking and managing software development projects, presents unique opportunities for hypothesis testing, particularly in app development. Let’s delve into some methods and creative examples:

1. A/B Testing

A/B testing involves creating two or more versions of a webpage, feature, or functionality and collecting data on user reactions.

Example:  Suppose you’re developing a Jira app with a built-in search feature on the dashboard. You hypothesize that positioning the search bar at the top right will increase user interaction compared to having it at the bottom left. You’d launch an A/B test to verify, exposing users to both designs. By tracking which group engages more with the search function, you can determine the optimal placement.

2. Prototyping

Creating a prototype is a cost-effective way to gather feedback. It’s flexible enough to prototype the entire product or a specific feature.

Example:  Consider introducing a new visualization tool in your Jira app. Instead of directly coding it, you can draft its design in tools like Figma.

3. User Interviews (CustDev)

Engaging in one-on-one interviews reveals underlying motivations, pain points, and desires. While more effort-intensive, the insights gained can be profound.

Example:  Picture a scenario where you’ve launched a Jira app extension aimed at project managers. By organizing 30-minute to 1-hour face-to-face or online interviews with real project managers, you can uncover nuances of their challenges, preferences, and desires. Feedback from these interactions could reshape the app’s roadmap, ensuring it becomes indispensable in a project manager’s toolkit.

To conclude, product hypothesis testing can benefit immensely from a blend of quantitative (like A/B testing) and qualitative (like interviews) methods. It’s about making changes and ensuring they resonate with the users. And in the agile world of Jira, such adaptability is crucial.

Documenting Hypotheses & Results

Every developer needs a systematic approach to hypothesis testing. In product development, consistency is key. Here’s how to streamline the process:

  • Centralized Recording: Utilize tools like Coda or Google Sheets to document your hypotheses, plans, and results. This creates a single reference point for your development team and stakeholders, ensuring everyone is aligned.
  • Reviewing & Decision-making: After testing, delve into the data. Examine key metrics to decide the next steps. Depending on the clarity, you may need further tests or can proceed with the proposed changes.

Best Practices for Testing in Jira Apps Development:

  • Precision: Ensure data accuracy. Whether you’re analyzing user behavior or gathering feedback, precision matters.
  • Experiment Volume: Don’t be afraid to run multiple tests, but be wary of confirmation bias. Accept the data and adapt accordingly.
  • Target Audience: Define who you’re testing for. Specificity ensures more reliable results.
  • Avoid Bias: Stay neutral. If you’re assessing user interactions, for instance, include comprehensive data sets.
  • Learning from Failures: Not all hypotheses will prove correct. However, every test can bring insights, like refining user experience aspects.
  • Prioritize Tests: Not everything needs a hypothesis test. If something is evidently off in your Jira app, address it immediately.

Integrating these practices into the product development process ensures a more streamlined and data-driven approach, leading to better products and user experiences.
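For teams that prefer code over spreadsheets, the centralized record can be as simple as a typed log whose fields mirror the spreadsheet columns; the field names and values here are illustrative:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class HypothesisRecord:
    statement: str        # the "If... then..." claim under test
    metric: str           # what we measure
    baseline: float       # value before the change
    target: float         # value that would validate the hypothesis
    deadline: date        # end of the test window
    result: Optional[float] = None
    status: str = "running"

log = [
    HypothesisRecord(
        statement="Dark mode will reduce the uninstall rate by 5%",
        metric="uninstall_rate",
        baseline=0.080,
        target=0.076,          # 5% relative reduction of the baseline
        deadline=date(2024, 6, 30),
    ),
]

# After the test window closes, record the outcome.
log[0].result = 0.074
log[0].status = "validated" if log[0].result <= log[0].target else "rejected"
print(log[0].status)
```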

Final Thoughts

Creating and testing hypotheses might seem challenging, but with the right tools, it’s simple. The key is asking the right questions, making clear statements, and using effective testing methods.

While developing based on intuition can be fast, using hypotheses helps create products that truly meet customers’ needs.

For those developing new apps on the Atlassian Marketplace, the Marketplace Reporter is your next step. Use it to dive into analytics, explore historical marketplace data, and spot trends. This will help you make informed, data-driven decisions easily.


From Theory to Practice: The Role of Hypotheses in Product Development

This article explores why working with hypotheses is not just a quirky aspect of product management but an essential practice in the field.

Let's dive into what a hypothesis actually is when it comes to crafting a standout product. Think of a hypothesis as your project's leading detective, uncovering the mysteries of user behavior, pinpointing problem sources, and suggesting solutions to not just improve your product, but to make it a market sensation.

Consider a straightforward example. Imagine you have a pizza delivery app. You hypothesize that enlarging the "Order" button will lead to more orders. This is your hypothesis! You're assuming that a change in X (the button size) will result in outcome Y (increased orders).

Or, suppose you plan to refine the product filtering on your e-commerce site, enabling users to find what they need faster. Your hypothesis might be, "Implementing a new filtering system by price and brand will boost purchase conversions."

In product development, a hypothesis isn't just a guess or an idea; it's a data-driven assumption about how certain changes can achieve desired outcomes. It serves as a map, guiding you through the ocean of user needs and transforming your product into a true gem.

So, don't hesitate to formulate hypotheses, test them through experiments and data analysis—you'll surely navigate your product towards success!

Hypothesis vs. Simple Statement: Understanding the Nuance

Let's clear up the difference between a hypothesis and a simple statement, in a way that both you and your grandmother can grasp.

A simple statement is like saying, "My cat loves milk." It seems like an obvious fact. But a hypothesis is more like a weather forecast: "If today is sunny, my cat will be happier." Here, there's an assumption and a link between two phenomena.

For example, a statement might be, "My grandmother enjoys knitting sweaters." This is a fact of life.

However, a hypothesis could be, "If I help my grandmother with household chores every day, she will be happier." Here, there's a presumption that active participation will lead to my grandmother's happiness.

See the difference? A hypothesis attempts to predict and explain the relationship between phenomena, while a statement just provides information about something. Remember, to develop your product like a boss, you need to craft compelling hypotheses and test them in reality!

Why Formulate Hypotheses in Product Development

Imagine you have an idea for an app that makes it faster and more convenient for people to catch up on news. You could formulate a hypothesis that adding a feature to alert users about significant events will increase app usage. This is your working hypothesis!

Here's why it's so crucial. Hypotheses help us understand which product modifications can make it even better. They allow us to test our assumptions and adapt our product development strategy on the fly.

Moreover, by testing hypotheses in the early stages of development, we can save time and money by identifying potential problems and fixing them before the product hits the market.

Formulating Hypotheses: A Step-by-Step Approach

Step 1: Identifying Key Problems or Opportunities for Verification

The first step is akin to treasure hunting in the business realm. You need to unearth the primary issues or potential opportunities that will form the basis of your hypotheses.

For instance, imagine you're developing a fitness app and users are reporting that the interface is too complex. The problem is already highlighted, and your task is to formulate and test a hypothesis!

The best way to proceed is by gathering data. Embrace your inner detective and delve into user data, analytics, reviews, and more. Remember, everything must be fact-based to ensure your hypothesis isn't mere speculation.

Once you've pinpointed your targets and problems, you're ready to craft a hypothesis. It should be specific, measurable, and include an anticipated outcome, such as "Simplifying the app's interface will increase user satisfaction and the time spent using it."

Step 2: Crafting Your Hypothesis: How to Structure It

Clarity comes first - start with a clear formulation of your hypothesis. If you're developing a financial management app, your hypothesis might be, "Introducing a feature for upcoming payment alerts will enhance user engagement and reduce the number of late payments."

Measurability is key - decide how you will measure the success of your hypothesis. For example, you could track an increase in user activity post-notification implementation.

Hypothesis vs. Goal - understand that a hypothesis is not a goal! It's an assumption about the outcomes of a change that can be tested, whereas a goal is the ultimate outcome you aim to achieve.

Consider alternatives and limitations - don’t forget to account for alternative scenarios and potential limitations, such as other factors that could impact your success metrics.

Testing is where the fun begins - after formulating your hypothesis, launch an experiment, collect data, and analyze the outcomes. If the hypothesis is disproven, it’s still valuable insight for future research.

Step 3: Defining Key Metrics and Experiments to Test Your Hypothesis

Before diving into the verification of product development hypotheses, let’s talk about how to define key metrics and design experiments for their testing. Imagine standing before a door of opportunities, behind which lie the answers to making your product even better. Ready for the challenge?

First, determine how to measure the success of your product change. These key metrics should be specific, measurable, and tied to your product's goals. For example, if your hypothesis is about improving user attraction, a key metric might be the conversion rate from the homepage to the sign-up page.

Hypothesis example: Adding video reviews of products will increase the conversion rate on the product page.

Key metric: Conversion rate on the product page (the share of visitors who complete a purchase).

With your key metrics in hand, it's time to unleash your creativity and devise experiments to test your hypothesis. Experiments should be structured, controlled, and capable of providing a definitive result on whether the hypothesis holds.

Experiment example:

Hypothesis: Simplifying the checkout process will increase purchase conversions.

Experiment: Split users into two groups—one with a simplified checkout process and the other with the standard process. Measure the purchase conversion rate in each group.
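The split-and-measure mechanics of such an experiment can be sketched in a few lines. This is a hypothetical illustration, not a real experimentation framework; the user IDs and counts are invented:

```python
import hashlib

def assign_group(user_id: str) -> str:
    """Deterministically bucket a user into 'control' or 'simplified'
    by hashing their ID, so repeat visits get the same experience."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return "simplified" if bucket == 0 else "control"

def conversion_rate(purchases: int, visitors: int) -> float:
    """Share of visitors who completed a purchase."""
    return purchases / visitors if visitors else 0.0

# Hypothetical results after the test window:
control = conversion_rate(purchases=180, visitors=5000)     # 0.036
simplified = conversion_rate(purchases=235, visitors=5000)  # 0.047
print(f"control={control:.1%}, simplified={simplified:.1%}")
```

In practice the assignment and tracking would live in your analytics stack; the point is that assignment is random with respect to the metric but deterministic per user.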

Typical Mistakes in Working with Hypotheses

The Importance of Specificity in Hypotheses

Let's discuss why specificity is crucial in the world of product development hypothesis formulation. Imagine trying to solve a puzzle, but instead of clear instructions, you're overwhelmed with numerous ambiguous paths. Intriguing, yes, but where to go and what to do? Similarly, vague hypotheses create confusion and can lead us nowhere.

Formulating a vague hypothesis is like playing the lottery with your product. You're giving it a chance to succeed, but without a clear plan, it's more luck than strategy. Knowing your direction ensures you move forward confidently rather than wandering in the dark.

For example:

Vague Hypothesis: "Improving the interface will increase user satisfaction."

This hypothesis leaves too many questions unanswered: What exactly should be improved in the interface? Which specific changes will lead to increased satisfaction?

To make a hypothesis clear and specific, ask yourself several questions. What do we want to change? How will this change affect users? How will we measure the effect? Be a careful architect working from a blueprint, not an explorer wandering a land of unknown opportunities without a map.

For instance:

Specific Hypothesis: "Increasing the size and contrast of the 'Order' button on the product page will increase conversion by 20% within a month."

This hypothesis is precise, measurable, and clearly defines the goal.
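Because the hypothesis states a concrete target (a 20% relative lift within a month), checking it reduces to one comparison once the month is over. A minimal sketch, with illustrative conversion rates:

```python
def hypothesis_met(baseline_rate: float, new_rate: float,
                   target_uplift: float = 0.20) -> bool:
    """True if the observed relative uplift in conversion reaches
    the 20% the hypothesis predicted."""
    return (new_rate - baseline_rate) / baseline_rate >= target_uplift

# e.g. conversion went from 5.0% to 6.2% over the month (a 24% lift):
print(hypothesis_met(baseline_rate=0.050, new_rate=0.062))
```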

Avoiding Ill-Conceived Experiments: How to Save Resources

Let's talk about how we can avoid the pitfalls of ill-conceived experiments that can lead to wasted time, money, and effort. Imagine embarking on a journey without knowing your destination or how to get there—a purposeless wandering in a sea of opportunities. Let's be more goal-oriented!

Neglecting careful planning of experiments risks wasting resources. Ill-conceived experiments often drain time and budget, leading to situations where effort is high but results don't meet expectations.

Ill-Conceived Experiment: Changing the "Buy" button to a random shade of the rainbow without data analysis.

Result: No change in conversion or, worse, a decrease.

How to Avoid Wasting Resources?

To dodge this trap, meticulously plan each experiment before launch. Set clear objectives, define expected outcomes, and identify key metrics to measure success. Be like detectives with a detailed plan of action before starting an investigation.

Well-Planned Experiment: Changing the text on the "Try for Free" button to "Start Free and Access All Features for 7 Days."

Result: An increase in users registering for the trial period.

Ignoring Data: The Importance of Basing Hypotheses on Facts

Imagine building a ship without considering a sea map—you might get lost in the ocean of possibilities. Let's dive into the world of data and discover why it's our invaluable treasure!

Why Base Hypotheses on Facts?

Ignoring data risks creating hypotheses based on assumptions and intuition, which could be far from reality. Data are our reliable compasses in the world of change. They help us understand where to go, which paths to take, and how to avoid pitfalls.

Data-Based Hypothesis: "Increasing the number of product recommendations based on user preferences will increase the average order value by 15%."

This hypothesis is grounded in real shopping preferences, making it more likely to succeed.

To successfully work with hypotheses, carefully analyze data. Use information about user behavior, feedback, and results from past experiments. Be like archaeologists sifting through traces of the past to formulate fact-based hypotheses, not guesses.

Data-Based Hypothesis: "Reducing the number of steps to checkout based on analysis of customer behavior will increase conversion at the checkout stage."

This hypothesis stems from specific data on user difficulties during the purchase phase.

Tools for Working with Hypotheses

Popular Online Tools and Platforms for Formulating and Testing Hypotheses

Let's explore a few popular online tools that will become your faithful allies in innovating and enhancing user experience. Ready for the adventure? Let's dive in!

Optimizely is a convenient tool for A/B testing and personalization, enabling you to test different page versions, design elements, and product functionalities.

Usage example: Suppose you hypothesize that changing the "Buy" button color will increase conversion rates. With Optimizely, you can easily set up an A/B test and compare which variant truly attracts more customers.

Google Optimize was a free A/B testing tool from Google for running experiments on web pages and analyzing their effectiveness. Note that Google sunset Optimize in September 2023, so new experiments will need a third-party platform such as Optimizely or VWO.

Usage example: If you want to test the hypothesis that altering the homepage headline will improve user retention, such a tool allows you to set up the test and monitor changes in user behavior.

Hotjar offers tools for analyzing user behavior on your site, including heatmaps, session recordings, and surveys.

Usage example: Imagine you hypothesize that users can't find the "Call Us" button due to its invisibility on the page. Hotjar enables you to analyze user behavior and either confirm or refute your hypothesis.

Recommendations for Choosing Tools Based on Team Needs

Choosing tools is like picking out a suit—it needs to fit both your size and style. Let's figure out how to determine which tool is right for your team!

For teams passionate about analytics and experiments:

Recommendation: A/B testing tools like Optimizely or Google Optimize are suitable for those eager to put every hypothesis to the test and extract valuable data from each experiment.

Usage example: Your e-commerce team suspects that changing the order of product display on the homepage will increase conversion rates. Using Optimizely, you conduct an A/B test to find the optimal arrangement.

For teams focused on user experience:

Recommendation: Behavior analysis tools like Hotjar will help you understand how users interact with your product and where issues arise.

Usage example: Through Hotjar, your team discovers that most users don't scroll to the end of the service description page. This insight becomes the basis for a hypothesis about the need for brevity and clarity in the text.

For teams emphasizing design and visual experience:

Recommendation: Prototyping and design tools like Figma or Adobe XD can be an excellent choice for teams working on improving user interfaces.

Usage example: After receiving feedback that site navigation is cumbersome, your team uses Figma to create a new prototype with an improved structure and navigation.

Wrapping It Up

So, why is effective hypothesis management the key to success in the product world? Hypotheses are not just assumptions; they are a powerful tool that helps teams align, move forward, and achieve success. Properly managing hypotheses reduces risks, speeds up product development, and leads to more targeted outcomes. Hypotheses are your guide in the world of endless possibilities for development and improvement. Remember, diligent work, patience, and data analysis will help you unlock new horizons and bring the most ambitious ideas to life. Let your product development journey be paved with valuable hypotheses and successful solutions!

If you need assistance with setting up analytics or developing a data collection flow from various analytical tools, don't hesitate to book a free call with our CTO or leave your contact details on our website, and we will surely help you address your concerns!

Shipping Your Product in Iterations: A Guide to Hypothesis Testing


By Kumara Raghavendra

Kumara has successfully delivered high-impact products in various industries ranging from eCommerce, healthcare, travel, and ride-hailing.


A look at the App Store on any phone will reveal that most installed apps have had updates released within the last week. A website visit after a few weeks might show some changes in the layout, user experience, or copy.

Today, software is shipped in iterations to validate assumptions and the product hypothesis about what makes a better user experience. At any given time, companies like booking.com (where I worked before) run hundreds of A/B tests on their sites for this very purpose.

For applications delivered over the internet, there is no need to decide on the look of a product 12-18 months in advance, and then build and eventually ship it. Instead, it is perfectly practical to release small changes that deliver value to users as they are being implemented, removing the need to make assumptions about user preferences and ideal solutions—for every assumption and hypothesis can be validated by designing a test to isolate the effect of each change.

In addition to delivering continuous value through improvements, this approach allows a product team to gather continuous feedback from users and then course-correct as needed. Creating and testing hypotheses every couple of weeks is a cheaper and easier way to build a course-correcting and iterative approach to creating product value .

What Is Hypothesis Testing in Product Management?

While shipping a feature to users, it is imperative to validate assumptions about design and features in order to understand their impact in the real world.

This validation is traditionally done through product hypothesis testing , during which the experimenter outlines a hypothesis for a change and then defines success. For instance, if a data product manager at Amazon has a hypothesis that showing bigger product images will raise conversion rates, then success is defined by higher conversion rates.

One of the key aspects of hypothesis testing is the isolation of different variables in the product experience in order to be able to attribute success (or failure) to the changes made. So, if our Amazon product manager had a further hypothesis that showing customer reviews right next to product images would improve conversion, it would not be possible to test both hypotheses at the same time. Doing so would result in failure to properly attribute causes and effects; therefore, the two changes must be isolated and tested individually.

Thus, product decisions on features should be backed by hypothesis testing to validate the performance of features.

Different Types of Hypothesis Testing

A/B Testing


One of the most common use cases to achieve hypothesis validation is randomized A/B testing, in which a change or feature is released at random to one-half of users (A) and withheld from the other half (B). Returning to the hypothesis of bigger product images improving conversion on Amazon, one-half of users will be shown the change, while the other half will see the website as it was before. The conversion will then be measured for each group (A and B) and compared. In case of a significant uplift in conversion for the group shown bigger product images, the conclusion would be that the original hypothesis was correct, and the change can be rolled out to all users.
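To decide whether an uplift like this is a "significant" effect rather than noise, one standard approach (a common statistical technique, not something specific to this article) is a two-proportion z-test; the sample counts below are invented for illustration:

```python
from math import erf, sqrt

def two_proportion_z_test(conversions_a: int, n_a: int,
                          conversions_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two conversion rates.
    Returns (z statistic, p-value) under the normal approximation."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
    return z, p_value

# Variant B (bigger images) converted 470/10000 vs. A's 400/10000:
z, p = two_proportion_z_test(400, 10_000, 470, 10_000)
print(f"z={z:.2f}, p={p:.3f}")  # roll out only if p is below your threshold
```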

Multivariate Testing


Ideally, each variable should be isolated and tested separately so as to conclusively attribute changes. However, such a sequential approach to testing can be very slow, especially when there are several versions to test. To continue with the example, in the hypothesis that bigger product images lead to higher conversion rates on Amazon, “bigger” is subjective, and several versions of “bigger” (e.g., 1.1x, 1.3x, and 1.5x) might need to be tested.

Instead of testing such cases sequentially, a multivariate test can be adopted, in which users are not split in half but into multiple variants. For instance, four groups (A, B, C, D) are made up of 25% of users each, where A-group users will not see any change, whereas those in variants B, C, and D will see images bigger by 1.1x, 1.3x, and 1.5x, respectively. In this test, multiple variants are simultaneously tested against the current version of the product in order to identify the best variant.
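Mechanically, the only difference from the A/B assignment is hashing users into four buckets instead of two. A hypothetical sketch (the scale factors mirror the example above):

```python
import hashlib

# Image scale factor per variant: group A keeps the current size.
VARIANT_SCALES = {0: 1.0, 1: 1.1, 2: 1.3, 3: 1.5}

def image_scale(user_id: str) -> float:
    """Hash each user into one of four equal-sized groups (A-D)
    and return the product-image scale that group should see."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 4
    return VARIANT_SCALES[bucket]
```

Conversion is then measured per bucket, and the best-performing scale wins. Because each group is smaller, a multivariate test needs more total traffic than a simple A/B test to reach the same confidence.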

Before/After Testing

Sometimes, it is not possible to split the users in half (or into multiple variants) as there might be network effects in place. For example, if the test involves determining whether one logic for formulating surge prices on Uber is better than another, the drivers cannot be divided into different variants, as the logic takes into account the demand and supply mismatch of the entire city. In such cases, a test will have to compare the effects before the change and after the change in order to arrive at a conclusion.


However, the constraint here is the inability to isolate the effects of seasonality and externality that can differently affect the test and control periods. Suppose a change to the logic that determines surge pricing on Uber is made at time t , such that logic A is used before and logic B is used after. While the effects before and after time t can be compared, there is no guarantee that the effects are solely due to the change in logic. There could have been a difference in demand or other factors between the two time periods that resulted in a difference between the two.

Time-based On/Off Testing


The downsides of before/after testing can be overcome to a large extent by deploying time-based on/off testing, in which the change is introduced to all users for a certain period of time, turned off for an equal period of time, and then repeated for a longer duration.

For example, in the Uber use case, the change can be shown to drivers on Monday, withdrawn on Tuesday, shown again on Wednesday, and so on.

While this method doesn’t fully remove the effects of seasonality and externality, it does reduce them significantly, making such tests more robust.
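The alternating schedule can be as simple as keying the active logic off the calendar day. A toy sketch (the logic names "A" and "B" are placeholders):

```python
from datetime import date, timedelta

def surge_logic_for(day: date) -> str:
    """Alternate daily between the existing surge logic 'A' and the
    new logic 'B', so both see a similar mix of conditions over time."""
    return "B" if day.toordinal() % 2 == 0 else "A"

# Consecutive days always get different logic:
start = date(2024, 1, 1)
schedule = [surge_logic_for(start + timedelta(days=i)) for i in range(7)]
print(schedule)  # strictly alternating pattern across the week
```

Because a week has an odd number of days, strict daily alternation also swaps which weekdays each logic covers from week to week, which further dampens day-of-week effects.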

Test Design

Choosing the right test for the use case at hand is an essential step in validating a hypothesis in the quickest and most robust way. Once the choice is made, the details of the test design can be outlined.

The test design is simply a coherent outline of:

  • The hypothesis to be tested: Showing users bigger product images will lead them to purchase more products.
  • Success metrics for the test: Customer conversion
  • Decision-making criteria for the test: The test validates the hypothesis if users in the variant show a higher conversion rate than those in the control group.
  • Metrics that need to be instrumented to learn from the test: Customer conversion, clicks on product images

In the case of the product hypothesis example that bigger product images will lead to improved conversion on Amazon, the success metric is conversion and the decision criteria is an improvement in conversion.

After the right test is chosen and designed, and the success criteria and metrics are identified, the results must be analyzed. To do that, some statistical concepts are necessary.

When running tests, it is important to ensure that the two variants picked for the test (A and B) do not have a bias with respect to the success metric. For instance, if the variant that sees the bigger images already has a higher conversion than the variant that doesn’t see the change, then the test is biased and can lead to wrong conclusions.

In order to ensure no bias in sampling, one can observe the mean and variance for the success metric before the change is introduced.
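As an illustrative sketch (the 5% tolerance and the daily rates below are invented for the example), such a pre-test check might compare the two groups' means over the period before the change:

```python
from statistics import mean

def split_looks_unbiased(control: list[float], variant: list[float],
                         tolerance: float = 0.05) -> bool:
    """Flag the split as biased if the groups' pre-experiment means
    differ by more than `tolerance` in relative terms."""
    m_control, m_variant = mean(control), mean(variant)
    return abs(m_control - m_variant) / m_control <= tolerance

# Daily conversion rates for each group in the week before the change:
pre_control = [0.040, 0.041, 0.039, 0.042, 0.040]
pre_variant = [0.041, 0.040, 0.040, 0.041, 0.039]
print(split_looks_unbiased(pre_control, pre_variant))
```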

Significance and Power

Once a difference between the two variants is observed, it is important to conclude that the change observed is an actual effect and not a random one. This can be done by computing the significance of the change in the success metric.

In layman’s terms, the significance level measures how often the test claims that bigger images lead to higher conversion when they actually don’t (a false positive). Power measures how often the test detects that bigger images lead to higher conversion when they actually do.

So, tests need a high power and a low significance level to produce trustworthy results.
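These two quantities, together with the size of the effect you hope to detect, determine how many users the test needs. A back-of-the-envelope calculator using the standard normal approximation (the z-values below correspond to the conventional 5% significance and 80% power defaults):

```python
from math import ceil

def sample_size_per_variant(p_base: float, relative_uplift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate users needed per variant to detect a relative uplift
    over baseline conversion p_base at 5% significance (z=1.96, two-sided)
    and 80% power (z=0.84), using the normal approximation."""
    p_new = p_base * (1 + relative_uplift)
    p_avg = (p_base + p_new) / 2
    delta = p_new - p_base
    return ceil((z_alpha + z_beta) ** 2 * 2 * p_avg * (1 - p_avg) / delta ** 2)

# Detecting a 10% relative lift on a 4% baseline takes tens of thousands of
# users per variant; a 20% lift needs roughly a quarter as many:
print(sample_size_per_variant(0.04, 0.10), sample_size_per_variant(0.04, 0.20))
```

The intuition to take away: halving the effect you want to detect roughly quadruples the required sample, which is why tiny expected uplifts make tests slow.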

While an in-depth exploration of the statistical concepts involved in product management hypothesis testing is out of scope here, the following actions are recommended to enhance knowledge on this front:

  • Data analysts and data engineers are usually adept at identifying the right test designs and can guide product managers, so make sure to utilize their expertise early in the process.
  • There are numerous online courses on hypothesis testing, A/B testing, and related statistical concepts on platforms such as Udemy, Udacity, and Coursera.
  • Tools such as Google’s Firebase and Optimizely can make the process easier thanks to their out-of-the-box capabilities for running the right tests.

Using Hypothesis Testing for Successful Product Management

In order to continuously deliver value to users, it is imperative to test various hypotheses, for the purpose of which several types of product hypothesis testing can be employed. Each hypothesis needs to have an accompanying test design, as described above, in order to conclusively validate or invalidate it.

This approach helps to quantify the value delivered by new changes and features, bring focus to the most valuable features, and deliver incremental iterations.


Understanding the basics

What is a product hypothesis?

A product hypothesis is an assumption that some improvement in the product will bring an increase in important metrics like revenue or product usage statistics.

What are the three required parts of a hypothesis?

The three required parts of a hypothesis are the assumption, the condition, and the prediction.

Why do we do A/B testing?

We do A/B testing to verify that a product change actually improves our tracked metrics before rolling it out to everyone.

What is A/B testing used for?

A/B testing is used to check if our product improvements create the desired change in metrics.

What is A/B testing and multivariate testing?

A/B testing and multivariate testing are types of hypothesis testing. A/B testing checks how important metrics change with and without a single change in the product. Multivariate testing can track multiple variations of the same product improvement.


4 types of product assumptions and how to test them


Understanding, identifying, and testing product assumptions is a cornerstone of product development.


To some extent, it’s the primary responsibility of a product manager to handle assumptions well to drive product outcomes.

Let’s dive deep into what assumptions are, why they are critical, the common types of assumptions, and, most importantly, how to test them.

What are product assumptions?

Product assumptions are preconceived beliefs or hypotheses that product managers establish during the product development cycle, providing an initial framework for decision-making. These assumptions, which can involve features, user behaviors, market trends, or technical feasibility, are integral to the iterative process of product creation and validation.

Assumptions guide the prototyping, testing, and adjustment stages, allowing the team to refine and improve the product in response to real-world feedback.

Leveraging product assumptions effectively is a cornerstone of risk management in product development because it aids in reducing uncertainty, saving resources, and accelerating time to market. Remember, a key part of a product manager’s role is to continuously challenge and validate product assumptions to ensure the product remains aligned with consumer needs and market dynamics.

Whatever you do, you don’t do it without a reason. For example, if you are building a retention-focused feature to drive revenue, you automatically assume that the feature will improve your revenue metrics and that it’ll deliver enough value for users that they’ll retain better.

In short, assumptions are all the beliefs you have when pursuing a particular idea, whether validated or not.

Why are assumptions important for product managers?

You can’t overemphasize the importance of assumptions in product management. For PMs, they are the building blocks of everything we do.

Ultimately, our job is to drive product outcomes by pursuing various initiatives we believe will contribute to the outcome. We decide which initiatives to pursue based on the beliefs we hold:

Product Assumptions Diagram

If our assumptions are correct, the initiative is a success, and there should be a tangible impact on the outcome. If they turn out wrong, we might fail to drive the impact we hope to see. We may even do more harm than good.

Because one initiative is often based on numerous assumptions, and various solutions can share the same assumptions, testing individual hypotheses is faster and cheaper than testing whole initiatives:

Validating Product Assumptions About Potential Solutions

Moreover, testing an initiative with multiple unvalidated assumptions makes it hard to distinguish which hypotheses contributed to its success and which didn’t. Testing shared assumptions can help us raise confidence in multiple solutions simultaneously.


In most cases, you’re better off focusing on testing individual assumptions first than jumping straight into solution development.

4 types of product assumptions

There are various types of assumptions. However, as a product manager, there are four important assumptions that you must understand and learn how to test:

  • Desirability assumptions
  • Viability assumptions
  • Feasibility assumptions
  • Usability assumptions

1. Desirability assumptions

When you assume solution desirability, you are trying to answer the question, “Do our users want this solution?”

After all, in the vast majority of cases, there’s no reason to pursue an initiative that isn’t interesting for your end-users.

Desirability assumptions include questions such as:

  • Does this solution solve a painful enough problem?
  • Is the problem we are solving relevant to enough users?
  • Is our proposed way of solving the problem optimal?
  • Will users understand the value they can get from this solution?

2. Viability assumptions

Viability determines whether the initiative makes sense from a business perspective.

Delivering value for users is great, but to be truly successful, an initiative must also deliver enough ROI for the business to grow and prosper (unless, of course, you work for a nonprofit where revenue isn’t the primary goal).

Viability assumptions include questions such as:

  • Will we see a positive impact on business metrics?
  • Does this initiative fit our current business model?
  • Does the solution align with our long-term product strategy?
  • Can we expect a satisfactory return on investment?

3. Feasibility assumptions

Even the most desirable and viable solutions are only relevant if they are possible to build, implement, and maintain.

Before committing to any direction, ensure you can deliver the initiative within your current constraints.

You can assess feasibility by answering questions such as:

  • Does our current technology stack allow such an implementation?
  • Do we have the resources and skillset to proceed with this initiative?
  • Do we have means of maintaining the initiative?
  • Can we handle the technical complexity of this solution?

4. Usability assumptions

Even after you implement a desirable, viable, and feasible solution, it won’t drive the expected results if users don’t understand how to use it.

The more usable the solution is, the more optimal outcomes it’ll yield.

Focus on answering questions such as:

  • Are our users aware that the new solution exists?
  • Do they understand what value they can get from it?
  • Is it clear how to find and use the solution?
  • Is there friction or needless complexity that might prevent users from adopting the solution?

How to use an assumption map

An assumption map is a powerful technique that can help you identify, organize, and prioritize assumptions you make with your initiatives.

Check out our assumption mapping article for more details if that sounds valuable.

For the purpose of this article, I’ll assume you’ve already identified and prioritized your assumptions.

Testing product assumptions

Now let’s take a look at some ways you can test your assumptions. While the best method depends heavily on the type of assumption you are testing, this library should be a solid starting point:

Testing desirability


There’s no way to test desirability without interacting with your users. Get out of the door, one way or another, and see if the solution is something your users truly want.

Techniques for assessing the desirability of a solution include:

  • User interviews
  • Landing pages
  • Mock sales
  • Crowdfunding
  • Alpha and beta testing

One of the fastest and most insightful desirability validation techniques is to interview your target users .

You don’t want to ask users directly whether they would use your solution, because doing so produces skewed answers. Instead, you want to understand the user’s problem, how they describe it, and the most significant pain points they have. You can then look at your proposed solution and judge whether it could potentially solve the problems users mentioned.

You can create a product landing page even if you don’t yet have the product. By monitoring the engagement on the site, you can gauge the overall interest in the solution; if users bounce from the site after a few seconds, they are probably not interested.

You can take it a step further and include the option to subscribe to a waitlist. Signing up would be a powerful signal that users are genuinely interested.
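To make the kind of signal you are looking for concrete, here is a minimal sketch of the two landing-page metrics discussed above; all numbers are hypothetical:

```python
# Hypothetical landing-page analytics for a one-week desirability test
visitors = 4_200
bounced_quickly = 2_940     # left within a few seconds
waitlist_signups = 168

bounce_rate = bounced_quickly / visitors
signup_rate = waitlist_signups / visitors

print(f"Bounce rate: {bounce_rate:.0%}")          # high bounce = weak interest
print(f"Waitlist conversion: {signup_rate:.0%}")  # signups = strong interest signal
```

Even a few percent of visitors joining a waitlist is a much stronger signal than raw traffic, because it costs the user something (their email and attention).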

If you are building a B2B solution, you can try to actually sell it to potential clients. There are three ways to approach this:

  • Mock sales — A sales simulation in which you try to sell the solution but don’t commit to an actual sale
  • Letter of intent — You ask your potential client to sign a letter of intent to buy the solution once it’s live
  • Actual sale — In some cases, you might be able to finalize the sale before the product is even live, with an option to revert the sale if you decide not to pursue the direction after all

If people are willing to pay for the solution before it is even created, the desirability is really high.

Crowdfunding is a presale option for mass-market B2C products. However, it’s viable mostly for brand-new products.

By promoting your idea on sites like Kickstarter, you can not only gauge overall desirability but also capture funding to improve the viability of the idea.

The most powerful yet expensive way of testing desirability is to build a minimal version of the solution. You can then conduct alpha and beta tests to see actual user engagement and gather real-time feedback on the further direction.

Due to the cost, this method is recommended after you have some initial confirmation with other validation techniques.

Testing viability

You can test the viability of assumptions by taking a closer look at the business side of things to evaluate whether the initiative fits well with, or contradicts, other areas of the business.

Techniques for testing the viability of your product include:

  • Business model review
  • Strategy canvas
  • Business case

The first step in assessing initiative viability is to review your current business model and see how it would fit there:

Business Model Review Template

Does the solution connect well to your current value proposition and distribution channel? Do you have key resources and partners to pull it off? Does it sync well with key activities you are performing?

Ideally, your initiative will not disrupt your business model but will contribute to it as a whole.

A viable solution helps you build a competitive advantage in the market. One way to evaluate viability is to map a strategy canvas of your competitive alternatives and judge whether the initiative will help you strengthen your advantage or reduce your weaknesses:

Strategy Canvas Example

A great solution helps you maintain and expand your competitive edge on the market.

With basic viability tested, it’s worth investing some time to build a robust business case.

Gather all relevant input and try to build well-informed projections:

  • How many people can you reach?
  • How expensive is the solution going to be?
  • What’s the expected long-term revenue gain and maintenance cost?
  • What is the anticipated ROI over time?

A strong business case will also help you pitch the idea to key stakeholders and compare the business viability of various initiatives and solutions to choose the most impactful one.

Testing feasibility

Validating whether a solution is possible to implement usually requires a team of subject matter experts to do a deep dive into potential implementation details. Two common approaches are:

  • Technical research
  • Proof of concept (PoC)

This step includes researching various implementation methods and limitations to determine whether a solution is feasible.

For example, suppose you are considering a range of trial lengths for various user segments in your mobile product. In that case, you might need to review app store policy and limitations to see if it’s allowed out of the box or if any external solution is necessary.

If an external solution is needed, you might investigate whether there’s an SDK that supports it or whether it needs to be built from scratch (thus increasing complexity and reducing the viability of the solution).

For more complex initiatives, you might need to develop a proof of concept, sometimes called a “technical MVP.” It involves building a minimal version of the most uncertain part of the solution and evaluating whether it even works. A proof of concept can range from a few lines of code for simple tests to fully-fledged development for the most complex initiatives.

Testing usability

Usability is the most straightforward thing to test. You want to put the solution in front of the user to see if they understand how to use it and to identify potential friction points.

There are two common ways to do this:

  • Prototype testing
  • Analytics review

Prototypes are at the forefront of usability testing. Build a simulation of the experience you want to provide, ask the user to finish a specific task, and observe how they interact with the product.

Depending on the level of uncertainty and the investment you want to make, prototypes can vary from quick-and-dirty paper prototypes to fully interactive, no-code solutions.

If you are already at an MVP stage, you have the benefit of having actual data on how the solution is used. Analyze this data closely to evaluate how discoverable the product is, how much time it takes for users to complete specific tasks, and what the most common dropout moments are.
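As a sketch of what such an analytics review might look like in practice, here is a minimal funnel analysis; the event names and counts are hypothetical:

```python
# Hypothetical funnel event counts pulled from an analytics tool
funnel = [
    ("visited_feature_page", 12_000),
    ("started_task", 7_800),
    ("completed_step_1", 6_100),
    ("completed_task", 2_300),
]

def dropout_report(steps):
    """Print step-to-step continuation rates and return the worst transition."""
    worst = (None, 1.0)
    for (prev_name, prev_n), (name, n) in zip(steps, steps[1:]):
        rate = n / prev_n
        print(f"{prev_name} -> {name}: {rate:.0%} continue")
        if rate < worst[1]:
            worst = (f"{prev_name} -> {name}", rate)
    return worst

step, rate = dropout_report(funnel)
print(f"Biggest dropout: {step} ({1 - rate:.0%} of users lost)")
```

The transition with the lowest continuation rate is where a usability investigation (e.g., with prototypes or session recordings) will pay off most.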

Combining quantitative data review with qualitative insights from prototypes will help you validate most of your usability assumptions.

Every initiative you pursue is based on a set of underlying assumptions — that is, a set of preconceived beliefs we have when deciding which direction to pursue.

Validating these beliefs is a critical part of product management. After all, it’s easier and cheaper to test individual assumptions than to test solutions as a whole.

Make sure you identify your main desirability, viability, feasibility, and usability assumptions and test them before committing to a fully-fledged solution.

I recommend you store the insights from assumptions tests for future reference. Many solutions tend to share similar assumptions, so the insights might help you speed up your validation process in the future.

How do you define and measure your product hypothesis?

Hypothesis in product management is like making an educated guess or assumption about something related to a product, such as what users need or how a new feature might work. It’s a statement that you can test to see if it’s true or not, usually by trying out different ideas and seeing what happens. By testing hypotheses, product managers can figure out what works best for the product and its users, helping to make better decisions about how to improve and develop the product further.

Table of Contents

  • What Is a Hypothesis in Product Management?
  • How Does the Product Management Hypothesis Work?
  • How to Generate a Hypothesis for a Product
  • How to Make a Hypothesis Statement for a Product
  • How to Validate Hypothesis Statements
  • What Comes After Hypothesis Validation?
  • Final Thoughts on Product Hypotheses
  • Product Management Hypothesis Example
  • Conclusion
  • FAQs

What Is a Hypothesis in Product Management?

In product management, a hypothesis is a proposed explanation or assumption about a product, feature, or aspect of the product’s development or performance. It serves as a statement that can be tested, validated, or invalidated through experimentation and data analysis. Hypotheses play a crucial role in guiding product managers’ decision-making, informing product development strategies, and prioritizing initiatives. In short, hypotheses in product management are educated guesses about the relationship between product changes and their impact on user behaviour or business outcomes.

How Does the Product Management Hypothesis Work?

Product management hypotheses work by guiding product managers through a structured process of identifying problems, proposing solutions, and testing assumptions to drive product development and improvement. Here’s how the process typically works:


  • Identifying Problems : Product managers start by identifying potential problems or opportunities for improvement within their product. This could involve gathering feedback from users, analyzing data, conducting market research, or observing user behaviour.
  • Formulating Hypotheses : Based on the identified problems or opportunities, product managers formulate hypotheses that articulate their assumptions about the causes of these issues and potential solutions. Hypotheses are typically written as clear, testable statements that specify what the expected outcomes will be if the hypothesis is true.
  • Designing Experiments : Product managers design experiments or tests to validate or invalidate their hypotheses. This could involve implementing changes to the product, such as introducing new features, modifying existing functionalities, or adjusting user experiences. Experiments may also involve collecting data through surveys, interviews, user testing, or analytics tools.
  • Setting Success Metrics : Product managers define success metrics or key performance indicators (KPIs) that will be used to measure the effectiveness of the experiments. These metrics should be aligned with the goals of the hypothesis and provide quantifiable insights into whether the proposed solution is achieving the desired outcomes.
  • Executing Experiments : Product managers implement the planned changes or interventions in the product and monitor their impact on the defined success metrics. This could involve conducting A/B tests, where different versions of the product are presented to different groups of users, or running pilot programs to gather feedback from a subset of users.

How to Generate a Hypothesis for a Product

Generating a hypothesis for a product involves systematically identifying potential problems, proposing solutions, and formulating testable assumptions about how changes to the product could address user needs or improve performance. Here’s a step-by-step process for generating hypotheses:


  • Start by gaining a deep understanding of your target users and their needs, preferences, and pain points. Conduct user research, including surveys, interviews, usability tests, and behavioral analysis, to gather insights into user behavior and challenges they face when using your product.
  • Review qualitative and quantitative data collected from user interactions, analytics tools, customer support inquiries, and feedback channels. Look for patterns, trends, and recurring issues that indicate areas where the product may be falling short or where improvements could be made.
  • Clarify the goals and objectives you want to achieve with your product. This could include increasing user engagement, improving retention rates, boosting conversion rates, or enhancing overall user satisfaction. Align your hypotheses with these objectives to ensure they are focused and actionable.
  • Brainstorm potential solutions or interventions that could address the identified user needs or pain points. Encourage creativity and divergent thinking within your product team to generate a wide range of ideas. Consider both incremental improvements and more radical changes to the product.
  • Evaluate and prioritize the potential solutions based on factors such as feasibility, impact on user experience, alignment with strategic goals, and resource constraints. Focus on solutions that are likely to have the greatest impact on addressing user needs and achieving your objectives.

How to Make a Hypothesis Statement for a Product

To make a hypothesis statement for a product, follow these steps:

  • Identify the Problem : Begin by identifying a specific problem or opportunity for improvement within your product. This could be based on user feedback, data analysis, market research, or observations of user behavior.
  • Define the Proposed Solution : Determine what change or intervention you believe could address the identified problem or opportunity. This could involve introducing a new feature, improving an existing functionality, changing the user experience, or addressing a specific user need.
  • Formulate the Hypothesis : Write a clear, specific, and testable statement that articulates your assumption about the relationship between the proposed solution and its expected impact on user behavior or business outcomes. Your hypothesis should follow the structure: If [proposed solution], then [expected outcome].
  • Specify Success Metrics : Define the key metrics or performance indicators that will be used to measure the success of your hypothesis. These metrics should be aligned with your objectives and provide quantifiable insights into whether the proposed solution is achieving the desired outcomes.
  • Consider Constraints and Assumptions : Take into account any constraints or assumptions that may affect the validity of your hypothesis. This could include technical limitations, resource constraints, dependencies on external factors, or assumptions about user behavior.

How to Validate Hypothesis Statements

Validating hypothesis statements in product management involves testing the proposed solutions or interventions to determine whether they achieve the desired outcomes. Here’s a step-by-step guide on how to validate hypothesis statements:

  • Design Experiments or Tests : Based on your hypothesis statement, design experiments or tests to evaluate the proposed solution’s effectiveness. Determine the experimental setup, including the control group (no changes) and the experimental group (where the proposed solution is implemented).
  • Define Success Metrics : Specify the key metrics or performance indicators that will be used to measure the success of your hypothesis. These metrics should be aligned with your objectives and provide quantifiable insights into whether the proposed solution is achieving the desired outcomes.
  • Collect Baseline Data : Before implementing the proposed solution, collect baseline data on the identified metrics from both the control group and the experimental group. This will serve as a reference point for comparison once the experiment is conducted.
  • Implement the Proposed Solution : Implement the proposed solution or intervention in the experimental group while keeping the control group unchanged. Ensure that the implementation is consistent with the hypothesis statement and that any necessary changes are properly documented.
  • Monitor and Collect Data : Monitor the performance of both the control group and the experimental group during the experiment. Collect data on the defined success metrics, track user behavior, and gather feedback from users to assess the impact of the proposed solution.
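Once data from both groups has been collected, you can check whether the difference in a success metric is likely real rather than noise. Here is a minimal sketch using a two-proportion z-test; the signup counts are hypothetical:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is the conversion rate in B different from A?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical A/B results: control 180/1000 convert, experiment 220/1000
z, p = two_proportion_z(180, 1_000, 220, 1_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below your pre-agreed threshold (commonly 0.05) supports the hypothesis; deciding that threshold before the experiment runs keeps the analysis honest.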

What Comes After Hypothesis Validation?

After hypothesis validation in product management, the process typically involves several key steps to leverage the findings and insights gained from the validation process:

  • Data Analysis and Interpretation : Once the hypothesis has been validated (or invalidated), product managers analyze the data collected during the experiment to gain deeper insights into user behavior, product performance, and the impact of the proposed solution. This involves interpreting the results in the context of the hypothesis statement and the defined success metrics.
  • Documentation of Findings : Document the findings of the hypothesis validation process, including the outcomes of the experiment, key insights gained, and any lessons learned. This documentation serves as a valuable reference for future decision-making and helps ensure that knowledge is shared across the product team and organization.
  • Knowledge Sharing and Communication : Communicate the results of the hypothesis validation process to relevant stakeholders, including product team members, leadership, and other key decision-makers. Share insights, lessons learned, and recommendations for future action to ensure alignment and transparency within the organization.
  • Iterative Learning and Adaptation : Use the insights gained from hypothesis validation to inform future iterations of the product development process . Apply learnings from the experiment to refine the product strategy, adjust feature priorities, and make data-driven decisions about product improvements.
  • Further Experimentation and Testing : Based on the validated hypothesis and the insights gained, identify new areas for experimentation and testing. Continuously test new ideas, features, and hypotheses to drive ongoing product innovation and improvement. This iterative process of experimentation and learning helps product managers stay responsive to user needs and market dynamics.

Final Thoughts on Product Hypotheses

Product hypotheses serve as a cornerstone of the product management process, guiding decision-making, fostering innovation, and driving continuous improvement. Here are some final thoughts:

  • Foundation for Experimentation : Hypotheses provide a structured framework for formulating, testing, and validating assumptions about product changes and their impact on user behavior and business outcomes. By systematically testing hypotheses, product managers can gather valuable insights, mitigate risks, and make data-driven decisions.
  • Focus on User-Centricity : Effective hypotheses are rooted in a deep understanding of user needs, preferences, and pain points. By prioritizing user-centric hypotheses, product managers can ensure that product development efforts are aligned with user expectations and deliver meaningful value to users.
  • Iterative and Adaptive : The process of hypothesis formulation and validation is iterative and adaptive, allowing product managers to learn from experimentation, refine their assumptions, and iterate on their product strategies over time. This iterative approach enables continuous innovation and improvement in the product.
  • Data-Driven Decision Making : Hypothesis validation relies on empirical evidence and data analysis to assess the impact of proposed changes. By leveraging data to validate hypotheses, product managers can make informed decisions, mitigate biases, and prioritize initiatives based on their expected impact on key metrics.
  • Collaborative and Transparent : Formulating and validating hypotheses is a collaborative effort that involves input from cross-functional teams, stakeholders, and users. By fostering collaboration and transparency, product managers can leverage diverse perspectives, align stakeholders, and build consensus around product priorities.

Product Management Hypothesis Example

Here’s an example of a hypothesis statement in the context of product management:

  • Problem: Users are abandoning the onboarding process due to confusion about how to set up their accounts.
  • Proposed Solution: Implement a guided onboarding tutorial that walks users through the account setup process step-by-step.
  • Hypothesis Statement: If we implement a guided onboarding tutorial that walks users through the account setup process step-by-step, then we will see a decrease in the dropout rate during onboarding and an increase in the percentage of users completing account setup.
  • Success Metrics:
      • Percentage of users who complete the onboarding process
      • Time spent on the onboarding tutorial
      • Feedback ratings on the effectiveness of the tutorial

Experiment Design:

  • Control Group: Users who go through the existing onboarding process without the guided tutorial.
  • Experimental Group: Users who go through the onboarding process with the guided tutorial.
  • Duration: Run the experiment for two weeks to gather sufficient data.
  • Data Collection: Track the number of users who complete the onboarding process, the time spent on the tutorial, and collect feedback ratings from users.

Expected Outcome: We anticipate that users who go through the guided onboarding tutorial will complete account setup at a higher rate than users who go through the existing onboarding process without guidance.

By testing this hypothesis through an experiment and analyzing the results, product managers can validate whether implementing a guided onboarding tutorial effectively addresses the identified problem and improves the user experience.

Conclusion

Hypothesis statements are invaluable tools in the product management process, providing a structured approach to identifying problems, proposing solutions, and validating assumptions. By formulating clear, testable hypotheses, product managers can drive innovation, mitigate risks, and make data-driven decisions that ultimately lead to successful products.

FAQs

Q. What is the lean product hypothesis?

Lean hypothesis testing is a strategy within agile product development aimed at reducing risk, accelerating the development process, and refining product-market fit through the creation and iterative enhancement of a minimal viable product (MVP).

Q. What is the product value hypothesis?

The value hypothesis centers on the worth of your product to customers and is foundational to achieving product-market fit. This hypothesis is applicable to both individual products and entire companies, serving as a crucial element in determining alignment with market needs.

Q. What is the hypothesis for a minimum viable product?

Hypotheses for minimum viable products are testable assumptions supported by evidence. For instance, one hypothesis to validate could be whether people will be interested in the product at a certain price point; if not, adjusting the price downwards may be necessary.


My product management toolkit (5): assumptions and hypotheses

MAA1

Problem statements were the last product management tool I wrote about. Once you’ve defined and understood the problem(s) you’re looking to solve, the next step is to validate ways to solve them. As a product manager, there’s a risk of jumping straight into solutions or features without really evaluating the best way to tackle a problem.

I learned a lot from the “Lean UX” approach to things, as introduced by Jeff Gothelf and Josh Seiden . The key point to Lean UX is the definition and validation of assumptions and hypotheses. Ultimately, this approach is all about risk management; instead of one ‘big bang’ product release, you constantly iterate and learn from actual customer usage of the product. Some people refer to this approach as the “velocity of learning.”

Tool 5 — Assumptions and hypotheses

What are assumptions? — When thinking about problems and solutions we often make a lot of assumptions. For example, I like how Alan Klement points out that when we create user stories there’s a risk of making lots of assumptions (see below). I believe that the biggest problem isn’t so much in the assumptions themselves but more in not validating one’s assumptions before designing a product or service. What I love about the “Lean UX” approach is that it exposes assumptions early on and provides a way to validate these assumptions early and often.

Alan Klement — Taken from: https://medium.com/the-job-to-be-done/replacing-the-user-story-with-the-job-story-af7cdee10c27#.vlrixzuk2

What are hypotheses? — Hypotheses are a great way to test your assumptions. A hypothesis statement is a more granular version of the original assumption, formulated in such a way that it’s easy to test and measure specific desired outcomes:

Josh Seiden — Taken from: http://www.slideshare.net/UXSTRAT/ux-strat-2013-josh-seiden-lean-ux-ux-strat

You can use these hypotheses to test a specific product area or workflow. The key thing with assumptions and hypotheses is their focus on behavioural outcomes or changes, not just on the feature or solution in its own right. The other important thing I learned about assumptions is to validate your riskiest assumptions first. I’ve benefited enormously from using simple prototypes to validate risky assumptions such as “this feature will definitely solve my customer’s problem” or “customers will definitely pay for this service” before committing lots of time, money and effort to solving a problem.

Taken from: https://medium.com/@mwambach1/hypotheses-driven-ux-design-c75fbf3ce7cc#.bk8p1zvky

What assumptions and hypotheses aren’t? — I’ve seen some people fall into the trap of treating assumptions and hypotheses as absolute truths. The whole point of having assumptions and hypotheses is to be transparent upfront about educated guesses, unknowns, and risks.

Main learning point: Working with assumptions and hypotheses is essential, in my opinion. It’s all about risk management. Instead of building something for many months before getting any customer feedback, I’d always recommend identifying and validating your riskiest assumptions first, using an iterative approach to learn ‘early and often.’

Related links for further learning:

  • https://medium.com/the-job-to-be-done/replacing-the-user-story-with-the-job-story-af7cdee10c27#.7kn2clk0y
  • https://marcabraham.wordpress.com/2013/04/05/book-review-lean-ux/
  • https://www.uie.com/brainsparks/2014/05/21/josh-seiden-hypothesis-based-design-within-lean-ux/
  • https://medium.com/@mwambach1/hypotheses-driven-ux-design-c75fbf3ce7cc#.bk8p1zvky


Written by MAA1

Product person, author of "My Product Management Toolkit" and “Managing Product = Managing Tension” — see https://bit.ly/3gH2dOD .


How to Write a Strong Hypothesis | Steps & Examples

Published on May 6, 2022 by Shona McCombes . Revised on November 20, 2023.

A hypothesis is a statement that can be tested by scientific research. If you want to test a relationship between two or more variables, you need to write hypotheses before you start your experiment or data collection .

Example: Hypothesis

Daily apple consumption leads to fewer doctor’s visits.

Table of contents

  • What is a hypothesis
  • Developing a hypothesis (with example)
  • Hypothesis examples
  • Other interesting articles
  • Frequently asked questions about writing hypotheses

What is a hypothesis

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess – it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Variables in hypotheses

Hypotheses propose a relationship between two or more types of variables .

  • An independent variable is something the researcher changes or controls.
  • A dependent variable is something the researcher observes and measures.

If there are any control variables , extraneous variables , or confounding variables , be sure to jot those down as you go to minimize the chances that research bias  will affect your results.

Example: Daily exposure to the sun leads to increased levels of happiness.

In this example, the independent variable is exposure to the sun (the assumed cause) and the dependent variable is the level of happiness (the assumed effect).

Developing a hypothesis (with example)

Step 1. Ask a question

Writing a hypothesis begins with a research question that you want to answer. The question should be focused, specific, and researchable within the constraints of your project.

Step 2. Do some preliminary research

Your initial answer to the question should be based on what is already known about the topic. Look for theories and previous studies to help you form educated assumptions about what your research will find.

At this stage, you might construct a conceptual framework to ensure that you’re embarking on a relevant topic . This can also help you identify which variables you will study and what you think the relationships are between them. Sometimes, you’ll have to operationalize more complex constructs.

Step 3. Formulate your hypothesis

Now you should have some idea of what you expect to find. Write your initial answer to the question in a clear, concise sentence.

Step 4. Refine your hypothesis

You need to make sure your hypothesis is specific and testable. There are various ways of phrasing a hypothesis, but all the terms you use should have clear definitions, and the hypothesis should contain:

  • The relevant variables
  • The specific group being studied
  • The predicted outcome of the experiment or analysis

Step 5. Phrase your hypothesis in three ways

To identify the variables, you can write a simple prediction in  if…then form. The first part of the sentence states the independent variable and the second part states the dependent variable.

In academic research, hypotheses are more commonly phrased in terms of correlations or effects, where you directly state the predicted relationship between variables.

If you are comparing two groups, the hypothesis can state what difference you expect to find between them.

Step 6. Write a null hypothesis

If your research involves statistical hypothesis testing , you will also have to write a null hypothesis . The null hypothesis is the default position that there is no association between the variables. The null hypothesis is written as H 0 , while the alternative hypothesis is H 1 or H a .

  • H 0 : The number of lectures attended by first-year students has no effect on their final exam scores.
  • H 1 : The number of lectures attended by first-year students has a positive effect on their final exam scores.
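To make the H0/H1 decision concrete, here is a minimal sketch that computes Welch’s t statistic for the lecture-attendance example above; the exam scores are made up for illustration:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for a difference in means (H0: equal means)."""
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    na, nb = len(sample_a), len(sample_b)
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

# Hypothetical final exam scores, split by lecture attendance
high_attendance = [78, 85, 91, 74, 88, 82, 79, 90]
low_attendance = [65, 72, 70, 58, 75, 68, 62, 71]

t = welch_t(high_attendance, low_attendance)
print(f"t = {t:.2f}")  # |t| well above ~2 is evidence against H0
```

A large positive t here would lead you to reject H0 in favor of H1 (attendance has a positive effect); in practice you would compare t against the t-distribution with the appropriate degrees of freedom to get a p-value.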
Hypothesis examples

  • Research question: What are the health benefits of eating an apple a day?
    Hypothesis: Increasing apple consumption in over-60s will result in decreasing frequency of doctor’s visits.
    Null hypothesis: Increasing apple consumption in over-60s will have no effect on frequency of doctor’s visits.
  • Research question: Which airlines have the most delays?
    Hypothesis: Low-cost airlines are more likely to have delays than premium airlines.
    Null hypothesis: Low-cost and premium airlines are equally likely to have delays.
  • Research question: Can flexible work arrangements improve job satisfaction?
    Hypothesis: Employees who have flexible working hours will report greater job satisfaction than employees who work fixed hours.
    Null hypothesis: There is no relationship between working hour flexibility and job satisfaction.
  • Research question: How effective is high school sex education at reducing teen pregnancies?
    Hypothesis: Teenagers who received sex education lessons throughout high school will have lower rates of unplanned pregnancy than teenagers who did not receive any sex education.
    Null hypothesis: High school sex education has no effect on teen pregnancy rates.
  • Research question: What effect does daily use of social media have on the attention span of under-16s?
    Hypothesis: There is a negative correlation between time spent on social media and attention span in under-16s.
    Null hypothesis: There is no relationship between social media use and attention span in under-16s.


A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.


McCombes, S. (2023, November 20). How to Write a Strong Hypothesis | Steps & Examples. Scribbr. Retrieved August 12, 2024, from https://www.scribbr.com/methodology/hypothesis/




Harvard Business School Online's Business Insights Blog provides the career insights you need to achieve your goals and gain confidence in your business skills.


A Beginner’s Guide to Hypothesis Testing in Business


  • 30 Mar 2021

Becoming a more data-driven decision-maker can bring several benefits to your organization, enabling you to identify new opportunities to pursue and threats to abate. Rather than allowing subjective thinking to guide your business strategy, backing your decisions with data can empower your company to become more innovative and, ultimately, profitable.

If you’re new to data-driven decision-making, you might be wondering how data translates into business strategy. The answer lies in generating a hypothesis and verifying or rejecting it based on what various forms of data tell you.

Below is a look at hypothesis testing and the role it plays in helping businesses become more data-driven.


What Is Hypothesis Testing?

To understand what hypothesis testing is, it’s important first to understand what a hypothesis is.

A hypothesis or hypothesis statement seeks to explain why something has happened, or what might happen, under certain conditions. It can also be used to understand how different variables relate to each other. Hypotheses are often written as if-then statements; for example, “If this happens, then this will happen.”

Hypothesis testing , then, is a statistical means of testing an assumption stated in a hypothesis. While the specific methodology leveraged depends on the nature of the hypothesis and data available, hypothesis testing typically uses sample data to extrapolate insights about a larger population.

Hypothesis Testing in Business

When it comes to data-driven decision-making, there’s a certain amount of risk that can mislead a professional. This could be due to flawed thinking or observations, incomplete or inaccurate data , or the presence of unknown variables. The danger in this is that, if major strategic decisions are made based on flawed insights, it can lead to wasted resources, missed opportunities, and catastrophic outcomes.

The real value of hypothesis testing in business is that it allows professionals to test their theories and assumptions before putting them into action. This essentially allows an organization to verify its analysis is correct before committing resources to implement a broader strategy.

As one example, consider a company that wishes to launch a new marketing campaign to revitalize sales during a slow period. Doing so could be an incredibly expensive endeavor, depending on the campaign’s size and complexity. The company, therefore, may wish to test the campaign on a smaller scale to understand how it will perform.

In this example, the hypothesis that’s being tested would fall along the lines of: “If the company launches a new marketing campaign, then it will translate into an increase in sales.” It may even be possible to quantify how much of a lift in sales the company expects to see from the effort. Pending the results of the pilot campaign, the business would then know whether it makes sense to roll it out more broadly.


Key Considerations for Hypothesis Testing

1. Alternative Hypothesis and Null Hypothesis

In hypothesis testing, the hypothesis that’s being tested is known as the alternative hypothesis . Often, it’s expressed as a correlation or statistical relationship between variables. The null hypothesis , on the other hand, is a statement that’s meant to show there’s no statistical relationship between the variables being tested. It’s typically the exact opposite of whatever is stated in the alternative hypothesis.

For example, consider a company’s leadership team that historically and reliably sees $12 million in monthly revenue. They want to understand if reducing the price of their services will attract more customers and, in turn, increase revenue.

In this case, the alternative hypothesis may take the form of a statement such as: “If we reduce the price of our flagship service by five percent, then we’ll see an increase in sales and realize revenues greater than $12 million in the next month.”

The null hypothesis, on the other hand, would indicate that revenues wouldn’t increase from the base of $12 million, or might even decrease.


2. Significance Level and P-Value

Statistically speaking, if you were to run the same scenario 100 times, you’d likely receive somewhat different results each time. If you were to plot these results in a distribution plot, you’d see the most likely outcome is at the tallest point in the graph, with less likely outcomes falling to the right and left of that point.


With this in mind, imagine you’ve completed your hypothesis test and have your results, which indicate there may be a correlation between the variables you were testing. To understand your results' significance, you’ll need to identify a p-value for the test, which helps note how confident you are in the test results.

In statistics, the p-value depicts the probability that, assuming the null hypothesis is correct, you might still observe results that are at least as extreme as the results of your hypothesis test. The smaller the p-value, the more likely the alternative hypothesis is correct, and the greater the significance of your results.
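As a rough illustration of how software computes that probability, here is a minimal standard-library Python sketch (the function name is my own; a one-sided p-value is the upper tail of the standard normal distribution):

```python
import math

def p_value_from_z(z: float, two_sided: bool = False) -> float:
    """P(Z >= z) for a standard normal variable, computed via the
    complementary error function; doubled for a two-sided test."""
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return 2 * p if two_sided else p

# The familiar 1.96 cutoff corresponds to a two-sided p-value of ~0.05:
print(round(p_value_from_z(1.96, two_sided=True), 3))
```

The smaller the returned value, the less compatible your observation is with the null hypothesis.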

3. One-Sided vs. Two-Sided Testing

When it’s time to test your hypothesis, it’s important to leverage the correct testing method. The two most common hypothesis testing methods are one-sided and two-sided tests , or one-tailed and two-tailed tests, respectively.

Typically, you’d leverage a one-sided test when you have a strong conviction about the direction of change you expect to see due to your hypothesis test. You’d leverage a two-sided test when you’re less confident in the direction of change.


4. Sampling

To perform hypothesis testing in the first place, you need to collect a sample of data to be analyzed. Depending on the question you’re seeking to answer or investigate, you might collect samples through surveys, observational studies, or experiments.

A survey involves asking a series of questions to a random population sample and recording self-reported responses.

Observational studies involve a researcher observing a sample population and collecting data as it occurs naturally, without intervention.

Finally, an experiment involves dividing a sample into multiple groups, one of which acts as the control group. For each non-control group, the variable being studied is manipulated to determine how the data collected differs from that of the control group.


Learn How to Perform Hypothesis Testing

Hypothesis testing is a complex process involving different moving pieces that can allow an organization to effectively leverage its data and inform strategic decisions.

If you’re interested in better understanding hypothesis testing and the role it can play within your organization, one option is to complete a course that focuses on the process. Doing so can lay the statistical and analytical foundation you need to succeed.


Hypothesis testing in product development

Stephanos Theodotou

How to write effective hypotheses

Consider an assumption you hold about how one factor could affect another in the context of your product. For example, for a fictional product manager at the imaginary CurioCity SaaS, an assumption might be that:

“adding generative-AI capability in our app will increase the engagement of our customer base.”

However, unlike a simple assumption, a hypothesis should be a testable proposition that is formulated based on existing knowledge, theory, or observations. Hypotheses need to be specific, measurable, and defined in a way that allows them to be tested through empirical research — meaning by experience and active observation.

One of the reasons the above assumption is not a hypothesis is that it isn’t specific enough. While generative AI sounds excellent, the statement doesn’t specify what exactly we are trying to observe and therefore measure. Is it the number of prompts a user sends to the AI, or time spent on other AI-enabled features?

A compelling hypothesis should involve at least two specific variables to observe and measure. Consider the following one:

“During a free trial, the total number of additional colleagues a signed-up user invites to the app significantly affects their chance of converting to a paid subscription.”

The variables in this case are:

a) the number of colleagues a user invites

b) the conversion event.

Both can be measured and observed. In hypothesis testing, the first variable is called the independent variable (the number of colleagues) and the second is the dependent variable (the one affected by a change in the independent variable).

We’ve phrased this hypothesis in a specific enough way to allow us to prepare our experiments and investigate further. I call a hypothesis with this level of specificity a Level-1 type of hypothesis.

Refining hypotheses during product discovery with data analysis and other inputs

Now let’s consider a more complex hypothesis about the same variables:

“For every additional colleague a user invites to their free trial, the likelihood of conversion to a paid plan increases by 12%”.

Our hypothesis has been refined and is now even more specific than before. Previously we hypothesised that the number of invited colleagues affects conversions but had yet to specify whether it did so negatively or positively. In this refined hypothesis, however, we are being very explicit about the direction of the relationship between the two variables: we expect the conversion probability to increase as the number of invited colleagues increases. We are also being specific about the amount of change we expect to observe. As opposed to the basic Level-1 hypothesis we saw earlier, I call hypotheses with this additional level of specificity Level-2 type hypotheses.
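One way to read a Level-2 hypothesis like this is as a multiplicative 12% lift per invite. The sketch below is purely illustrative (the 10% baseline rate and the function are my own assumptions, not data from the article):

```python
# Illustrative projection of the Level-2 hypothesis: each additional
# invited colleague multiplies the conversion likelihood by 1.12.
# The 10% baseline is a made-up figure for the sake of the sketch.
BASELINE_CONVERSION = 0.10
LIFT_PER_INVITE = 1.12  # +12% per additional invited colleague

def predicted_conversion(invites: int) -> float:
    return BASELINE_CONVERSION * LIFT_PER_INVITE ** invites

for k in range(4):
    print(f"{k} invites -> {predicted_conversion(k):.1%} predicted conversion")
```

Writing the prediction down this concretely is what makes the hypothesis comparable against others on the backlog.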

However, how can we get as specific as this and how do we know whether these are justified inferences to be made in the first place? Why do we expect conversions to increase by 12% and not something else?

While we could make an educated guess based on our and our colleagues’ experience of our users and products, these numbers weren’t chosen at random. As discussed above, defining hypotheses must follow a scientific approach informed by existing knowledge, theory, or observations. In this case, suppose that our product team has made specific inferences after observing patterns in our historical dataset and statistically analysing the impact of three factors on conversion (one being the number of invited colleagues). Based on these observations, we could extrapolate a hypothesis that the likelihood of conversion to a paid plan increases by ~12% for every additional invited user. You can read more about how we came up with this number in Part I.

Reaching Level-2 specificity will help us assess whether the formed hypothesis is worth exploring in the first place, especially when compared to other priorities we might have. Depending on the business context, a 12% increase in conversions might be less or more important than a 9% reduction in churn rates proposed by other hypotheses on our backlog.

Does this mean we can’t propose a hypothesis until this sort of analysis is in place? Not at all. Even a basic Level-1 hypothesis, as long as it’s well-framed, can be more than enough to help us investigate. Refining a hypothesis further to Level-2 will inevitably need to be part of our product discovery flow. The more specific we can make a hypothesis, the easier it will be to compare against others, ultimately helping us prioritise amongst multiple, potentially equally interesting pursuits.

Before starting to define Level-1 hypotheses, there is another type of hypothesis you should always begin defining first. I could call this a Level-0 hypothesis, but I don’t need to: in hypothesis testing, we already have the concept of the “null” hypothesis, and it can help us better prioritise our focus. Let’s discuss why.

Why Product Managers should first formulate a Null Hypothesis

A Null Hypothesis simply proposes that there will be no change or effect between the factors we are interested in. Framing the previous example as a Null Hypothesis looks like this:

“The number of additional colleagues a user invites during their free trial has no effect on conversion.”

Suppose you accept this hypothesis as true (i.e., no effect exists). In that case, it means that if you test it (which you should do, since all hypotheses need to be testable), any observed differences or effects you find must be due only to chance or random variation, rather than a real relationship between the variables (invited users and conversions).

The null hypothesis typically acts as the default assumption until shown otherwise, and the goal of testing it is to determine whether there is enough evidence to indicate an effect beyond random chance or variation.

An alternate hypothesis contradicts the null hypothesis. The hypotheses we’ve discussed so far have all been alternate hypotheses because they stipulated that a significant enough relationship (i.e. not coincidental) between the two variables must exist.

Since the alternate and null are two sides of the same coin (one expecting no effect and the other some effect), you might be tempted to think that the null hypothesis is obsolete in the presence of an alternate. Why would you establish a null hypothesis when you could formulate an alternate one and try to validate that one straight away? Here’s why:

Let’s say you define a null hypothesis, test it, and find evidence supporting it: for example, that an increase in invited users doesn’t affect conversions. This information is actually very valuable, because it confirms that the current way things work in our app is not a concern (i.e., as far as this variable is concerned, there is no effect on conversions, whether negative or positive).

In other words, we can decide to allocate resources elsewhere if a proposed change isn’t statistically predicted to be better than the default situation (represented by the null hypothesis). So where we would have been planning, designing, and developing tactics to increase the number of invited users, we can instead shift our time and attention towards exploring other ways to impact conversions.

As an example, in Part I, we couldn’t validate that the marketing channel (“paid” or “organic”) via which users landed on our app had any predicted effect on whether users eventually purchased our product (converted). We didn’t need to investigate an alternate hypothesis further for that variable because the null hypothesis was accepted, and that was enough. So instead of spending more time investigating the effect of this variable on conversions, we focused on other variables: the time it takes for users to onboard and the number of invited users, a change in which seemed to be more statistically associated with the odds of conversion.

However, let’s say that we did observe an effect between independent variable X and conversions. How can we know whether an effect we observe during testing is not down to random chance or coincidence? This is a critical question, because if the effect is shown to be coincidental, then we need to accept the null hypothesis (that no statistical effect exists), whilst if it’s shown not to be coincidental, then the independent variable seems to have an effect on conversion and we can accept the alternate hypothesis (that independent variable X affects conversions). To solve this, we can use a hypothesis test; let’s discuss one particular type, the Z-Test.

Simple product development hypothesis testing using a Z-test

There are a few statistical hypothesis tests we could implement. A common one is a Z-Test. It allows us to take and test data samples and check if the observed differences deviate from what we would expect given the hypothesis. Let’s look at an example:

From past data, you know that the average conversion rate of your newsletter’s signup form has been 69%. Suppose that you’ve recently made an improvement to your newsletter signup form. Then, in the last couple of months, you’ve observed an increase in conversions to 71%. How can you know whether this increase was coincidental or whether your improvements actually affected conversions? All we need to figure this out is a null hypothesis to test with the Z-Test:

Null hypothesis:

“The specific change has no effect on the conversion rate.”

Z-Score = (X − μ) / σ

Let’s break the formula down. Denoted by the Greek letter μ is the expected population average under the hypothesis we have proposed; in our case, this is the historical average conversion rate of 69%. X, on the other hand, is the observed average: the 71% conversion rate we have observed recently.

Lastly, σ stands for the standard error, which measures the variability of our measurements. Calculating it involves knowing the standard deviation and the sample size (for example, the total number of form submissions, which could be 50). Scribbr has a great article on calculating the Standard Error, but for this article we will assume we have already calculated a standard error of 0.75. Now let’s plug everything into the formula:

Z-Score = (71 − 69) / 0.75

The z-score comes out to roughly 2.67, indicating that the observed difference in conversions is about 2.67 standard errors away from what we’d expect under the null hypothesis. OK, but what does this mean? How do we know whether 2.67 is a large or small score? How do we know if being 2.67 standard errors away from the expected average suggests that the change in conversions was coincidence or the result of our work? We can’t infer that directly from the z-score. However, after obtaining it, we can use software to find the corresponding p-value, which will tell us whether the shift from the expected mean was significant or just coincidental.

A p-value represents the probability of getting a z-score as extreme as or more extreme than 2.67, given that the null hypothesis is true (that the specific change has no effect on the conversion rate). Suppose the p-value we get from the software is below 0.05. A p-value below a certain threshold (often 0.05 or 0.01) means that our observation (the two-percentage-point increase in conversions) is unlikely to have arisen by chance alone. In other words, there is less than a 5% chance of observing a z-score this extreme or more extreme. Therefore, it’s more likely that the independent variable caused the change than that it happened coincidentally.

To calculate the p-value, we can rely on multiple tools, ranging from Excel to BI software, depending on the complexity of the use case. In our simplistic scenario, the z-score of 2.67 yields a p-value of roughly 0.004, much less than the threshold of 0.05 (you can use an online calculator to verify this). Now that we have run a Z-Test and obtained a p-value for the z-score, we can more confidently say that the change we made is statistically associated with the increased conversions of the last couple of months.
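The whole calculation fits in a few lines of Python (a standard-library sketch; the 0.75 standard error is assumed pre-calculated, as in the example above):

```python
import math

mu = 69.0   # expected average conversion rate under the null hypothesis (%)
x = 71.0    # recently observed average conversion rate (%)
se = 0.75   # standard error, assumed pre-calculated as in the example

z = (x - mu) / se                      # ≈ 2.67
p = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided p-value, P(Z >= z)

print(f"z = {z:.2f}, p = {p:.4f}")
print("reject the null" if p < 0.05 else "fail to reject the null")
```

Running this reproduces the result discussed above: the p-value lands well below 0.05.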

Other ways to test hypotheses

The Z-Test is only one of a few statistical methods we can use to validate hypotheses. We will need to choose different tests depending on the research question and factors like the size of the sample population. In the previous example, the type of research question and information we had available made using a Z-test more applicable.

A Z-test typically requires a research question comparing two means (for example, whether the average conversion after our changes significantly deviates from the current average conversion). As a rule of thumb, a Z-test also requires a large sample size of over 30 observations and a known standard deviation (remember that the standard deviation was part of calculating the standard error in the denominator, which we didn’t cover). Check this article to learn more about the various testing methods and when to use each one.

Incorporating hypothesis testing in product development flows

Setting up effective experiments is the cornerstone of our data-driven decision-making as Product Managers. Hypothesis testing provides a vital framework, empowering us to form clear assumptions and rigorously validate them through observation and measurement. In practice, this requires thinking about product development as a set of “experiments”.

However, instead of experiments, we often juggle multiple responsibilities, from organising product requirement documents (PRDs) to prioritising backlogs. In an upcoming article, we will explore how to frame experiments in practice, seamlessly integrating them into our product development flow.

In the meantime, note that beyond a hypothesis, experiments require a test. The following article will discuss a practical implementation of an A/B test, from setup to observation, to gain hands-on experience conducting experiments.

By embracing experimentation in practice, we can enhance our ability to make informed choices and optimise our products through continuous learning and improvement.

Written by Stephanos Theodotou

I'm a web developer and product manager merging code with prose and writing about fascinating things I learn.


Product Talk

Make better product decisions.

The 5 Components of a Good Hypothesis

November 12, 2014 by Teresa Torres


Update: I’ve since revised this hypothesis format. You can find the most current version in this article:

  • How to Improve Your Experiment Design (And Build Trust in Your Product Experiments)

“My hypothesis is …”

These words are becoming more common every day. Product teams are starting to talk like scientists. Are you?

The internet industry is going through a mindset shift. Instead of assuming we have all the right answers, we are starting to acknowledge that building products is hard. We are accepting the reality that our ideas are going to fail more often than they are going to succeed.

Rather than waiting to find out which ideas are which after engineers build them, smart product teams are starting to integrate experimentation into their product discovery process. They are asking themselves, how can we test this idea before we invest in it?

This process starts with formulating a good hypothesis.

These Are Not the Hypotheses You Are Looking For

When we are new to hypothesis testing, we tend to start with hypotheses like these:

  • Fixing the hard-to-use comment form will increase user engagement.
  • A redesign will improve site usability.
  • Reducing prices will make customers happy.

There’s only one problem. These aren’t testable hypotheses. They aren’t specific enough.

A good hypothesis can be clearly refuted or supported by an experiment.

To make sure that your hypotheses can be supported or refuted by an experiment, you will want to include each of these elements:

  • the change that you are testing
  • what impact you expect the change to have
  • who you expect it to impact
  • by how much
  • after how long

The Change:  This is the change that you are introducing to your product. You are testing a new design, you are adding new copy to a landing page, or you are rolling out a new feature.

Be sure to get specific. Fixing a hard-to-use comment form is not specific enough. How will you fix it? Some solutions might work. Others might not. Each is a hypothesis in its own right.

Design changes can be particularly challenging. Your hypothesis should cover a specific design not the idea of a redesign.

In other words, use this:

  • This specific design will increase conversions.

Not this:

  • Redesigning the landing page will increase conversions.

The former can be supported or refuted by an experiment. The latter can encompass dozens of design solutions, where some might work and others might not.

The Expected Impact:  The expected impact should clearly define what you expect to see as a result of making the change.

How will you know if your change is successful? Will it reduce response times, increase conversions, or grow your audience?

The expected impact needs to be specific and measurable.

You might hypothesize that your new design will increase usability. This isn’t specific enough.

You need to define how you will measure an increase in usability. Will it reduce the time to complete some action? Will it increase customer satisfaction? Will it reduce bounce rates?

There are dozens of ways that you might measure an increase in usability. In order for this to be a testable hypothesis, you need to define which metric you expect to be affected by this change.

Who Will Be Impacted: The third component of a good hypothesis is who will be impacted by this change. Too often, we assume everyone. But this is rarely the case.

I was recently working with a product manager who was testing a sign up form popup upon exiting a page.

I’m sure you’ve seen these before. You are reading a blog post and just as you are about to navigate away, you get a popup that asks, “Would you like to subscribe to our newsletter?”

She A/B tested this change by showing it to half of her population, leaving the rest as her control group. But there was a problem.

Some of her visitors were already subscribers. They don’t need to subscribe again. For this population, the answer to this popup will always be no.

Rather than testing with her whole population, she should be testing with just the people who are not currently subscribers.

This isn’t easy to do. And it might not sound like it’s worth the effort, but it’s the only way to get good results.

Suppose she has 100 visitors. Fifty see the popup and fifty don’t. If 45 of the people who see the popup are already subscribers and as a result they all say no, and of the five remaining visitors only 1 says yes, it’s going to look like her conversion rate is 1 out of 50, or 2%. However, if she limits her test to just the people who haven’t subscribed, her conversion rate is 1 out of 5, or 20%. This is a huge difference.
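The arithmetic behind that gap is worth checking explicitly; a quick sketch with the numbers from the example:

```python
# 50 visitors see the popup; 45 are already subscribers and always say no.
shown = 50
already_subscribed = 45
conversions = 1

naive_rate = conversions / shown                         # counts everyone
clean_rate = conversions / (shown - already_subscribed)  # prospects only

print(f"naive: {naive_rate:.0%}, clean: {clean_rate:.0%}")
```

A tenfold difference in measured conversion rate, purely from who was included in the test population.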

Who you test with is often the most important factor for getting clean results. – Tweet This

By how much: The fourth component builds on the expected impact. You need to define how much of an impact you expect your change to have.

For example, if you are hypothesizing that your change will increase conversion rates, then you need to estimate by how much, as in the change will increase conversion rate from x% to y%, where x is your current conversion rate and y is your expected conversion rate after making the change.

This can be hard to do and is often a guess. However, you still want to do it. It serves two purposes.

First, it helps you draw a line in the sand. This number should determine in black and white terms whether or not your hypothesis passes or fails and should dictate how you act on the results.

Suppose you hypothesize that the change will improve conversion rates by 10%. If your change then results in only a 9% increase, your hypothesis fails.

This might seem extreme, but it’s a critical step in making sure that you don’t succumb to your own biases down the road.

It’s very easy after the fact to determine that 9% is good enough. Or that 2% is good enough. Or that -2% is okay, because you like the change. Without a line in the sand, you are setting yourself up to ignore your data.

The second reason why you need to define by how much is so that you can calculate for how long to run your test.

After how long: Too many teams run their tests for an arbitrary amount of time, or stop the test as soon as one version is winning.

This is a problem. It opens you up to false positives and releasing changes that don’t actually have an impact.

If you hypothesize the expected impact ahead of time, then you can use a duration calculator to determine how long to run the test.

Finally, you want to add the duration of the test to your hypothesis. This will help to ensure that everyone knows that your results aren’t valid until the duration has passed.

If your traffic is sporadic, “how long” doesn’t have to be defined in time. It can also be defined in page views or sign ups or after a specific number of any event.
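As a rough illustration of what such a duration calculator does under the hood, here is one common sample-size approximation for a two-proportion test. The formula and the default significance and power levels are standard statistical conventions, not something prescribed by this article, so treat this as a sketch rather than a replacement for a proper calculator.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant to detect a change
    in conversion rate from p1 to p2 with a two-proportion test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_power = z.inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Expecting conversion to rise from 10% to 11% (a 10% relative lift):
n = sample_size_per_variant(0.10, 0.11)
# Dividing n by your daily traffic per variant gives the duration in days.
```

Note how small expected effects demand very large samples, which is exactly why "by how much" has to be defined before you can say "after how long."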

Putting It All Together

Use the following examples as templates for your own hypotheses:

  • Design x [the change] will increase conversions [the impact] for search campaign traffic [the who] by 10% [the how much] after 7 days [the how long].
  • Reducing the sign up steps from 3 to 1 will increase sign ups by 25% for new visitors after 1,000 visits to the sign up page.
  • This subject line will increase open rates for daily digest subscribers by 15% after 3 days.

After you write a hypothesis, break it down into its five components to make sure that you haven’t forgotten anything.

  • Change: this subject line
  • Impact: will increase open rates
  • Who: for daily digest subscribers
  • By how much: by 15%
  • After how long: After 3 days

And then ask yourself:

  • Is your expected impact specific and measurable?
  • Can you clearly explain why the change will drive the expected impact?
  • Are you testing with the right population?
  • Did you estimate your how much based on a baseline and / or comparable changes? (more on this in a future post)
  • Did you calculate the duration using a duration calculator?
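The five-component breakdown above can also be captured in a small structure, so that a missing component is obvious at a glance. This is a hypothetical helper for illustration, not part of any framework mentioned here.

```python
# Hypothetical container for the five components of a hypothesis,
# assembled back into a single testable statement.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str     # what you are changing
    impact: str     # the measurable effect you expect
    who: str        # the population being tested
    how_much: str   # the size of the expected effect
    how_long: str   # the test duration

    def statement(self) -> str:
        return (f"{self.change} {self.impact} {self.who} "
                f"{self.how_much} {self.how_long}.")

h = Hypothesis("This subject line", "will increase open rates",
               "for daily digest subscribers", "by 15%", "after 3 days")
print(h.statement())
# This subject line will increase open rates for daily digest subscribers by 15% after 3 days.
```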

It’s easy to give lip service to experimentation and hypothesis testing. But if you want to get the most out of your efforts, make sure you are starting with a good hypothesis.



Comment from Saaransh Mehta (May 21, 2017):

Interesting article. I am thinking about forming a hypothesis around my product: whether certain customers will find a proposed value useful. Can you kindly let me know if I’m on the right track.

“Certain customer segment (AAA) will find value in feature (XXX), to tackle their pain point ”

  • Change: using a feature (XXX) / product
  • Impact: will reduce monetary costs / help solve a problem
  • Who: for certain customer segment (AAA)
  • By how much: by 5%
  • After how long: 10 days

Comment from GG (April 4, 2020):

Hi! Could you throw a little light on this: “Suppose you hypothesize that the change will improve conversion rates by 10%, then if your change results in a 9% increase, your hypothesis fails.”

I understood the rationale behind having a number x (10% in this case) associated with “by how much”, but could you explain with an example of how to ballpark a figure like this?




Blog series: Hypothesis-driven product development

Product development through hypotheses: formulating hypotheses

February 16, 2018

Product development is confronted with the constant challenge of supplying the customer with a product that exactly meets their needs. In our new blog series, etventure’s product managers provide an insight into their work and approach. The focus is on hypothesis-driven product development. In the first part of the series, we show why and how to define a verifiable hypothesis as the starting point for an experiment.

For the development of new products, features and services as well as the development of start-ups, we at etventure rely on a hypothesis-driven method that is strongly oriented towards the “Lean Startup” 1  philosophy. Having already revealed our remedy for successful product development last week, we now want to take a closer look at the first step of an experiment – the formulation of the hypothesis.

“Done is better than perfect.” – Sheryl Sandberg

Where do hypotheses come from?

Scientists observe nature and ask many questions that lead to hypotheses. Product teams can also be inspired by observations, personal opinions, previous experiences or the discovery of patterns and outliers in data. These observations are often associated with a number of problems and open questions.

  • Who is our target group?
  • Why does X do this and not that?
  • How can person X be motivated to take action Y?
  • How can we encourage potential users to sign up for our service?

First of all, it is important that the team meets to brainstorm and gets creative. Then the ideas that the team believes to be “true” are selected; these are what we refer to as hypotheses.

What makes a good hypothesis?

Unlike science, we cannot afford to spend too much time on a hypothesis. Nevertheless, one of the key qualifications of every product developer is to recognize a well-formulated hypothesis. The following checklist serves as a basis for this:

A good hypothesis…

  • is something we believe to be true, but we don’t know for sure yet
  • is a prediction we expect to come true
  • can be easily tested
  • may be true or false
  • includes the target group
  • is clear and measurable

Assumption  ≠ Fact

An assumption may be true, but it may also be false. A fact is always true and can be proven by evidence. Therefore, an assumption always offers an opportunity to learn something. If we already have strong evidence of what we believe in, we don’t need to test it again – there is nothing new to learn. However, we never accept anything as a fact until it has been validated. Awareness of this difference is essential for our product decisions. That’s why we keep asking ourselves: Do we have proof of our assumptions, are they facts, or do they remain assumptions? In other words: Is it objectively measurable?

Human behaviour is often “predictably irrational”. 2 This is because our brain uses shortcuts when processing information to save time and energy. 3 This is also true in product development: We often tend to ignore evidence that our assumption might be wrong. Instead, we feel confirmed in existing beliefs. The good news is that these distortions are consistent and well known, so we can design systems to correct them. In order to avoid misinterpretations of the test results, it helps, for example, to make the following prediction: What would happen if my assumption was confirmed?

In order for hypotheses to be validated, it must be possible to test them in at least one, but preferably in different scenarios. Since both temporal and monetary resources are usually very limited, hypotheses must always be testable as easily as possible and with justifiable effort.

Testability and falsification

Learning means finding answers to questions. In product development, we want to know whether our assumption is true or not. When testing our ideas, we have to assume that either could happen. What is important is that both outcomes are valid; both mean progress. This concept is derived from science 4 and helps to avoid an always-true hypothesis such as “Tomorrow it will either rain or not”.

Target group

Product development should mainly focus on the customer’s needs. Therefore, the target group must be included in the formulation of the hypothesis. This prevents distortion and makes the hypotheses more specific. During development, hypotheses can be refined or the target audience can be adapted.

Clarity and measurability

And last but not least, a hypothesis must always be clear and measurable. Complex hypotheses are not uncommon in science, but in practice it must be immediately clear what is at stake. Product developers should be able to explain their hypotheses within 30 seconds to someone who has never heard of the subject.

Why formulate hypotheses?

Product teams benefit in many ways if they take the time to formulate a hypothesis.

  • Impartial decisions: Hypotheses reduce the influence of prejudices on our decision-making.
  • Team orientation: Similar to a common vision, a hypothesis strengthens team thinking and prevents conflicts in the experimental phase.
  • Focus: Testing without a hypothesis is like sailing without a destination. A hypothesis helps to focus and control the experimental design.

How can good hypotheses be formulated?

Various blogs and articles provide a series of templates that help to formulate hypotheses quickly and easily. Most of them differ only slightly from each other. Product teams can freely decide which format they like – as long as the final hypothesis meets the above criteria. We have put together a selection of the most important templates:

  • We believe that [this ability] will lead to [this result]. We will know that we have succeeded when [we see a measurable sign].
  • I believe that [target group] will [execute this repeatable action/use this solution], which for [this reason] will lead to [an expected measurable result].
  • If [cause], then [effect], because [reason].
  • If [I do], then [thing] will happen.
  • We believe that with [activity] for [these people] [this result / this effect] will happen.

The following hypotheses have actually been used by us in the past weeks and months. During the test phase some of them could be validated, others were rejected.

  • After 1,000 visits to the registration page, the reduction of registration steps from 3 to 1 increases the registration rate for new visitors by 25%.
  • This subject line increases the opening rates for newsletter subscribers by 15% after 3 days.
  • If we offer online training to our customers, the number of training sessions will increase by 35% within the next 2 weeks.
  • We believe that the sale of a machine-optimized packaging material to our customers will lead to a higher demand for our packaging material. We will know that we have been successful if we have sold 50% more packaging material within the next 4 weeks.

How to turn hypotheses into experiments?

Formulating good hypotheses is essential for successful product development. And yet it is only the first step in a multi-step development and testing process. In our next article you will learn how hypotheses become experiments.

Further links:

1  Eric Ries: The Lean Startup

2  Predictably Irrational: The Hidden Forces that Shape Our Decisions

3  Cognitive Bias Cheat Sheet

4  Karl Popper


Author: Kristopher Berks, Product Manager at etventure


Writing a Strong Hypothesis Statement


All good theses begin with a good thesis question. However, all great theses begin with a great hypothesis statement. One of the most important steps for writing a thesis is to create a strong hypothesis statement.

What is a hypothesis statement?

A hypothesis statement must be testable. If it cannot be tested, then there is no research to be done.

Simply put, a hypothesis statement posits the relationship between two or more variables. It is a prediction of what you think will happen in a research study. A hypothesis statement must be testable. If it cannot be tested, then there is no research to be done. If your thesis question is whether wildfires have effects on the weather, “wildfires create tornadoes” would be your hypothesis. However, a hypothesis needs to have several key elements in order to meet the criteria for a good hypothesis.

In this article, we will learn about what distinguishes a weak hypothesis from a strong one. We will also learn how to phrase your thesis question and frame your variables so that you are able to write a strong hypothesis statement and great thesis.

What is a hypothesis?

A hypothesis statement posits, or considers, a relationship between two variables.

As we mentioned above, a hypothesis statement posits or considers a relationship between two variables. In our hypothesis statement example above, the two variables are wildfires and tornadoes, and our assumed relationship between the two is a causal one (wildfires cause tornadoes). It is clear from our example above what we will be investigating: the relationship between wildfires and tornadoes.

A strong hypothesis statement should be:

  • A prediction of the relationship between two or more variables

A hypothesis is not just a blind guess. It should build upon existing theories and knowledge. Tornadoes are often observed near wildfires once the fires reach a certain size, even in areas where tornadoes are not otherwise a normal weather event. This existing knowledge has informed the formulation of our hypothesis.

Depending on the thesis question, your research paper might have multiple hypothesis statements. What is important is that your hypothesis statement or statements are testable through data analysis, observation, experiments, or other methodologies.

Formulating your hypothesis

One of the best ways to form a hypothesis is to think about “if...then” statements.

Now that we know what a hypothesis statement is, let’s walk through how to formulate a strong one. First, you will need a thesis question. Your thesis question should be narrow in scope, answerable, and focused. Once you have your thesis question, it is time to start thinking about your hypothesis statement. You will need to clearly identify the variables involved before you can begin thinking about their relationship.

One of the best ways to form a hypothesis is to think about “if...then” statements . This can also help you easily identify the variables you are working with and refine your hypothesis statement. Let’s take a few examples.

If teenagers are given comprehensive sex education, there will be fewer teen pregnancies .

In this example, the independent variable is whether or not teenagers receive comprehensive sex education (the cause), and the dependent variable is the number of teen pregnancies (the effect).

If a cat is fed a vegan diet, it will die .

Here, our independent variable is the diet of the cat (the cause), and the dependent variable is the cat’s health (the thing impacted by the cause).

If children drink 8oz of milk per day, they will grow taller than children who do not drink any milk .

What are the variables in this hypothesis? If you identified drinking milk as the independent variable and growth as the dependent variable, you are correct. This is because we are guessing that drinking milk causes increased growth in the height of children.

Refining your hypothesis

Do not be afraid to refine your hypothesis throughout the process of formulation.

Do not be afraid to refine your hypothesis throughout the process of formulation. A strong hypothesis statement is clear, testable, and involves a prediction. While “testable” means verifiable or falsifiable, it also means that you are able to perform the necessary experiments without violating any ethical standards. Perhaps once you think about the ethics of possibly harming some cats by testing a vegan diet on them you might abandon the idea of that experiment altogether. However, if you think it is really important to research the relationship between a cat’s diet and a cat’s health, perhaps you could refine your hypothesis to something like this:

If 50% of a cat’s meals are vegan, the cat will not be able to meet its nutritional needs .

Another feature of a strong hypothesis statement is that it can easily be tested with the resources that you have readily available. While it might not be feasible to measure the growth of a cohort of children throughout their whole lives, you may be able to do so for a year. Then, you can adjust your hypothesis to something like this:

If children aged 8 drink 8oz of milk per day for one year, they will grow taller during that year than children who do not drink any milk.

As you work to narrow down and refine your hypothesis to reflect a realistic potential research scope, don’t be afraid to talk to your supervisor about any concerns or questions you might have about what is truly possible to research. 

What makes a hypothesis weak?

We noted above that a strong hypothesis statement is clear, is a prediction of a relationship between two or more variables, and is testable. We also noted that statements that are too general or too specific are not strong hypotheses. We have looked at some examples of hypotheses that meet the criteria for a strong hypothesis, but before we go any further, let’s look at some weak or bad hypothesis statement examples so that you can really see the difference.

Bad hypothesis 1: Diabetes is caused by witchcraft .

While this is fun to think about, it cannot be tested or proven one way or the other with clear evidence, data analysis, or experiments. This bad hypothesis fails to meet the testability requirement.

Bad hypothesis 2: If I change the amount of food I eat, my energy levels will change .

This is quite vague. Am I increasing or decreasing my food intake? What do I expect exactly will happen to my energy levels and why? How am I defining energy level? This bad hypothesis statement fails the clarity requirement.

Bad hypothesis 3: Japanese food is disgusting because Japanese people don’t like tourists .

This hypothesis is unclear about the posited relationship between variables. Are we positing a relationship between the deliciousness of Japanese food and the desire of tourists to visit, or between the deliciousness of Japanese food and how much Japanese people like tourists? There is also the problematic subjectivity of the assessment that Japanese food is “disgusting.” The problems are numerous.

The null hypothesis and the alternative hypothesis

The null hypothesis, quite simply, posits that there is no relationship between the variables.

What is the null hypothesis?

The hypothesis posits a relationship between two or more variables. The null hypothesis, quite simply, posits that there is no relationship between the variables. It is often indicated as H₀, which is read as “H-null” or “H-naught.” The alternative hypothesis is the opposite of the null hypothesis, as it posits that there is some relationship between the variables. The alternative hypothesis is written as Hₐ or H₁.

Let’s take our previous hypothesis statement examples discussed at the start and look at their corresponding null hypothesis.

Hₐ: If teenagers are given comprehensive sex education, there will be fewer teen pregnancies.
H₀: If teenagers are given comprehensive sex education, there will be no change in the number of teen pregnancies.

The null hypothesis assumes that comprehensive sex education will not affect how many teenagers get pregnant. It should be carefully noted that the null hypothesis is not always the opposite of the alternative hypothesis. For example:

If teenagers are given comprehensive sex education, there will be more teen pregnancies .

These are opposing statements that assume an opposite relationship between the variables: comprehensive sex education increases or decreases the number of teen pregnancies. In fact, these are both alternative hypotheses. This is because they both still assume that there is a relationship between the variables . In other words, both hypothesis statements assume that there is some kind of relationship between sex education and teen pregnancy rates. The alternative hypothesis is also the researcher’s actual predicted outcome, which is why calling it “alternative” can be confusing! However, you can think of it this way: our default assumption is the null hypothesis, and so any possible relationship is an alternative to the default.
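To see how the null hypothesis is actually used, here is a sketch of a standard two-proportion z-test that decides whether to reject H₀ ("the two conversion rates are equal") in favor of the alternative. The sample counts are invented for illustration.

```python
# Sketch: a two-proportion z-test for H0 "the two rates are equal".
# Sample numbers are invented for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic under the null hypothesis of equal rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(100, 1000, 150, 1000)         # 10% vs 15% conversion
p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
reject_h0 = p_value < 0.05                         # evidence of a relationship
```

A small p-value means the observed difference would be very unlikely if H₀ were true, so we reject the null hypothesis in favor of the alternative; a large p-value means we keep the default assumption of no relationship.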

Step-by-step sample hypothesis statements

Now that we’ve covered what makes a hypothesis statement strong, how to go about formulating a hypothesis statement, refining your hypothesis statement, and the null hypothesis, let’s put it all together with some examples. The table below shows a breakdown of how we can take a thesis question, identify the variables, create a null hypothesis, and finally create a strong alternative hypothesis.

  • Thesis question: Does the quality of sex education in public schools impact teen pregnancy rates?
    Alternative hypothesis: Comprehensive sex education in public schools will lower teen pregnancy rates.
    Null hypothesis: The quality of sex education in public schools has no effect on teen pregnancy rates.
  • Thesis question: Do wildfires that burn for more than 2 weeks have an impact on local weather systems?
    Alternative hypothesis: Wildfires that burn for more than two weeks cause tornadoes because the heat they give off impacts wind patterns.
    Null hypothesis: Wildfires have no impact on local weather systems.
  • Thesis question: Will a cat remain in good health on a vegan diet?
    Alternative hypothesis: A cat’s health will suffer if it is only fed a vegan diet because cats are obligate carnivores.
    Null hypothesis: A cat’s diet has no impact on its health.
  • Thesis question: Does walking for 30 minutes a day impact human health?
    Alternative hypothesis: Walking for 30 minutes a day will improve cardiovascular health and brain function in humans.
    Null hypothesis: Walking for 30 minutes a day will neither improve nor harm human health.

Once you have formulated a solid thesis question and written a strong hypothesis statement, you are ready to begin your thesis in earnest. Check out our site for more tips on writing a great thesis and information on thesis proofreading and editing services.


Review Checklist

  • Start with a clear thesis question
  • Think about “if-then” statements to identify your variables and the relationship between them
  • Create a null hypothesis
  • Formulate an alternative hypothesis using the variables you have identified
  • Make sure your hypothesis clearly posits a relationship between variables
  • Make sure your hypothesis is testable considering your available time and resources

What makes a hypothesis strong?

A hypothesis is strong when it is testable, clear, and identifies a potential relationship between two or more variables.

What makes a hypothesis weak?

A hypothesis is weak when it is too specific or too general, or does not identify a clear relationship between two or more variables.

What is the null hypothesis?

The null hypothesis posits that the variables you have identified have no relationship.

How to Write a Hypothesis in 6 Steps, With Examples

Matt Ellis

A hypothesis is a statement that explains the predictions and reasoning of your research—an “educated guess” about how your scientific experiments will end. As a fundamental part of the scientific method, a good hypothesis is carefully written, but even the simplest ones can be difficult to put into words. 

Want to know how to write a hypothesis for your academic paper ? Below we explain the different types of hypotheses, what a good hypothesis requires, the steps to write your own, and plenty of examples.


What is a hypothesis? 

One of our 10 essential words for university success, a hypothesis is one of the earliest stages of the scientific method. It’s essentially an educated guess—based on observations—of what the results of your experiment or research will be.

Some hypothesis examples include:

  • If I water plants daily they will grow faster.
  • Adults can more accurately guess the temperature than children can. 
  • Butterflies prefer white flowers to orange ones.

If you’ve noticed that watering your plants every day makes them grow faster, your hypothesis might be “plants grow better with regular watering.” From there, you can begin experiments to test your hypothesis; in this example, you might set aside two plants, water one but not the other, and then record the results to see the differences. 

The language of hypotheses always discusses variables , or the elements that you’re testing. Variables can be objects, events, concepts, etc.—whatever is observable. 

There are two types of variables: independent and dependent. Independent variables are the ones that you change for your experiment, whereas dependent variables are the ones that you can only observe. In the above example, our independent variable is how often we water the plants and the dependent variable is how well they grow. 

Hypotheses determine the direction and organization of your subsequent research methods, and that makes them a big part of writing a research paper . Ultimately the reader wants to know whether your hypothesis was proven true or false, so it must be written clearly in the introduction and/or abstract of your paper. 

7 examples of hypotheses

Depending on the nature of your research and what you expect to find, your hypothesis will fall into one or more of the seven main categories. Keep in mind that these categories are not exclusive, so the same hypothesis might qualify as several different types. 

1 Simple hypothesis

A simple hypothesis suggests only the relationship between two variables: one independent and one dependent. 

  • If you stay up late, then you feel tired the next day. 
  • Turning off your phone makes it charge faster. 

2 Complex hypothesis

A complex hypothesis suggests the relationship between more than two variables, for example, two independents and one dependent, or vice versa. 

  • People who both (1) eat a lot of fatty foods and (2) have a family history of health problems are more likely to develop heart diseases. 
  • Older people who live in rural areas are happier than younger people who live in rural areas. 

3 Null hypothesis

A null hypothesis, abbreviated as H₀, suggests that there is no relationship between variables.

  • There is no difference in plant growth when using either bottled water or tap water. 
  • Professional psychics do not win the lottery more than other people. 

4 Alternative hypothesis

An alternative hypothesis, abbreviated as H₁ or Hₐ, is used in conjunction with a null hypothesis. It states the opposite of the null hypothesis, so that one and only one must be true.

  • Plants grow better with bottled water than tap water. 
  • Professional psychics win the lottery more than other people. 

5 Logical hypothesis

A logical hypothesis suggests a relationship between variables without actual evidence. Claims are instead based on reasoning or deduction, but lack actual data.  

  • An alien raised on Venus would have trouble breathing in Earth’s atmosphere. 
  • Dinosaurs with sharp, pointed teeth were probably carnivores. 

6 Empirical hypothesis

An empirical hypothesis, also known as a “working hypothesis,” is one that is currently being tested. Unlike logical hypotheses, empirical hypotheses rely on concrete data. 

  • Customers at restaurants will tip the same even if the wait staff’s base salary is raised. 
  • Washing your hands every hour can reduce the frequency of illness. 

7 Statistical hypothesis

A statistical hypothesis is one in which you test only a sample of a population and then apply statistical evidence to the results to draw a conclusion about the entire population. Instead of testing everything, you test only a portion and generalize the rest based on preexisting data.

  • In humans, the birth-gender ratio of males to females is 1.05 to 1.00.  
  • Approximately 2% of the world population has natural red hair. 

What makes a good hypothesis?

No matter what you’re testing, a good hypothesis is written according to the same guidelines. In particular, keep these five characteristics in mind: 

Cause and effect

Hypotheses always include a cause-and-effect relationship where one variable causes another to change (or not change if you’re using a null hypothesis). This can best be reflected as an if-then statement: If one variable occurs, then another variable changes. 

Testable prediction

Most hypotheses are designed to be tested (with the exception of logical hypotheses). Before committing to a hypothesis, make sure you’re actually able to conduct experiments on it. Choose a testable hypothesis with an independent variable that you have absolute control over. 

Independent and dependent variables

Define your variables in your hypothesis so your readers understand the big picture. You don’t have to specifically say which ones are independent and dependent variables, but you definitely want to mention them all. 

Candid language

Writing can easily get convoluted, so make sure your hypothesis remains as simple and clear as possible. Readers use your hypothesis as a contextual pillar to unify your entire paper, so there should be no confusion or ambiguity. If you’re unsure about your phrasing, try reading your hypothesis to a friend to see if they understand. 

Adherence to ethics

It’s not always about what you can test, but what you should test. Avoid hypotheses that require questionable or taboo experiments to keep ethics (and therefore, credibility) intact.

How to write a hypothesis in 6 steps

1 Ask a question

Curiosity has inspired some of history’s greatest scientific achievements, so a good place to start is to ask yourself questions about the world around you. Why are things the way they are? What causes the factors you see around you? If you can, choose a research topic that you’re interested in so your curiosity comes naturally. 

2 Conduct preliminary research

Next, collect some background information on your topic. How much you need depends on what you’re attempting: it could require reading several books, or it could be as simple as a quick web search. Your sources don’t have to prove or disprove your hypothesis for you; at this stage, collect only the background you need to test it yourself.

3 Define your variables

Once you have an idea of what your hypothesis will be, select which variables are independent and which are dependent. Remember that independent variables can only be factors that you have absolute control over, so consider the limits of your experiment before finalizing your hypothesis. 

4 Phrase it as an if-then statement

When writing a hypothesis, it helps to phrase it using an if-then format, such as, “ If I water a plant every day, then it will grow better.” This format can get tricky when dealing with multiple variables, but in general, it’s a reliable method for expressing the cause-and-effect relationship you’re testing. 
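The format can also be made concrete in code. The sketch below is purely illustrative (the `Hypothesis` class and its field names are invented for this example): it stores the two variables separately and phrases them as an if-then statement on demand.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    independent: str  # the condition you control
    dependent: str    # the outcome you measure

    def as_if_then(self) -> str:
        """Phrase the hypothesis as an if-then statement."""
        return f"If {self.independent}, then {self.dependent}."

h = Hypothesis(independent="I water a plant every day",
               dependent="it will grow better")
print(h.as_if_then())
# If I water a plant every day, then it will grow better.
```

Keeping the variables separate like this makes it harder to accidentally write a hypothesis that is missing one of them.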

5 Collect data to support your hypothesis

A hypothesis is merely a means to an end. The priority of any scientific research is the conclusion. Once you have your hypothesis laid out and your variables chosen, you can then begin your experiments. Ideally, you’ll collect data to support your hypothesis, but don’t worry if your research ends up proving it wrong—that’s all part of the scientific method. 
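For a product-flavored illustration of this step (all numbers here are invented), a two-proportion z-test can tell you whether experiment data, such as signups before and after a page-speed improvement, actually supports the hypothesis:

```python
from math import sqrt

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z-statistic for H0: both groups share the same conversion rate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # shared rate under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical experiment: faster-loading variant vs. control.
# Variant: 360 signups out of 10,000 visits; control: 300 out of 10,000.
z = two_proportion_z(x1=360, n1=10_000, x2=300, n2=10_000)
print(f"z = {z:.2f}")  # z = 2.38
print("significant at 5%" if abs(z) > 1.96 else "not significant")
```

If the statistic clears the 1.96 threshold, the observed lift is unlikely to be chance alone; if it does not, the hypothesis is disproven for now, which is still a useful result.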

6 Write with confidence

Last, you’ll want to record your findings in a research paper for others to see. This requires a bit of writing know-how, quite a different skill set than conducting experiments. 

That’s where Grammarly can be a major help; our writing suggestions point out not only grammar and spelling mistakes, but also new word choices and better phrasing. While you write, Grammarly automatically recommends optimal language and highlights areas where readers might get confused, ensuring that your hypothesis—and your final paper—are clear and polished.



  • Federal Register Notices
  • Public Comments
  • Policy Statements
  • International
  • Office of Technology Blog
  • Military Consumer
  • Consumer.gov
  • Bulk Publications
  • Data and Visualizations
  • Stay Connected
  • Commissioners and Staff
  • Bureaus and Offices
  • Budget and Strategy
  • Office of Inspector General
  • Careers at the FTC





Related articles

  1. How to Generate and Validate Product Hypotheses

    Across organizations, product hypothesis statements might vary in their subject, tone, and precise wording. But some elements never change. As we mentioned earlier, a hypothesis statement must always have two or more variables and a connecting factor. 1. Identify variables. Since these components form the bulk of a hypothesis statement, let's ...

  2. Product Hypotheses: How to Generate and Validate Them

    Product Hypothesis Examples. To demonstrate how to formulate your assumptions clearly, here are several more apart from the example of a hypothesis statement given above: Adding a wishlist feature to the cart with the possibility to send a gift hint to friends via email will increase the likelihood of making a sale and bring in additional sign-ups.

  3. Hypothesis-driven product management

    A product hypothesis is an assumption made within a limited understanding of a specific product-related situation. It further needs validation to determine if the assumption would actually deliver the predicted results or add little to no value to the product. ... Building a good hypothesis statement based on your users' pain points, testing ...

  4. Forming Experimental Product Hypotheses

    Product hypothesis statements can come in many different forms, so pick what's most comfortable for the team and business to understand. However, they should always include the following key details:

  5. Product Hypothesis Validation: Best Practices & Examples

    A product hypothesis is a statement expressing an assumption, used as a tool to test and validate ideas about your customers' wants, needs, and/or values, and how your product can deliver all of them. In general, hypotheses are used by product managers to make or discard market decisions and prioritize activities based on their impact on the ...

  6. How to create product design hypotheses: a step-by-step guide

    Imagination and intuition are essential and have an important role to play. Use them to diverge and create as many solution ideas as possible. These are your hypotheses. The actual step-by-step guide starts here. Step 1: Imagine the change you want, and write it down.

  7. Product Hypothesis

    Types of product hypothesis: 1. Counter-hypothesis. A counter-hypothesis is an alternative proposition that challenges the initial hypothesis. It's used to test the robustness of the original hypothesis and make sure that the product development process considers all possible scenarios.

  8. How to Generate and Test Hypotheses in Product Development

    A product hypothesis is a statement that proposes a connection between two or more variables and is crucially testable. When creating a product, we generate hypotheses to validate assumptions about customer behavior, market needs, or the potential impact of product changes.

  9. From Theory to Practice: The Role of Hypotheses in Product Development

    Data-Based Hypothesis: "Increasing the number of product recommendations based on user preferences will increase the average order value by 15%." This hypothesis is grounded in real shopping preferences, making it more likely to succeed. To successfully work with hypotheses, carefully analyze data.

  10. How to write an effective hypothesis

    How to write an effective hypothesis. Hypothesis validation is the bread and butter of product discovery. Understanding what should be prioritized and why is the most important task of a product manager. It doesn't matter how well you validate your findings if you're trying to answer the wrong question. A question is as good as the answer ...

  11. How to Pick a Product Hypothesis

    A good product hypothesis is falsifiable, measurable, and actionable. Falsifiable means that the hypothesis can be proved false by a simple contradictory observation. Using a Yelp ...

  12. How to write a better hypothesis as a Product Manager?

    A hypothesis is a statement made with limited evidence; to validate it, we need to test it to make sure we build the right product. If you can't test it, then your ...

  13. A Guide to Product Hypothesis Testing

    A/B testing. One of the most common ways to validate a hypothesis is randomized A/B testing, in which a change or feature is released at random to one half of users (A) and withheld from the other half (B). Returning to the hypothesis of bigger product images improving conversion on Amazon, one half of users will be shown the ...

  14. 4 types of product assumptions and how to test them

    Product assumptions are preconceived beliefs or hypotheses that product managers establish during the product development cycle, providing an initial framework for decision-making. These assumptions, which can involve features, user behaviors, market trends, or technical feasibility, are integral to the iterative process of product creation and ...

  15. How do you define and measure your product hypothesis?

    In product management, a hypothesis is a proposed explanation or assumption about a product, feature, or aspect of the product's development or performance. It serves as a statement that can be tested, validated, or invalidated through experimentation and data analysis. Hypotheses play a crucial role in guiding product managers' decision ...

  16. My product management toolkit (5): assumptions and hypotheses

    The key point of Lean UX is the definition and validation of assumptions and hypotheses. Ultimately, this approach is all about risk management; instead of one "big bang" product release, you constantly iterate and learn from actual customer usage of the product. Some people refer to this approach as the "velocity of learning."

  17. How to Write a Strong Hypothesis

    Developing a hypothesis (with example). Step 1: Ask a question. Writing a hypothesis begins with a research question that you want to answer. The question should be focused, specific, and researchable within the constraints of your project. Example: Research question.

  18. A Beginner's Guide to Hypothesis Testing in Business

    A hypothesis or hypothesis statement seeks to explain why something has happened, or what might happen, under certain conditions. It can also be used to understand how different variables relate to each other. Hypotheses are often written as if-then statements; for example, "If this happens, then this will happen."

  19. How to test hypotheses as a product manager

    Simple product development hypothesis testing using a Z-test. There are a few statistical hypothesis tests we could implement. A common one is a Z-test. It allows us to take and test data samples and check whether the observed differences deviate from what we would expect given the hypothesis. Let's look at an example:

  20. The 5 Components of a Good Hypothesis

    Can you kindly let me know if I'm on the right track: "Certain customer segment (AAA) will find value in feature (XXX) to tackle their pain point." Change: using a feature (XXX)/product. Impact: will reduce monetary costs/help solve a problem. Who: for a certain customer segment (AAA). By how much: by 5%.

  21. Product development through hypotheses: formulating hypotheses

    Product development should mainly focus on the customer's needs. Therefore, the target group must be included in the formulation of the hypothesis. This prevents distortion and makes the hypotheses more specific. During development, hypotheses can be refined or the target audience can be adapted.

  22. Writing a Strong Hypothesis Statement

    In our hypothesis statement example above, the two variables are wildfires and tornadoes, and our assumed relationship between the two is a causal one (wildfires cause tornadoes). ... Presence or absence of animal products in the diet; A cat's health will suffer if it is only fed a vegan diet because cats are obligate carnivores:

  23. How to Write a Hypothesis in 6 Steps, With Examples

    4 Alternative hypothesis. An alternative hypothesis, abbreviated as H1 or HA, is used in conjunction with a null hypothesis. It states the opposite of the null hypothesis, so that one and only one must be true. Examples: Plants grow better with bottled water than tap water. Professional psychics win the lottery more than other people. 5 ...
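Several of the articles above break a hypothesis statement into the same components: a change (variable 1), an expected impact on a metric (variable 2), who is affected, and by how much. As a minimal illustration (the class and field names below are invented for this sketch, not taken from any of the cited articles), those components can be captured in a small if-then template:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """Illustrative container for the components of a hypothesis statement."""
    change: str      # variable 1: what we alter
    metric: str      # variable 2: what we expect to move
    segment: str     # who is affected
    lift_pct: float  # by how much (expected relative increase)

    def statement(self) -> str:
        # Render the components as an if-then statement.
        return (f"If we {self.change}, then {self.metric} for {self.segment} "
                f"will increase by {self.lift_pct:.0f}%.")

h = Hypothesis("improve page load speed", "the number of signups",
               "new visitors", 15)
print(h.statement())
# If we improve page load speed, then the number of signups for new visitors will increase by 15%.
```

Writing the statement this way makes the testable variables explicit before any experiment is designed.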
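The A/B testing and Z-test ideas above can be combined into one runnable sketch. This is an illustrative example, not code from any of the cited articles: it simulates a control group (A) and a variant group (B), then computes a two-proportion z statistic to check whether the observed difference in signup rates is larger than chance alone would predict:

```python
import math
import random

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Simulated experiment: variant B (e.g. a faster page) vs. control A.
random.seed(42)
n = 10_000
control = sum(random.random() < 0.10 for _ in range(n))   # ~10% baseline signup rate
variant = sum(random.random() < 0.115 for _ in range(n))  # ~11.5% = a 15% relative lift

z = two_proportion_z_test(control, n, variant, n)
# |z| > 1.96 corresponds to p < 0.05 (two-sided): evidence against the null hypothesis.
print(f"z = {z:.2f}, significant at 5%: {abs(z) > 1.96}")
```

In practice a library routine (e.g. a proportions z-test from a statistics package) would replace the hand-rolled function, but the logic is the same: compare the observed lift against its standard error under the null hypothesis of no difference.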
