Definition: MaxDiff analysis is a survey-based research technique used to quantify preferences. A MaxDiff question shows respondents a set of items, asking them to choose what is most and least important. When the results are displayed, each item is scored, indicating the order of preference. MaxDiff, short for maximum difference scaling, is sometimes referred to as best-worst scaling.
Basic Concept: MaxDiff goes beyond a standard rating question. It forces respondents to pick the most and least important item from a list, helping to identify what your audience truly values. MaxDiff items are sometimes referred to as features or attributes.
[Sample MaxDiff question: "Set 1 of 3," showing a list of resort attributes with a "Least Important" and "Most Important" choice beside each one.]
MaxDiff is best used when you want to identify a preference. A real estate developer could use the above sample question to determine what resort features (attributes) would be most preferred for an upcoming project. To maximize the budget, the company should focus on areas that are most important to potential guests. When respondents evaluate this question, features are compared against one another, and a preference can be identified.
If the real estate developer used a matrix or separate rating questions instead of MaxDiff, respondents would likely rate all features as important. In that scenario, the developer wouldn't have the data needed to maximize the budget; the developer would use resources in areas that guests don't truly value.
Looking at the sample results below, it seems that "mattress comfort" is the most important, and a hotel gym is the least important. Traditional question types would not have drawn this conclusion. For example, if you asked, "How important is a hotel gym when choosing a resort?" many people would likely rate a gym as important. But when a gym is compared to other features, it becomes less important overall.
Another way to collect preference data is with conjoint analysis, which is a cousin of MaxDiff. Conjoint analysis is used when collecting multi-level preference data. MaxDiff is best used to collect single-level preference data, like in this real estate example.
To create a MaxDiff survey, create a survey as normal, and then add a MaxDiff question where you see fit. You can add an unlimited number of attributes for respondents to evaluate. You can display up to fifty (50) sets, or you can display all attributes inside one single set. The more sets you show, the more times individual features will be compared against one another.
To avoid survey fatigue, it is best to show roughly five attributes per set. To ensure attributes are evaluated evenly, you would want to show each attribute roughly three to five times per question.
The MaxDiff calculator below will help you determine how many sets to show:
The above calculator uses the following equation to come up with the number of sets required, where A is the total number of attributes, k is the number of attributes shown per set, and PR is how many times the MaxDiff question will show each attribute to a respondent:

Sets = (A × PR) / k, rounded up to the nearest whole set.
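As a minimal sketch of this calculation, assuming the set count is simply the total attribute appearances (attributes × PR) divided by the number of attributes shown per set:

```python
import math

def maxdiff_sets(num_attributes: int, attributes_per_set: int, per_respondent: int) -> int:
    """Number of sets = (A * PR) / k, rounded up to a whole set.

    per_respondent (PR) is how many times each attribute should be
    shown to a single respondent.
    """
    return math.ceil(num_attributes * per_respondent / attributes_per_set)

# Example: 10 attributes, 5 per set, each attribute shown 3 times -> 6 sets
print(maxdiff_sets(10, 5, 3))
```

With the recommended five attributes per set and three to five exposures per attribute, this keeps the survey short while still comparing every attribute evenly.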
You would want to collect a minimum of one hundred (100) responses for a MaxDiff question. Two hundred (200) or more responses will produce even better data, as there would be more variation in the sets and attribute combinations. If you wanted to filter your MaxDiff results by a subgroup, for example, by gender, you would want to collect a minimum of 100 responses for both males and females. The response requirements would be the same for each additional sub-group you wish to study.
By default, SurveyKing randomizes attributes when showing multiple sets and has a system to ensure attributes display as evenly as possible.
Some research projects require you to define the attributes in each set. For example, maybe you want to compare “Mattress Comfort,” “Room Cleanliness,” and “Hotel Gym” in the first set.
A feature unique to SurveyKing is the ability to define the attributes displayed for each set. To access this feature, click “Define set attributes” within the question editor. The top section of the editor shows the attributes that will display in the set, and the bottom section shows the attributes available to choose from. Drag from the bottom section to the top section to define the attributes displayed in the set. You can use the “Next” and “Previous” buttons to toggle between sets.
To ensure respondents evaluate attributes against those in the given set, we do not include a button to go back. If you would like to give respondents the ability to start over, enable the "Reset Button" option inside the question editor. This button will remove all answers from the MaxDiff question and reset the display back to the first set.
MaxDiff analysis is, in effect, viewing the results for a MaxDiff survey question. When you go to the results page, you will see a data table with each attribute along with the percentage and count of the times it was ranked as most appealing, least appealing, or not chosen.
The attributes in the data table will be ranked based on the score, which is computed using the following formula:

Score = (times chosen most important − times chosen least important) / times displayed
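A small sketch of this scoring, assuming the standard best/worst count formula (most-important picks minus least-important picks, divided by the number of times the attribute appeared):

```python
def maxdiff_score(most: int, least: int, shown: int) -> float:
    """Best/worst count score: (most - least) / times displayed."""
    return (most - least) / shown

# Example counts for one attribute that appeared in 300 sets:
print(round(maxdiff_score(most=180, least=30, shown=300), 2))  # 0.5
```

An attribute picked as most important every time it appears would score +1.0, one always picked as least important would score -1.0, and one never chosen either way would score 0.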
If you download the results to a spreadsheet, column headers for the most important and least important choice in each set will be displayed, with each respondent's selections in the rows below.
From the sample data, we can see "Mattress Comfort" has the highest score and is by far the most important attribute when choosing a hotel resort. "Mattress Comfort" is more than double the next positive attribute, "Room cleanliness." The resort could quantify the "All-inclusive package" as neutral; some respondents think it's important, while an almost equal number of respondents don't. "Customer service" and "Hotel gym" ended up with negative scores, indicating that respondents do not think these attributes are important overall.
While this sample data is straightforward, it shows you how MaxDiff goes beyond other standard question types to identify what your audience truly values. This type of data is what should drive decisions by your organization.
Another feature unique to SurveyKing is the ability to create a segment report for a MaxDiff survey question. This report is useful to drill down into the MaxDiff analysis and find hidden relationships. For example, you might include a question in your survey that asks for the respondents' gender. You could then create a segment report (or a cross-tabulation report) by gender. The results would include the MaxDiff scoring for "Male" and "Female" in two different tables. You may notice "Males" prefer a certain attribute that females do not prefer or vice versa.
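As a hypothetical sketch of what a segment report computes, the snippet below groups simulated responses by gender and calculates a best/worst count score per attribute within each segment. The records and field names are invented for illustration:

```python
from collections import defaultdict

# Simulated records: (segment, attribute, picked_most, picked_least)
responses = [
    ("Male",   "Hotel gym",        1, 0),
    ("Male",   "Mattress comfort", 1, 0),
    ("Female", "Hotel gym",        0, 1),
    ("Female", "Mattress comfort", 1, 0),
    ("Male",   "Hotel gym",        0, 1),
    ("Female", "Mattress comfort", 1, 0),
]

# (segment, attribute) -> [most count, least count, times shown]
stats = defaultdict(lambda: [0, 0, 0])
for gender, attr, most, least in responses:
    s = stats[(gender, attr)]
    s[0] += most
    s[1] += least
    s[2] += 1

for (gender, attr), (most, least, shown) in sorted(stats.items()):
    print(f"{gender:6} {attr:18} score={(most - least) / shown:+.2f}")
```

Comparing the per-segment scores side by side is what surfaces the "hidden relationships" mentioned above, such as one gender valuing an attribute the other does not.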
MaxDiff analysis can also use regression for the output. Because MaxDiff collects categorical data (a choice such as "Most Important") instead of continuous data (like a number rating), a particular type of regression called logistic regression is used. Here is an introduction to logistic regression, as well as a video that explains the general concept.
When doing any regression with survey data, the coefficients of each independent variable are the driving factors. In MaxDiff, the independent variable would be each attribute, and the dependent variable would be if an attribute is chosen as "Most Important." Many statistical programs use different models to calculate these coefficients, but the outcome would be similar.
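As a rough, hypothetical sketch of the idea (not SurveyKing's actual model), the snippet below fits a one-coefficient-per-attribute logistic regression by gradient descent on simulated choice data: each observation is one attribute appearance, and the target is whether that attribute was picked as "Most Important." The attribute names and pick probabilities are invented for illustration:

```python
import math
import random

random.seed(0)
attributes = ["Mattress comfort", "Room cleanliness", "Hotel gym"]
pick_prob = [0.7, 0.5, 0.1]  # simulated true preference strengths

# Each record: (attribute index, 1 if chosen as "Most Important" else 0)
data = []
for _ in range(600):
    idx = random.randrange(len(attributes))
    data.append((idx, 1 if random.random() < pick_prob[idx] else 0))

# Fit one logistic-regression coefficient per attribute (dummy coding).
coefs = [0.0] * len(attributes)
for _ in range(2000):
    grads = [0.0] * len(attributes)
    for idx, chosen in data:
        p = 1 / (1 + math.exp(-coefs[idx]))  # predicted pick probability
        grads[idx] += chosen - p             # log-likelihood gradient
    for i in range(len(coefs)):
        coefs[i] += 0.01 * grads[i]

for name, c in zip(attributes, coefs):
    print(f"{name}: {c:+.2f}")
```

Attributes chosen as most important more often end up with larger coefficients, which is exactly the ordering a MaxDiff score or utility report reflects.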
Regression can be used to calculate utilities and to create a latent class analysis report. Simple utilities for MaxDiff are usually highly correlated with the results of a simple best/worst count. This research paper by Sawtooth Software has an explanation.
Because best/worst counts are highly correlated with regression, SurveyKing only uses regression when computing latent classes. The latent class module is a future addition to the platform.
The table below is an example of a utility report you would see on other platforms from the regression output. Utilities are not on a common scale across MaxDiff projects; they only matter in the context of the current MaxDiff question you are analyzing.
We could interpret this as "Mattress Comfort" giving .37 units of happiness to our respondents, while "Hotel Gym" takes away .52 units. It's not that a hotel gym is bad; all things being equal, the gym just isn't adding happiness the way other features are. We could also say "Room cleanliness," at .24, gives double the happiness that the "All-inclusive package" does at .12.
Learn more about utility scores.
Latent class analysis groups similar MaxDiff responses together into what are called "classes." Latent class analysis is similar to cluster analysis. For example, the software might give us Class #1, which on average are respondents who ranked attributes in roughly this order: "Mattress comfort" > "Room cleanliness" > "All-inclusive package." The ">" symbol here means greater than.
Statistical software will ask you how many classes you want. The software will compute the classes, and a regression coefficient or utility will be calculated for each attribute. Once the coefficients are calculated, a probability is displayed to show how likely an attribute is to be picked as most important. The table below is an example output.
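One common way packages turn utilities into such probabilities (though not necessarily SurveyKing's exact method) is the multinomial-logit, or softmax, rule: exponentiate each utility and divide by the sum. As a sketch, using the utility values from the earlier example:

```python
import math

# Utility values from the sample report above.
utilities = {
    "Mattress comfort":      0.37,
    "Room cleanliness":      0.24,
    "All-inclusive package": 0.12,
    "Hotel gym":            -0.52,
}

# Softmax: probability an attribute is picked as most important.
total = sum(math.exp(u) for u in utilities.values())
for name, u in utilities.items():
    print(f"{name:22} {math.exp(u) / total:.1%}")
```

The resulting percentages sum to 100% and are often reported as "shares of preference," which are easier for stakeholders to read than raw utilities.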
Learn more about latent class analysis.