Deeply understanding your customers isn’t just a competitive advantage — it’s essential. At Solsten, we’ve dedicated ourselves to advancing this understanding through rigorous psychological science, working alongside brands and companies of all sizes to uncover the deeper patterns that drive consumer behavior and engagement.
As fellow researchers, we believe in transparency about our methods and the science behind them. This article offers a detailed examination of our methodology, covering:
- The seven-dimensional framework we use to measure consumer psychology
- Our adaptive assessment technology and its statistical validation
- The construction and validation of our constructs and item bank using representative norming samples
- Technical implementation requirements and data collection parameters
- Empirical results from large-scale deployments
What follows is a comprehensive look at how we approach consumer psychology, our validation processes, and the technical foundation that enables accurate psychological measurement at scale.
The Measurement Framework
Our methodology measures consumer psychology across seven key dimensions, arranged along a gradient from most stable to most contextual. Understanding this gradient is necessary for research applications and insight activation:
Personality Traits represent the most stable dimension. They are general tendencies in how people interact with the world that remain largely consistent throughout their lives. These foundational characteristics drive a person’s basic interaction patterns with other people and their environment.
Intrinsic Motivators represent fundamental internal drivers that shape how people naturally want to engage with the world. Unlike external rewards or contextual factors, intrinsic motivations remain relatively stable throughout life and form core aspects of personality.
For example, two individuals might both achieve success in their career, but for different intrinsic reasons — one might be driven by an internal need for productivity (where achievement is simply the observable outcome), while another might be motivated by an innate desire for competition. Understanding these true intrinsic motivations, rather than just observing behaviors, is essential for predicting and sustaining meaningful engagement.
This distinction helps avoid the common mistake of confusing visible behaviors (like achievement) with the underlying psychological drivers that actually motivate those behaviors.
Values show moderate variability and represent what’s important to people. These characteristics tend to remain fairly stable but can change in response to major life events or life stage transitions, making them valuable for understanding a consumer’s deeper priorities and principles.
Cultural Attributes reflect how a person relates to and participates in various groups and social systems. While personality traits are more like hardware, cultural attributes function more like software systems that can be “installed” or “uninstalled.” An individual may relate to multiple cultures simultaneously, and these can remain stable over time but may also vary based on the culture of the product or experience they’re engaging with.
Usage-Based Emotions measure the emotional state a person typically experiences before using a specific product or service. Rather than capturing general emotions (which can change moment to moment), we focus on the relatively stable emotional patterns that might drive product usage. While variable, these patterns provide insight into what compels consumers to engage with particular experiences.
Affinities show high variability, representing interests and preferences that can change significantly over relatively short time periods. These reflect a consumer’s current attractions and inclinations, offering immediate insight into preference patterns.
Behaviors represent the most contextual dimension, covering specific actions and reactions within particular products or environments. While highly specific and changeable, behaviors become more predictable when understood in the context of a person’s more stable characteristics.
This gradient directly informs how we collect and analyze data. Our adaptive assessment technology specifically targets the more stable dimensions (personality traits and motivators), where sophisticated measurement is most important, while additional targeted questions gather data about the more variable dimensions.
Our Methodology
What makes this approach unique is how we measure these dimensions. At the core of our methodology is adaptive assessment — a scientifically proven approach where questions are dynamically selected based on previous responses.
This technology has been used successfully in fields like educational testing, employee assessment, and clinical psychology, where accuracy and efficiency are necessary.
Our adaptive assessment technology specifically targets personality traits and motivators, while additional questions gather data about values, affinities, and behaviors. This creates a comprehensive view of consumers that goes far beyond basic demographics.
Adaptive Assessment Methodology
Our adaptive assessment technology represents a significant advancement over traditional fixed questionnaires. Here’s how it works:
Dynamic Question Selection
Rather than presenting a fixed set of questions, our system:
- Evaluates each response in real-time
- Selects subsequent questions based on previous answers
- Focuses on areas where more information is needed
- Skips redundant or low-information questions
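To make the selection step concrete, here is a minimal sketch of information-based item selection under Item Response Theory, using a two-parameter logistic (2PL) model. The model choice, item parameters, and selection rule are simplified illustrations, not our production algorithm:

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at trait estimate theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))  # probability of endorsement
    return a * a * p * (1.0 - p)

def next_item(theta, item_bank, asked):
    """Choose the unasked item that is most informative at the current estimate."""
    candidates = [(i, item_information(theta, a, b))
                  for i, (a, b) in enumerate(item_bank) if i not in asked]
    return max(candidates, key=lambda pair: pair[1])[0]

# Hypothetical item bank of (discrimination a, difficulty b) pairs
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 2.0)]
chosen = next_item(theta=0.4, item_bank=bank, asked={0})
```

After each response, the trait estimate is updated and the loop repeats, which is how redundant or low-information items end up being skipped naturally.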
Efficiency and Accuracy
This approach provides several advantages compared to a fixed set of questions:
- Shorter assessment length while maintaining measurement quality
- Reduced respondent fatigue
- Higher completion rates
- More precise measurement of individual traits
Quality Assurance
Our quality assurance focuses primarily on ensuring excellence in the item development process, rather than within the adaptive assessment itself:
- We maintain rigorous standards for item selection using Item Response Theory (IRT) statistics
- We validate all items and ensure high measurement accuracy during the item generation process
- We regularly add new items to extend our item bank, with each new item carefully calibrated using representative norming samples
- Our English items are translated into German, French, Japanese, and Chinese to enable assessment across diverse cultures
Additionally, we employ comprehensive data cleaning procedures after assessment data is gathered. While not performed in real-time during the assessment itself, this post-collection analysis allows us to screen out low-quality responses and ensure the integrity of our final datasets.
Our assessment has been proven effective across millions of respondents globally, maintaining consistent quality while adapting to individual respondent characteristics.
Rather than presenting every person with the same fixed set of questions, our proprietary AI system learns from each response to select the most relevant subsequent questions.
Traditional assessments use a “one size fits all” approach, meaning everyone must answer the same questions regardless of their relevance. This often includes questions that aren’t well-suited for that particular individual, resulting in the collection of additional or irrelevant information.
In contrast, our adaptive approach allows us to maintain high measurement quality while reducing assessment length — a key factor in obtaining representative samples from actual humans.
Solsten’s adaptive assessment adjusts itself to each customer in real-time. As customers respond to questions, the system progressively learns about them, determining which aspects of their psychology have been captured well and which areas need more information. This allows us to ask only the most informative questions for each specific customer.
The result is a shorter, more engaging assessment that maintains exceptionally high measurement quality.
These measurements reveal themselves in specific behaviors. For example, we can see how personality traits manifest in daily interactions: a customer scoring high in neuroticism might react more intensely to product availability issues or become anxious about making the right choice among multiple product options. Understanding these connections lets organizations anticipate and design experiences that better serve their audience’s needs and natural tendencies.
Unlike traditional marketing or user surveys, Solsten’s approach is built on a robust theoretical foundation developed through decades of psychological research. Our assessment process is more rigorous and scientifically grounded than typical marketing surveys. Additionally, because we gather data from users during genuine moments of engagement rather than from paid survey takers, we reach a broader, more representative audience.
This means our insights reflect authentic consumer psychology rather than just the perspectives of professional survey takers.
This approach has proven effective at scale: since 2018, Solsten has enhanced the experiences of over 500 million users and customers, working with brands of all sizes, including DraftKings, Peloton, Sony, Activision, and EA.
Key Validation Points
When examining methodological validity in consumer research, it’s important to understand that apparent precision doesn’t guarantee accuracy.
Consider a camera with object recognition that confidently identifies a giraffe with 90% certainty — but it’s actually looking at a monkey. The image is crystal clear, the confidence level is high, but the fundamental identification is wrong. This illustrates a significant challenge in consumer research: without proper scientific validation, an assessment can appear highly precise while measuring something entirely different than intended.
This is particularly relevant when measuring latent characteristics — psychological traits that can’t be directly observed. Unlike demographic data or behavioral metrics, personality traits and motivations require a theoretical foundation and empirical validation to ensure we’re measuring what we intend to measure. This foundation isn’t built overnight; the Big Five personality model, for instance, emerged from decades of scientific discourse and empirical evidence.
This is why scientific validity is necessary to understand consumer psychology. Scientific validity means we can prove that we’re actually measuring what we intend to measure — in other words, that our assessment truly captures psychological traits rather than random or misleading information.
Validation and Quality Control
Our validation process meets or exceeds clinical psychology standards across multiple dimensions:
Reliability Metrics
- Internal consistency (Cronbach's alpha): Range 0.7-0.95, mean 0.85, median 0.865
- Empirical Reliability: Range 0.74-0.96, mean 0.88, median 0.90
- These metrics exceed the minimum threshold (0.70) required for clinical applications
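For readers who want to see what the internal-consistency figure refers to, a bare-bones Cronbach's alpha computation looks like this (the Likert responses below are invented purely for illustration):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items score matrix."""
    n_items = len(scores[0])

    def var(xs):
        # Sample variance across respondents
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in scores]) for j in range(n_items)]
    total_var = var([sum(row) for row in scores])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert data: 4 respondents x 3 items of one scale
data = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2]]
alpha = cronbach_alpha(data)
```

Values above 0.70 are the conventional floor for acceptable internal consistency, which is the threshold referenced above.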
Structural Validation
- Root Mean Square Error of Approximation (RMSEA) < 0.08
- Comparative Fit Index (CFI) > 0.9
- Tucker-Lewis Index (TLI) > 0.9
- Information weighted fit (Infit) ≥ -2 (z-score) for individual items
Quality Control Systems
Our algorithms automatically validate response patterns by checking for:
- Randomized answering
- Straightlining (selecting the same response repeatedly)
- Other forms of survey manipulation
- Response bias patterns similar to clinical assessment protocols
We also include attention check items to filter out low quality responses.
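As a rough illustration of how such screening rules can work, here is a sketch with two simple checks; the thresholds and heuristics are simplified stand-ins, not our actual detection models:

```python
def longest_run(responses):
    """Length of the longest run of identical consecutive answers."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def screen_response(responses, attention_index, expected, max_run=8):
    """Screen one response vector: attention check, then straightlining check."""
    if responses[attention_index] != expected:
        return "failed_attention_check"
    if longest_run(responses) >= max_run:
        return "straightlining"
    return "ok"

# Hypothetical 5-point Likert responses with an attention check at position 5
flag = screen_response([3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
                       attention_index=5, expected=3)
```

Production systems typically add model-based person-fit statistics on top of rule-based checks like these, but the principle is the same: flag response vectors before they enter the final dataset.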
These validation processes are continuously applied across our global dataset, which spans millions of respondents across more than 250 countries and territories, ensuring consistent quality regardless of cultural context or scale.
This level of accuracy wasn’t achieved overnight — it’s the result of continuous refinement and testing. While our assessment is used across millions of respondents globally, in their native languages, our reliability and validity measurements come from carefully constructed norming samples designed to be demographically representative of specific populations. This methodologically rigorous approach to validation distinguishes our assessment from typical market research surveys.
One advantage of our methodology is the context in which we gather data. Unlike traditional market research that often relies on paid survey takers — who represent a very specific and limited population — the majority of our data comes from video game players in their natural gaming environment, though our collection is not limited to games.
It’s worth noting that gaming is far from a niche interest today: with 3.4 billion players worldwide (compared to roughly six billion smartphone users), gaming is as common as watching TV, and its reach exceeds that of all streaming service subscribers combined. This massive, diverse population provides an ideal foundation for psychological research. This approach provides several key benefits:
- More representative sampling: Our data is gathered when users are highly motivated, since gaming is unique in how eagerly people engage and seek to improve their experience.
- Contextual validity: Data is gathered within the actual gaming experience, not in artificial research environments
- Game-specific insights: Each assessment provides data specifically relevant to that game’s audience, enabling more precise and actionable insights
- Ecological validity: Companies can connect their KPI and behavioral data to our system to identify correlations between customer behaviors and specific traits, such as discovering that highly competitive users are more likely to shop Black Friday sales
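A minimal sketch of such a trait-to-KPI correlation, computing Pearson's r on invented per-customer data (the variable names and numbers are hypothetical):

```python
def pearson_r(xs, ys):
    """Pearson correlation between a trait score and a behavioral KPI."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-customer data: competitiveness score vs. Black Friday purchases
competitiveness = [0.2, 0.5, 0.9, 0.4, 0.8]
purchases       = [0,   1,   3,   1,   2]
r = pearson_r(competitiveness, purchases)
```

In practice such analyses also need significance testing and controls for confounds, but even this simple form shows how behavioral data and trait scores can be joined on a per-customer basis.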
The quality of this data is further ensured through our adaptive assessment technology, which reduces respondent fatigue while maintaining measurement accuracy.
To put this in practical terms: With Solsten’s scientifically validated approach, you can make decisions based on statistically robust insights about player psychology, backed by rigorous reliability testing and appropriate statistical power. Or, you can use it to train your own AI.
Implementation Evidence
Our methodology has been successfully implemented across various gaming contexts, from established titles to new launches. Here are specific examples:
Large-Scale Implementation
When Supercell sought to revitalize Hay Day, a game with 15M players, our assessment was implemented with minimal disruption:
- Over 35,000 validated responses collected within days
- Actionable insights delivered within two months
- Resulted in significant engagement improvements
New Game Development
Mainframe Industries used our methodology to validate their MMO concept Pax Dei:
- Successfully identified and validated target player personas
- Achieved 60% alignment with primary target persona in first closed alpha
- Additional 20% alignment with secondary target segment
- Accelerated development velocity through clear audience understanding
Strategic Pivot Support
For Languini’s game team, our assessment revealed unexpected player motivations:
- Identified altruism and interpersonal relationships as key drivers
- Guided successful feature development
- Led to over 80% player engagement with new mechanics
These implementations demonstrate our methodology’s adaptability across different game types and development stages, while maintaining consistent quality and delivering actionable insights.
Implementation Process and Technical Requirements
For Consumer Insights teams looking to implement psychological assessment in their games, here’s what you can expect:
Assessment Distribution
- Implementation occurs through an in-game or in-app interstitial, text message, or email
- Technical integration requires only passing player IDs as URL variables when a player clicks the prompt
- Optional incentives (e.g., in-game currency, raffle prize, etc.) can be offered for assessment completion
- Additional custom questions or creative elements can be incorporated based on your needs
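For illustration, the URL-variable handoff described above could look like this on the client side. The base URL and parameter names here are placeholders; the actual values come from the integration setup:

```python
from urllib.parse import urlencode

def assessment_url(base_url, player_id, extra_params=None):
    """Build an assessment link that carries the player ID as a URL variable."""
    params = {"player_id": player_id}
    if extra_params:
        params.update(extra_params)
    return f"{base_url}?{urlencode(params)}"

# Hypothetical assessment link with the player ID and a locale appended
url = assessment_url("https://assessment.example.com/s/abc123", "player_42",
                     {"locale": "de"})
```

Because the ID travels with the link, assessment results can later be joined back to that player's behavioral data without any deeper client integration.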
Data Collection Requirements
- Assessments are distributed to games with at least 3,000 DAU
- Based on random sampling of the audience, roughly 10% of DAU typically complete the assessment with clean, valid responses within the first day
- With 10k DAU, we can achieve a statistically robust sample (95% confidence level with a margin of error of approximately 1%) that can effectively represent the game’s player base
- At 100k DAU, the assessment becomes representative of populations in the hundreds of millions
For context, most professional political polls use samples of approximately 1,000-2,000 respondents to represent entire national populations (often 200+ million eligible voters) with a margin of error of 3-4% at a 95% confidence level. Our methodology achieves similar or better statistical power while benefiting from organic engagement rather than recruited panel participants.
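The margin-of-error comparison follows directly from the standard formula for a sample proportion; a quick sketch using z = 1.96 for 95% confidence and the worst-case proportion p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-person political poll vs. a larger in-game sample
poll_moe   = margin_of_error(1_000)   # ~3.1%
larger_moe = margin_of_error(10_000)  # ~1.0%
```

Note that for large populations the margin of error depends almost entirely on the sample size, not the population size, which is why a few thousand well-sampled respondents can represent hundreds of millions of people.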
Integration with Existing Research
- The assessment can be integrated alongside current player research methods via Solsten’s Typing Tool
- Results are accessible through our dashboard, with one of our experts guiding you through important metrics, similarities, and differences among your players
- Our Customer Success team remains available for ongoing support and additional expert consultation
Conclusion: The Science of Player Understanding
The field of player research demands both methodological rigor and practical applicability. Our approach brings scientific validity to player understanding while maintaining the agility needed in modern game development. The combination of:
- Scientifically validated psychological measurements
- Adaptive assessment technology
- Representative sampling from actual players
- Integration with existing research methods
creates a robust foundation for player research that goes beyond traditional demographic or behavioral analysis.
For consumer insights teams looking to enhance their research capabilities, we offer:
- Detailed technical documentation about our measurement methodology
- Access to validation studies and reliability metrics
- Consultation on integration with existing research workflows
- Ongoing support for research design and analysis
We invite you to examine our methodology in detail and discuss how psychological assessment could enhance your user research capabilities. Our team is available to discuss validation studies or explore specific implementation requirements for your experience.
Schedule a demo today with our insights experts to learn more.