What We Do
We study all populations—citizens, customers, clients, stakeholders, experts, influencers, media, CEOs, CFOs. The list is endless.
We use every kind of social science tool:
- All types of survey fieldwork—telephone, online, face to face, paper-and-pencil, mail/faxback;
- All survey instruments—forced-choice questions, multivariate questions, open-ended items, and free-flowing, in-depth interview guides;
- All project sizes—from surveys of thousands to interviews of dozens;
- All types of focus groups—feedback, psychoanalytic, dial, and experimental;
- Content analysis—objective, quantitative analysis of emails, speeches, phone calls, and the other channels through which people express opinions beyond answering closed survey questions.
While we use every kind of tool, we don't do every kind of study. We study opinions about food but don't do taste tests. We do competitive intelligence—sometimes vital for designing good questionnaires—but we don't do mystery shopping.
How We're Different
We are enthusiastic about the science of opinion measurement and don't view human subjects as items on a grocer's shelf.
Cans of soup don't change their size, weight, or taste depending on who does the measuring or why. But humans in surveys and focus groups can be affected by who is doing the research, its stated purpose, and any one of many small features of research design. Each of these can lead people to modify their apparent likes and dislikes and change what they claim to know or be aware of:
- Using only female interviewers will drive up support for affirmative action/employment equity;
- Using happy, enthusiastic interviewers will decrease dissatisfaction with a given good or service;
- Deploying a field operation with a different name will alter survey responses in the spirit of the name change;
- Changing the stated purpose of a survey will affect what respondents say;
- Designing a questionnaire that lets respondents feel comfortable admitting to unwise or unsocial opinions will make them more forthcoming about their warts, their true feelings, and their private preferences.
The head of research for a multinational corporation told us recently that our greatest value lay in understanding the psychology of respondents. Understanding respondent psychology explains why we were the only pollster to forecast a Harper majority in 2011. We asked seemingly apolitical questions about which party leader respondents would want as a friend's unit commander in Afghanistan, or to manage their family's assets in a crisis. One month ahead of election day, Harper was so far ahead of the other leaders that it was evident to us that he was too trusted to fail.