Science, art and usefulness
26 January 2015
"Social science is an example of a science which is not a science. They follow the forms. You gather data, you do so and so and so forth, but they don’t get any laws, they haven’t found out anything.” Richard Feynman, Nobel Prize winning Physicist.
As a qualitative researcher I often get involved in arguments with my quantitative colleagues and with other quallies about the nature of data, insight and models.
Even in science, almost everything that is currently known can only truly be termed a model. The periodic table I learned at school, which I was told was ‘complete’, is not the same one that my daughter is learning. Science still uses terms such as ‘dark matter’ and ‘junk DNA’ which, to my mind, suggests that there might be more out there to find.
One of the points Feynman is making is that there is little or no certainty in social research. The data is simply too partial, fragmented and unstructured to provide proof in the scientific sense.
There is certainly a difference between scientific models and the models we use as researchers and marketers. Einstein’s model of general relativity has been subjected to thousands of experiments, each one designed to ‘prove’ that it is incorrect. That’s how science works; if you can provide one single situation where a theory breaks down, the theory is wrong.
The models of human behaviour that we currently have, whether diagrammatic models such as Maslow or BCG, word models such as the marketing mix, or behavioural models such as behavioural economics, are useful rather than true in the scientific sense. I recall a marketing lecturer describing Porter’s five forces as if she were describing Newton’s laws of motion, which was a little unsettling. Sometimes we kid ourselves that the models we use, or even the ones that we create, are more than models.
Segmentation is a prime example. Quantitative segmentation is based on data, on numbers, so it must be ‘true’, right? Well, unless you have a segmentation based solely on demographics and proven behaviour, many of those data points relate to subjective responses to questioning. Even with a very solid dataset, the process of segmentation is as much art as science. There are always outliers, and there is always a segment in the middle that you don’t quite know what to call, but would rather not describe as ‘the dump segment’.
In advertising for cosmetics, the device of the ‘lab coat’ is used to provide a not-so-subtle subtext that the product is scientifically produced and therefore proven to work. In quantitative research we sometimes run the risk of using graphs and charts in the same way: if it’s on a bar chart, then it must be ‘fact’. My contention is that quantitative research is not so different from qualitative research, in so far as the data ‘suggests’ and ‘implies’ findings. How we choose to express and interpret those findings is subjective. Even statistical significance is no more than a guide to greater validity when applied to research data.
If this sounds like I’m trashing quantitative research, I’m really not. It’s useful to be able to know whether people prefer product A to product B. Often, this will be a good indicator that your client will sell more of product A. It’s useful to be able to design and target messages for particular sub-groups. As market researchers, the currency we deal in is usefulness, not proof.
In qualitative research, we also deal in the useful. It’s useful to be able to know why people think and behave in certain ways. It’s useful to be able to know how people understand communications and advertising. People think differently, and it’s useful to stretch that range of thinking beyond the client or professional ‘expert’ consultants. Sometimes respondents provide greater clarity and insight simply because they don’t know any better. Sometimes they are spectacularly wrong about something, but in a way that’s interesting. It is our job as researchers to recognise these moments and distil them in a way that’s useful for our clients.
I recall a lecture where the speaker said that very little of what he was about to tell us was true but that, if we pretended it was, we might find it useful. I wonder how many of the insight presentations you’ve seen or written could have begun with the same caveat.