How to stay relevant & NOT alienate people: The benefits and challenges of passive metering and surveying – Part I

It’s no secret that there has been a decline in traditional online survey response rates. Online sample sizes that were once feasible are now a challenge to deliver. In the age of mobile technology, bite-sized information consumption and shorter attention spans, market research methodologies have (mostly) not kept up with today’s consumer.

The seamless integration of mobile devices into people’s lives makes understanding consumer behavior more complicated than ever before. Yet in trying to understand these behaviors, data collection has largely continued with traditional methods – at times expecting respondents to bear the cognitive load of long self-completion surveys – resulting in “bad data” and declining response rates.

In a world where online sample has become a cheap commodity, we tend to forget that sample is actually people – the people whose opinions we claim to value. But do we really know and respect the effects of our practices on the people we rely on for information? Sample is to research as soil is to growing food: we need to understand how best to maintain and enrich our soil. In that, there is something for everyone.

That's why we think it's important to look into the way market research is conducted. In this blog post, we present the benefits and challenges of passive metering and surveying. By analyzing the websites and apps people use, we explore whether consumers' perception of their online activities reflects their true browsing behavior, and investigate which purposes each data collection method is better suited to. The second part of the study will look into participants' research experience under both data collection methods.

Research Design

For this study, we used data from 485 respondents in Australia, combining their passively tracked behavioral data with their completed post-surveys. We tracked their online behavior for the entire month of April 2017. Of the 485 participants, 228 were desktop-only, 103 were smartphone-only, and the remaining 154 were cross-device.

Research Results

We started the post-survey by asking respondents a simple question: how much time do you spend on your desktop device per day for private use? We offered a range of time brackets as answers, and respondents simply had to pick one.

Time spent on desktop

We found that only 26% of our panel correctly assessed how much time they spent on their desktop each day. The other 74% either over- or underestimated their usage, with over half of all respondents overestimating.
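To make this kind of comparison concrete, here is a minimal sketch of how a reported answer can be scored against passively measured time. The bracket boundaries, function names, and sample values below are illustrative assumptions, not the actual answer options or figures from the study:

```python
# Illustrative answer brackets (NOT the study's actual options):
# each entry is (label, lower bound in minutes, upper bound in minutes).
BRACKETS = [
    ("15 minutes or less", 0, 15),
    ("16 to 30 minutes", 16, 30),
    ("31 to 60 minutes", 31, 60),
    ("1 to 2 hours", 61, 120),
    ("more than 2 hours", 121, float("inf")),
]

def bracket_of(minutes):
    """Map a measured daily duration (in minutes) to a bracket label."""
    for label, lo, hi in BRACKETS:
        if lo <= minutes <= hi:
            return label
    raise ValueError(f"no bracket for {minutes} minutes")

def classify(reported_label, measured_minutes):
    """Return 'correct', 'over', or 'under' for one respondent by
    comparing the reported bracket with the measured one."""
    labels = [b[0] for b in BRACKETS]
    reported_idx = labels.index(reported_label)
    measured_idx = labels.index(bracket_of(measured_minutes))
    if reported_idx == measured_idx:
        return "correct"
    return "over" if reported_idx > measured_idx else "under"

# Example: a respondent who reported "1 to 2 hours" but was measured
# at 25 minutes/day overestimated their usage.
print(classify("1 to 2 hours", 25))  # over
```

Tallying the three labels across all respondents yields the correct/over/under shares reported above.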

Time spent on mobile

We also asked our respondents the same question regarding their mobile usage.

In this case, people estimated their time slightly more accurately – 29% answered correctly – but even so, most respondents overestimated their mobile usage.

Time spent per category: desktop

Apart from asking about their general online usage, we also asked respondents to indicate which categories they were interested in. For each of those categories, they then estimated how much time they spent on it using their desktop device.

Overall, respondents were correct only 54% of the time, meaning almost half of their answers were wrong. They also overwhelmingly overestimated how much time they spent per category.

What is interesting to note is that of the 547 correct responses, 544 were '15 minutes or less'. This means people are very accurate when judging their usage in broad strokes – 'very little time' versus 'a lot of time' – but when asked to provide more detail, their estimates are incorrect. Only three correct answers fell above the '15 minutes or less' bracket (two of '16 to 30 minutes', one of '31 to 45 minutes'). And because so many respondents chose '15 minutes or less' per category, there was very little underestimation (1%).

Time spent per category: mobile

For mobile, we again see a slightly better share of correct answers – 63% – however, in this case, every correct guess was in the '15 minutes or less' bracket. Once again, people can accurately say that they spend very little time on a category, but cannot estimate how much time they spend once it exceeds 15 minutes.

Reported vs. measured: favorite website/app

We also compared the sites people indicated as their most used per category with what we measured their favorite to be (we defined 'favorite' as the website/app with the longest total duration).

Comparing the top 5 indicated against the top 5 measured per category, only 31% of the answers were correct.
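A sketch of how such a comparison can be computed, assuming the tracked data is available as (respondent, category, site, duration) records – the field names and example values here are illustrative assumptions, not the study's actual schema:

```python
from collections import defaultdict

def measured_favorites(records):
    """For each (respondent, category) pair, return the site with the
    longest total tracked duration -- the study's definition of 'favorite'.
    `records` is an iterable of (respondent, category, site, minutes)."""
    totals = defaultdict(float)
    for resp, cat, site, minutes in records:
        totals[(resp, cat, site)] += minutes
    best = {}
    for (resp, cat, site), total in totals.items():
        key = (resp, cat)
        if key not in best or total > best[key][1]:
            best[key] = (site, total)
    return {key: site for key, (site, _) in best.items()}

def match_rate(reported, records):
    """Share of (respondent, category) pairs where the reported favorite
    matches the measured favorite."""
    measured = measured_favorites(records)
    hits = sum(1 for key, site in reported.items() if measured.get(key) == site)
    return hits / len(reported)

# Illustrative data: respondent r1 reports abc.net.au as their news
# favorite, but the meter measured more time on news.com.au.
records = [
    ("r1", "news", "abc.net.au", 40.0),
    ("r1", "news", "news.com.au", 55.0),
    ("r1", "social", "facebook.com", 120.0),
]
reported = {("r1", "news"): "abc.net.au", ("r1", "social"): "facebook.com"}
print(match_rate(reported, records))  # 0.5
```

Extending the same idea to the top five sites per category (rather than the single longest-duration site) gives the 31% match figure reported above.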

It was very obvious from our research that people use the same sites and apps for a variety of different reasons. Many websites and apps are so diverse in content that they are difficult to classify into a single category. Facebook, for example, showed up as a favorite in all 15 of our categories, including animals, business, and property.

The most common answer participants gave us was either 'I don't know' or incomprehensible words and numbers, suggesting that people were filling in the survey not to express their opinion, but simply for the sake of completing it. This alone made 10% of our survey data unusable.


From this first part of our study, we can conclude that people cannot accurately report their online behavior because it is too complex. While people can accurately tell when they spend a very short amount of time online, they cannot estimate longer periods of activity, and they tend to overestimate the time they spend online.

To avoid drawing the wrong conclusions from a single source of data, it is essential to carefully select and merge the right data sources to create an accurate and complete picture of consumers.

In the next part of this study, we will examine the research experience of participants for both data collection methods.