Using data is a great way to tell a story. Be careful it doesn’t tell the wrong one.
In today’s world, data has become a critical component of our professional and personal daily lives. It helps us decide where to buy a home, what advice we give our children, and what location to visit on vacation. For marketers, data helps us decide which audience to target, what message to send, and what offer to provide. There’s ample data out there to help us make these important decisions, but it’s more crucial than ever to understand the insights—and even the human connections—that the numbers alone can’t tell us. Here are a few examples of when just the data alone isn’t telling the full story:
Big data is just data without a human lens
When Chris Ingraham, a data reporter with The Washington Post, reviewed a U.S. Department of Agriculture report that ranked the nation’s counties based on climate and topography, he mapped the USDA’s dataset and wrote an article declaring rural Red Lake County, Minnesota, “the absolute worst place to live in America.”
The people of the small town of Red Lake Falls were not fans of his article. Residents sent him scenic photos, the media got involved, state representatives and locals tweeted and emailed, and a Red Lake resident even invited Ingraham to visit. All were ready to prove to him that there’s more to a story than just the numbers.
Ingraham took them up on their invitation and ended up falling in love with the place, the people, and the respite from his faster-paced and more congested life in the D.C. metro. Fast forward six months, and he, his wife, and their two-year-old twins had moved to the rural town.
This story is a great reminder to consider the limitations of data. Numbers don’t always tell the whole story. When looking at data to inform decisions, remember to put on your “human hat.” Numbers alone don’t make a marketing campaign successful, but adding an emotional connection can make all the difference in the world.
Not all numbers are created equal
Another example of an incomplete picture is when the numbers provided aren’t an “apples to apples” comparison. With several lawsuits against Purdue Pharma alleging that it had downplayed the addiction risk of OxyContin, some journalists took a closer look at the numbers, as described in the ProPublica article “Data Touted by OxyContin Maker to Fight Lawsuits Doesn’t Tell the Whole Story.”
The article brings to light how Purdue Pharma moved to dismiss cases against it based on a Drug Enforcement Administration database showing Purdue sold only 3.3% of the prescription opioid pain pills in the U.S. from 2006 to 2012. However, those numbers don’t take into account the potency and dose of each of the pills—the 3.3% was based only on the volume of pills. But when you measure the amount and potency of opioid that Purdue’s pills contained (higher doses of opioids are associated with a greater risk of overdose), the recalculated numbers show that Purdue was responsible for more than 16% of the nation’s market share. In 13 states, Purdue was responsible for 20% or more of retail opioid painkiller sales—in some states as high as 31%.
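The gap between the two figures comes down to what you weight by. Here is a minimal sketch of that recalculation, using made-up illustrative numbers (not the actual DEA database figures) to show how a raw pill-count share and a dose-weighted share can diverge:

```python
# Illustrative only: hypothetical figures, not the actual DEA data.
# Each entry: (manufacturer, pills_sold, avg_dose_strength_per_pill)
sales = [
    ("A", 33, 60.0),   # fewer pills, but high-dose
    ("B", 967, 10.0),  # many low-dose pills
]

total_pills = sum(pills for _, pills, _ in sales)
total_dose = sum(pills * dose for _, pills, dose in sales)

for name, pills, dose in sales:
    pill_share = pills / total_pills
    dose_share = (pills * dose) / total_dose
    print(f"{name}: {pill_share:.1%} of pills, {dose_share:.1%} of total dose")
```

Manufacturer A accounts for 3.3% of pills sold but 17% of the total dose supplied—the same shape of shift the journalists found when they weighted Purdue’s sales by potency.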
Keep in mind when you’re comparing campaigns month-over-month or year-over-year that the same campaign last year may have a number of different circumstances this year. Did you target the same segments? Did you go deeper into your database? Were there circumstances that may be affecting the numbers? Don’t just look at the numbers, but dig deeper to truly understand what can and can’t be compared and what those differences may tell you.
Data isn’t always static—nor should your insights be
The COVID-19 pandemic is the biggest public health crisis in modern history. But amid the non-stop reporting (and opinions) on COVID-19, it’s important to make sure the data you’re listening to and sharing is accurate. Continue to look at the sources of the data and anomalies that may be affecting the data, bearing in mind that things can change not only daily but hourly. As COVID-19 cases spike in a number of states, we’re all trying to understand if it’s because more tests are being administered or the state reopened too early and citizens are not adhering to social distancing guidelines—or if the spike is attributable to a combination of many factors. And these reasons may change in the blink of an eye.
As our understanding of the pandemic has increased, so too has our use of data. As cases have declined in the New York area, the states of New York, New Jersey, and Connecticut collaborated to craft a data-based set of criteria to determine whether individuals traveling to the area from out of state need to quarantine. The current methodology restricts travelers from any state in which the seven-day moving average of new cases exceeds 10 per 100,000 people per day and/or in which the positivity rate of returned tests exceeds 10%. While this approach is well-intended, crafted by experts, and science-based, it doesn’t provide for flexibility when anomalies appear in the data.
For example, the state of Minnesota recently experienced a largely unexplained two-day uptick in laboratory-confirmed cases. While the state’s positivity rate remained below the threshold (thanks to fairly widespread testing availability), the seven-day moving average of new cases now exceeds the 10 per 100,000 people per day threshold, thus landing travelers from the North Star State on the must-quarantine list. Though on the surface one could assume that the pandemic is worsening in Minnesota, many metrics indicate that the situation in the state has remained relatively stable (or even improved) for some time.
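The tri-state rule described above boils down to a simple either/or test, which is exactly why an anomaly in a single metric can trip it. A minimal sketch (the thresholds come from the criteria above; the function name and sample inputs are hypothetical):

```python
def requires_quarantine(avg_daily_new_cases, population, positivity_rate):
    """Return True if travelers from a state land on the quarantine list.

    Per the tri-state criteria described above: quarantine if the seven-day
    moving average of new cases exceeds 10 per 100,000 residents per day,
    or if the test positivity rate exceeds 10%.
    """
    cases_per_100k = avg_daily_new_cases / population * 100_000
    return cases_per_100k > 10 or positivity_rate > 0.10

# Hypothetical numbers resembling the Minnesota situation: positivity stays
# low, but a case spike alone is enough to trigger the restriction.
print(requires_quarantine(avg_daily_new_cases=650,
                          population=5_700_000,
                          positivity_rate=0.05))
```

Because the two conditions are joined by “or,” a short-lived spike in one metric overrides everything the other metric says—there is no room in the rule for the kind of context the Minnesota anomaly demands.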
Use data to tell the whole story
With a constantly evolving situation like the COVID-19 pandemic, it’s important to remember that the data is constantly evolving too, and different data can tell a variety of different—and sometimes opposing—stories. Viewing a wider set of criteria can help you understand the full picture.
Data is critical to many of the decisions you make each day. But remember to dig in and understand where the numbers come from, anomalies that may be affecting them, and the human factor that just might change how you use that data.