Understanding Individual Consumers Goes “Back to the Future”

By Feather Hickox

October 21, 2015. That is Marty McFly's future in the movie Back to the Future Part II. And while – at T-minus 19 months and counting – we are nowhere near hoverboards and cybernetically enhanced children, "Back to the Future" is making a comeback in Big Data circles.

The phrase “back to the future” is being used to describe Big Data techniques that are enabling companies to understand their customers in an “old” way.

Local businesses used to know their customers’ needs, attitudes and propensities to buy. The corner butcher knew which cuts buyers favored and how much they generally spent on meat. Bankers knew which additional financial products might be useful to each depositor.

Today, analytics is being applied to Big Data in ways that make it feasible for businesses to get to know individual customers again. Only now, this knowledge comes from identifying behavior patterns in the vast quantities of data generated by store point-of-sale devices and by online and mobile digital activity.

The constant stream from such activity – abundant, timely and varied – provides the opportunity for marketers to understand their customers more fully. While respecting consumer privacy, companies can develop a more complete picture not only from what customers say (in online comments, ratings and profiles, blog posts, tweets, etc.) but also from what they do (data from in-store and online purchases, mobile payments, web searching and browsing, etc.). They can employ a variety of analytic technologies for capturing, cleaning and transforming this diverse data into useful insights. Their conversations with customers can be guided by more complex decision strategies that take this richer view into account.
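To make that idea concrete, here is a minimal Python sketch of folding "what customers say" and "what they do" into a single per-customer view. The records, field names and metrics are hypothetical stand-ins for illustration, not any particular vendor's schema.

```python
from collections import defaultdict

# Hypothetical inputs: explicit feedback (what customers say) and
# behavioral events (what they do). All field names are illustrative.
stated = [
    {"customer_id": 1, "source": "review", "text": "love the new blend"},
    {"customer_id": 2, "source": "tweet",  "text": "checkout keeps failing"},
]
behavioral = [
    {"customer_id": 1, "channel": "in-store", "category": "coffee", "spend": 14.50},
    {"customer_id": 1, "channel": "online",   "category": "coffee", "spend": 22.00},
    {"customer_id": 2, "channel": "mobile",   "category": "tea",    "spend": 6.25},
]

# Fold both streams into one profile per customer.
profiles = defaultdict(lambda: {"feedback": [], "total_spend": 0.0, "categories": set()})
for item in stated:
    profiles[item["customer_id"]]["feedback"].append(item["text"])
for event in behavioral:
    p = profiles[event["customer_id"]]
    p["total_spend"] += event["spend"]
    p["categories"].add(event["category"])

for cid, profile in sorted(profiles.items()):
    print(cid, profile)
```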

In fact, "back to the future" techniques can build a cohesive view of the customer from internal and external sources. These techniques include the following (a brief, illustrative sketch of each appears after the list):

  • Handling data that hasn't gone through the normalization and structuring process required by traditional relational databases. Alternative approaches to data management (e.g., NewSQL, NoSQL) play an important role here. They expand the types of data that can be analyzed to include semistructured data, such as web logs that reveal browsing activity, and unstructured data, such as text from Facebook pages, tweets and online product reviews.
  • High-velocity stream processing for filtering and capturing incoming data on the fly. Acting on data as it arrives turns it into insights sooner, making conversations with the customer more responsive and relevant.
  • Apache Hadoop. This open-source software framework is being used to write MapReduce programs that analyze heterogeneous data streams, extracting value that would otherwise be difficult to mine.
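To give a flavor of the first technique, here is a toy document-store sketch in Python: records of different shapes coexist without a predefined relational schema. The data and field names are invented for illustration and are not tied to any particular NoSQL product.

```python
import json

# Schema-free "document store" stand-in: records of different shapes
# live side by side, unlike rows in a normalized relational table.
collection = []

# Semistructured: a web log entry parsed into key/value pairs.
log_line = '{"ip": "203.0.113.9", "path": "/products/42", "referrer": "search"}'
collection.append({"type": "weblog", **json.loads(log_line)})

# Unstructured: raw review text kept as-is for later text analytics.
collection.append({"type": "review", "text": "Great blend, but shipping was slow."})

# Query without a fixed schema: filter on whatever fields a document has.
browsers = [d for d in collection if d.get("path", "").startswith("/products")]
print(browsers)
```

The stream-processing idea can be sketched the same way. Below, a plain Python generator stands in for a real streaming engine, filtering events as they arrive rather than after they land in storage; the event names and threshold are hypothetical.

```python
# A toy stand-in for a streaming engine: filter and act on events
# as they arrive instead of landing everything in storage first.
def event_stream():
    yield {"customer_id": 1, "event": "cart_abandoned", "value": 80.0}
    yield {"customer_id": 2, "event": "page_view",      "value": 0.0}
    yield {"customer_id": 3, "event": "cart_abandoned", "value": 15.0}

def high_value_abandons(stream, threshold=50.0):
    for e in stream:
        if e["event"] == "cart_abandoned" and e["value"] >= threshold:
            yield e  # candidate for an immediate, relevant follow-up offer

for event in high_value_abandons(event_stream()):
    print("react now:", event)
```

Finally, the MapReduce pattern that Hadoop runs at scale can be imitated in-process. This sketch walks through the map, shuffle and reduce phases on a tiny hand-made dataset; a real Hadoop job would distribute the same logic across a cluster.

```python
from itertools import groupby
from operator import itemgetter

# In-process imitation of the MapReduce pattern Hadoop runs at scale.
records = [("coffee", 14.50), ("tea", 6.25), ("coffee", 22.00), ("tea", 3.10)]

# Map: emit (key, value) pairs.
mapped = [(category, spend) for category, spend in records]

# Shuffle: group pairs by key (Hadoop does this between the two phases).
mapped.sort(key=itemgetter(0))

# Reduce: aggregate the values for each key.
totals = {key: sum(v for _, v in group)
          for key, group in groupby(mapped, key=itemgetter(0))}
print(totals)  # {'coffee': 36.5, 'tea': 9.35}
```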
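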
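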

These recent "back to the future" technology developments make it possible to do this high-performance computing in massively parallel processes on distributed architectures of inexpensive, standardized hardware and open-source software – bringing down the cost of working with Big Data. Virtual infrastructures, available through cloud-based SaaS providers, are also lowering the entry barriers and reducing startup times. These services enable companies to take advantage of the latest advances in nontraditional data management and processing technologies, while avoiding the time and cost of building them into enterprise infrastructures. All without the need for a souped-up DeLorean.

For more information on this topic, check out our Insights paper (registration required).
