Predictive data science is still in its infancy. Not because the computational power or incisiveness of the modeling is lacking, but because the enterprise is still trying to wrap its arms around the massive volume of data it is collecting and figure out what exactly to do with it.
With this in mind, and another year of making data useful to clients under our collective belt, we wanted to share some of our big learnings from the year with you:
1. Don’t Build a Bridge to Nowhere. Earlier this year, we read a fascinating article about disappointment with Watson, IBM’s uber data analytics platform, and thought it a good starting point for our observations. First and foremost, while we ourselves also sell a predictive data platform, we’ve learned from our engagements that many enterprises really don’t want to buy another platform – at least not initially.
We recently spoke with a prospect on the receiving end of a hard sell from a different Fortune 500 data analytics platform provider, and here’s what that prospect had to say: “We asked the vendor if their platform really worked with sensors and equipment from other companies, and if their platform required their people to run it or if we could run it ourselves.” On both counts, this potential buyer of predictive data science capabilities felt the vendor’s offering was nothing more than an expensive and proprietary trap.
We hear this a lot. Many platform vendors ask prospects to sign a multi-million-dollar contract for a predictive analytics infrastructure that requires open-heart surgery on their IT systems and the creation of duplicate copies of their data sources, all on the promise of solving problems in the future.
So, we offer a speed boat. With little if any IT disruption, we show clients the possibilities data holds in store for them – and exactly how predictive data science will work for their company – by targeting finite business problems with measurable value. For one client, we scoured their historical work orders and built a sophisticated predictive model for pricing future jobs, saving them millions of dollars annually with zero IT involvement.
The client, in this case a large electric utility, loved the results, and brought to us a succession of other discrete projects. With similar focus, we solved each one, keeping their cost low, their benefit high, and their operations running undisturbed.
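To make the pricing engagement concrete, here is a minimal sketch of the kind of model involved. Everything in it is hypothetical – the features (labor hours, material cost) and the figures are invented for illustration, and a production model trained on real work orders would be far richer than an ordinary least-squares fit:

```python
# Hypothetical sketch: fit a pricing model to historical work orders,
# then use it to quote new jobs. Data and features are invented.
import numpy as np

# Each historical work order: [labor_hours, material_cost] -> final price.
X = np.array([
    [10.0,  500.0],
    [25.0, 1200.0],
    [40.0, 2000.0],
    [15.0,  800.0],
    [60.0, 3500.0],
])
y = np.array([1450.0, 3490.0, 5650.0, 2210.0, 9050.0])

# Fit price ~ a*hours + b*materials + c via least squares.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_price(hours: float, materials: float) -> float:
    """Estimate a quote for a new job from the fitted coefficients."""
    return float(coef[0] * hours + coef[1] * materials + coef[2])

quote = predict_price(30.0, 1500.0)
```

The point of the sketch is the shape of the engagement, not the math: a self-contained model built from data the client already has, with no new infrastructure required.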
Those successes have led to an enterprise deployment of predictive data science, which requires platform scale (something we have in spades: for one of our clients we run millions of individual reports, crunching billions of data points, in just 30 minutes, daily!), but the engagement didn’t start with a platform, it started with a tangible solution to a tangible business problem. No bridge required.
2. Learn by Doing. Another big takeaway from 2017 is that predictive data science is a “learn by doing” discipline. Companies that remain on the sidelines, fearing their data isn’t perfect, are losing out.
First, they are not gaining the benefits of predictive data science that come from good data, running the gamut from lowering customer churn and improving marketing results to optimizing operations. Second, they are missing out on a great way to improve their data.
Nothing cleans up data better than putting it to use. When the enterprise focuses on finite business problems with real value attached to them and puts predictive data science on the case, the needed data gets spruced up quickly.
Learning by doing is a great maxim for predictive data science and is also a call to action for those companies missing out on the “good” by waiting for the “perfect.” But here’s a companion benefit, and it will appeal to advanced users of predictive data science and beginners alike: the solutions yielded by predictive data science often can be directed at solving deeper-rooted problems to deliver even greater value, helping the enterprise go on the offensive with data.
We’re seeing this happen with growing frequency.
3. Predictive Data Science as ‘Special Ops’. Performing predictive data science at scale does require platform brawn, but building up to that point necessitates nimbleness and agility. That’s part of the reason we suggest the enterprise start thinking of predictive data science as a Special Ops force, complementing its growing internal army of data analysts.
No matter where a company is in its use of predictive data science, it will benefit from deploying smart people with a wide array of data, mathematical, modeling, and business talents to help find gold in the data. Companies can build their own Special Ops predictive data science teams – data scientist has been one of the top hires in the enterprise for two years running – but such teams take time to recruit, build, gel, and deploy, so partnering with specialists in predictive data science can supplement and fast-track that process.
And while many analytics-platform vendors tout vertical specialization, we think predictive data science is ultimately an agnostic discipline. Predictive data science teams that are well rounded bring fresh thinking to data that can pay huge dividends. For example, we recently solved a client’s business problem using models developed from bird-migration patterns. As it happened, it was one of our data scientists’ familiarity with disease epidemiology, not his knowledge of the client’s industry, that led to the breakthrough.
Seems machines aren’t the only ones doing the learning.