Monday, 07 November 2016 11:31

Man versus machine


We are creating so much data from so many sources it is hard to get your [human] head around it.

So began Kimberly Nevala, SAS Director of Business Strategies, Best Practices and Advisory, and Business Solutions, who was in Australia for a flying visit. We caught up following an interview in April this year at the SAS Global Forum.

Nevala is a well-published expert on most things “Big Data” and she is not afraid to broach the big issues like Big Data for good or evil.

Today we mainly chatted about Big Data and machine learning to get her unique perspective on what is a reasonably controversial subject – big data, machine learning, and the insights and patterns relentless machines can uncover. The interview took the form of a chat, so it is all paraphrased and attributed to Nevala.

Data is coming from everywhere – financial transactions, utilities, government, loyalty programs, Internet search, the myriad of apps. The volume has created a perfect storm – too much data, and it is swamping us.

The question is how do we start to harness that – for good – and generate meaningful, useful insights without being invasive (Nevala is a big advocate of personal privacy).

Nevala says it is about using the machine (computer) to sift through the tsunami of data to present insights – not tonnes of raw data – to data analysts and scientists. If anything, it is a well-oiled machine-human interface, each side dependent on the other for the result – actionable insights.

Machine learning has a little stigma – about machines learning to function without human interaction. Yet Nevala thinks this is the wrong take. Humans must direct the machine in the first place to look for patterns – there is not yet a fully autonomous machine that can just be pointed at an ocean of random data, find insight and determine a net new action. They can, however, do the heavy lifting.

Nevala cited the healthcare industry. In some cases, such as evaluation of medical images, the machine is often as accurate, if not more so, than a human diagnostician. But the machine is still not infallible or 100% accurate. This is where the synergies between man and machine come together. The machine can quickly assimilate masses of data that would be beyond any individual clinician’s scope; make correlations and come up with possible diagnoses and/or treatment options. The skilled clinician will then use those insights to recommend treatment.

Nevala says that machine learning will take on an even more responsible role when data is combined with the Internet of Things (IoT) to add granular information to those insights.

Back to the human/machine interface. Humans need to identify the problems they want a machine to investigate – not solve. Yes, the machine may find insights we didn’t know existed, and by doing so expose a new opportunity or problem to be solved. But machines aren’t autonomously creative, per se. What insights they derive are directly related to the data they are exposed to. Humans also need to evaluate and make determinations on how the machine could or should respond to, or take action on, found insights, if at all.

There may come a time when machine learning becomes more like artificial intelligence, but even if you can set it to “automatic”, it lacks the totality of human experience. AI autonomy must, by definition, be grounded in human experience – not the kind of autonomy we see in science fiction.

For example, marketing is a long-practised skill; we understand it, and people’s motivations to buy. We can point the “machine” at a lake of consumer data to detect patterns and preferred channels for consumption. It can use historical and real-time data to determine that I, as a frequent business traveller and coffee addict, may consistently respond well to a real-time IM that my favourite coffee shop is around the corner. My dad would not welcome this interaction. He brews his coffee at home and will respond to a coupon in the mail, which can also include incentives for other items he, or others like him, buy at the grocery store. The machine is optimising activities across known channels (digital, paper, storefront). It won’t, however, independently create a new interaction channel that doesn’t already exist.

What about the autonomous vehicle? Will it always need a driver?

No. In fact, they probably don’t need a human monitor today. While complicated, the problem space is very well bounded. Optimize and navigate a path from point A to point B utilizing a defined set of paths (the roads) and well-articulated rules and codes of conduct. Taking into account, of course, the often unpredictable behavior of human drivers.

This is machine learning in action. Humans set the parameters, and the computer analyses incoming data (road conditions, telematics like speed, distance to the car in front, etc.) to draw insights it can act on within those parameters. Only the machine can do it faster and more reliably. Better yet, the machine doesn’t get tired, angry or drunk.
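The pattern Nevala describes – humans set the parameters, the machine senses and acts within them – can be sketched in a few lines of toy Python. To be clear, the thresholds and function below are invented for illustration only; they are not SAS’s, or any vehicle maker’s, actual logic:

```python
# Toy sketch: humans set the parameters, the machine acts within them.
# All values here are illustrative, not real vehicle tuning.

SAFE_GAP_SECONDS = 2.0   # human-set rule: keep at least a two-second gap
SPEED_LIMIT_KMH = 100.0  # human-set rule: never exceed the posted limit

def choose_action(speed_kmh: float, gap_metres: float) -> str:
    """Pick an action from sensor readings, within the human-set parameters."""
    if speed_kmh <= 0:
        return "accelerate"                       # stationary: move off
    gap_seconds = gap_metres / (speed_kmh / 3.6)  # convert km/h to m/s
    if gap_seconds < SAFE_GAP_SECONDS:
        return "brake"                            # too close to the car in front
    if speed_kmh < SPEED_LIMIT_KMH:
        return "accelerate"                       # safe gap and under the limit
    return "maintain"                             # at the limit with a safe gap
```

The humans decide *what* safe means (the two constants); the machine merely applies those rules to incoming telematics, tirelessly and consistently – which is the division of labour Nevala argues for.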

In fact, it’s a great example of using a “machine” to emulate the way people should drive – not as they actually do. So could an autonomous car drive without a human overseer? Absolutely. When we let that happen is a matter of the level of trust and confidence we have (or have to develop) in the system.

So what comes next?

Nevala quoted the old saying from Father John Culkin, SJ, a Professor of Communication at Fordham University in New York, “We shape our tools, and after that our tools shape us.” Which she has heard recast as “We shape the system and then the system shapes us.”

To that end, Nevala says she doesn’t know exactly what the future holds. But the immediate frontier is the application of cognitive computing to provide a human-friendly interface to the machine. Machine learning has been a goal for decades, and machine learning applications are already widespread. In a few years, machine learning, especially in big data analysis, will be old hat.

Cognitive computing uses capabilities such as natural language processing and natural language generation to provide a bridge between man and machine, allowing us to interact and communicate, if you will, with the machine in a more intuitive and natural way. That may very well be the key to overcoming the man versus machine barrier for a more collaborative system of man and machine – and it will start the next chapter of this argument.




Ray Shaw


Ray Shaw has had a passion for IT ever since building his first computer in 1980. He is a qualified journalist, hosted a consumer IT-based radio program on ABC radio for 10 years, has developed world-leading software for the events industry, and is smart enough to no longer own a retail computer store!


