Vicky Falconer, Big Data Solutions Specialist for Oracle ANZ, told iTWire that machine learning would affect user interaction "with everything from insurance and domestic energy to healthcare and parking meters. Machine learning is finally here for all to leverage".
"I guess you could say I am a home-grown data scientist. I have a Bachelor of Economics (Economics/Statistics) from ANU and a Grad Dip in Software Development (Java/C++, HD) from Swinburne University of Technology. I have been at Oracle since 2011 and leading its Big Data Solutions since 2012," Falconer said.
"I work with senior executives to help assess the implications of data-driven disruptions on their organisations, and develop the right strategies going forward.
"Most discussions focus on what's changing, how data can drive new business models, how to drive value out of data assets, the cultural challenges of data-driven innovation, and where to start on a two- to five-year innovation strategy."
She said big data touched every industry and every individual in some way – business leaders, IT professionals, governments, and the people they serve – and listed a few obvious examples:
- Retail shoppers receive discounts or store directions delivered via smartphone as they browse a store.
- Online consumers get personalised offers tailored just for them.
- Manufacturers measure the success of their new products in days instead of weeks.
- Drivers can take advantage of easier access to on-street parking based on real-time data and smarter devices.
The remainder of the article is in Falconer's words.
The era of ubiquitous machine learning has arrived
Machine learning is no longer the sole preserve of data scientists. The ability to apply machine learning to vast amounts of data is greatly increasing its importance and wider adoption. We can expect machine learning capabilities to be built into many more tools for both business analysts and end users – impacting how both corporations and governments conduct their business. Machine learning will affect user interaction with everything from insurance and domestic energy to healthcare and parking meters. Machine learning is finally here for all to leverage.
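As a small illustration of how accessible the basics have become, a working classifier no longer requires specialist tooling. The sketch below is a toy 1-nearest-neighbour classifier in plain Python; the data points and labels are invented for illustration and are not from any real dataset.

```python
# Toy 1-nearest-neighbour classifier in plain Python (no ML libraries).
# The training data is a made-up example: classify household energy
# profiles as "low" or "high" use from two invented features.
import math

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    label, _ = min(
        ((lbl, math.dist(features, point)) for features, lbl in train),
        key=lambda pair: pair[1],  # pick the smallest Euclidean distance
    )
    return label

training_data = [
    ((1.0, 2.0), "low"),
    ((1.5, 1.8), "low"),
    ((8.0, 9.0), "high"),
    ((9.5, 8.5), "high"),
]

print(nearest_neighbour(training_data, (2.0, 2.0)))  # low
print(nearest_neighbour(training_data, (9.0, 9.0)))  # high
```

Real analyst-facing tools wrap far more sophisticated models than this, but the principle – label a new observation by its similarity to known examples – is the same.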
When data can't move, bring the cloud to the data
It is not always possible to move big data to an external data centre. Privacy issues, regulations, and data sovereignty concerns often preclude such actions. Sometimes, the volume of data is so great that the network cost of relocating it would exceed any potential benefits. In such instances, the answer is to bring the cloud to the data. Or as it is beginning to be called – processing at the edge.
Increasingly, organisations will need to develop cloud strategies for handling data in multiple locations.
Applications, not just analytics, propel big data adoption
Early use cases for big data technologies focused primarily on IT cost savings and analytic solution patterns. Now, we're seeing a wide variety of industry-specific, business-driven needs empowering a new generation of applications dependent on big data. Increasingly, applications are driving big data adoption.
The Internet of Things (IoT) will integrate with enterprise applications
The Internet of Things is for more than inanimate objects. Everything from providing a higher level of healthcare for patients to enhancing customer experience via mobile applications requires monitoring and acting upon the data that people generate through the devices they interact with.
Enterprises must simplify IoT application development and quickly integrate this data with business applications. By blending new data sources with real-time analytics and behavioural inputs, enterprises are developing a new breed of cloud applications capable of adapting and learning on the fly.
The impact will be felt not only in the business world but also in the exponential growth of the smart city and smart nation projects across the globe.
Data virtualisation will light up dark data
Data silos proliferate in the enterprise on platforms like Hadoop, Spark, and NoSQL databases. Potentially valuable data stays dark because it is hard to access (and also hard to find). Organisations are realising that it's not feasible to move everything into a single repository for unified access and that a different approach is required.
Data virtualisation is emerging to enable real-time big data analytics without the need for data movement.
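The core idea – querying data where it lives rather than copying it into one repository – can be sketched with nothing more than the Python standard library. Below, two separate SQLite files stand in for two independent data silos (in practice these might be Hadoop, a NoSQL store, and a warehouse), and SQLite's ATTACH lets a single query span both without moving any data. All table and file names are invented for illustration.

```python
# Sketch of the data-virtualisation idea: one query over two "silos",
# with no data movement. The silos here are just two SQLite files.
import sqlite3

# Populate silo 1: an orders table.
with sqlite3.connect("sales.db") as s:
    s.execute("CREATE TABLE IF NOT EXISTS orders (customer_id INT, amount REAL)")
    s.execute("DELETE FROM orders")
    s.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 250.0), (2, 99.5)])

# Populate silo 2: a customers table, in a separate database file.
with sqlite3.connect("crm.db") as c:
    c.execute("CREATE TABLE IF NOT EXISTS customers (id INT, name TEXT)")
    c.execute("DELETE FROM customers")
    c.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])

# A single "virtual" query joins across both silos in place.
conn = sqlite3.connect("sales.db")
conn.execute("ATTACH DATABASE 'crm.db' AS crm")
rows = conn.execute(
    "SELECT crm.customers.name, orders.amount "
    "FROM orders JOIN crm.customers ON orders.customer_id = crm.customers.id "
    "ORDER BY crm.customers.name"
).fetchall()
print(rows)  # [('Acme', 250.0), ('Globex', 99.5)]
conn.close()
```

Enterprise data-virtualisation platforms do this federation across heterogeneous engines at scale, with query optimisation and security layered on top, but the shape of the solution is the same: the query travels to the data, not the other way around.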
Businesses board the bus to ride the data highway
Apache Kafka is already building momentum and looks set to hit peak growth in 2017. In case you've not encountered it, Kafka is a means of seamlessly publishing big data event topics, ingesting data into Hadoop, and distributing data to enterprise data consumers. Kafka employs a traditional, well-proven bus-style architecture pattern, but with very large data sets and a wide variety of data structures. This makes it ideal for bringing data into your data lake and providing subscriber access to any events your consumers ought to know about.
Kafka looks set to be the runaway big data technology of 2017.
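Kafka itself needs a running broker cluster, but the bus pattern it implements can be sketched in a few lines. The toy version below (topic and event names are made up for illustration) shows the essential publish/subscribe shape: producers publish events to named topics, and every subscriber to a topic sees every event. Kafka adds what this toy lacks – durable, partitioned logs, replication, and consumers that read at their own pace.

```python
# In-process sketch of the publish/subscribe bus pattern that Kafka
# implements durably and at scale. Names are illustrative only.
from collections import defaultdict

class ToyBus:
    """Minimal event bus: named topics, many subscribers per topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Every subscriber to the topic receives every event, in order.
        for callback in self.subscribers[topic]:
            callback(event)

bus = ToyBus()
seen_by_lake, seen_by_alerts = [], []
bus.subscribe("clickstream", seen_by_lake.append)    # e.g. a data-lake ingester
bus.subscribe("clickstream", seen_by_alerts.append)  # e.g. a real-time consumer

bus.publish("clickstream", {"user": 42, "page": "/home"})
bus.publish("clickstream", {"user": 7, "page": "/cart"})
print(len(seen_by_lake), len(seen_by_alerts))  # 2 2
```

The design point worth noticing is decoupling: the publisher knows nothing about who consumes "clickstream" events, so new consumers (a fraud detector, a dashboard) can be added without touching the producers.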
A boom in pre-packaged integrated cloud data systems
Increasingly, organisations are seeing the value in data labs for experimenting with big data and driving innovation, but uptake has been slow. It is not easy to build a data lab from scratch – whether on-premises or in the cloud. Pre-packaged offerings that include integrated cloud services such as analytics, data science, data wrangling, and data integration are removing the complexity of do-it-yourself solutions.
Expect a boom in pre-packaged, integrated cloud data labs throughout the year.
Cloud-based object stores become a viable alternative to Hadoop HDFS
Object stores have many desirable attributes: availability, replication (across drives, racks, domains, and data centres), data recovery, and backup. They are the cheapest, simplest places to store large volumes of data, and can directly accommodate frameworks like Spark. As object storage technologies become more tightly integrated with big data computing technologies, we see them becoming a repository for big data and a viable alternative to HDFS for many use cases.
Object stores and HDFS can coexist as part of the same data-tiering architecture.
Next-generation compute architectures enable deep learning at cloud scale
The removal of virtualisation layers; acceleration technologies such as GPUs and NVMe; optimal placement of storage and compute; high-capacity, non-blocking networking. None of these things is new, but the convergence of all of them is. Together, they enable cloud architectures that deliver order-of-magnitude improvements in compute, I/O, and network performance.
The result? Deep learning at scale, and easy integration with existing business applications and processes.
Hadoop security is no longer optional
Hadoop deployments and use cases are no longer predominantly experimental. Increasingly, they're business-critical to organisations like yours. As such, Hadoop security is non-optional.
You can expect to deploy multi-level security solutions for your big data projects in the future.