Delivering the keynote at AWS re:Invent 2017 in Las Vegas, Jassy said: “Builders don’t want machine learning to be black-box or cryptic – they want a much easier engagement.”
Amazon has a long heritage of machine learning, he stated, citing examples such as personalised recommendations when shopping, fulfilment automation and inventory management, drones, voice-driven interactions through Alexa, and also through its Amazon Go experimental customer experiences.
However, Amazon is a large company, and most businesses in the world do not have the luxury of an expert ML practitioner on the payroll. Building a model traditionally involves several manual steps:
- collecting and preparing training data;
- choosing and optimising ML algorithms;
- setting up and managing environments for training;
- training and tuning the model;
- deploying the model in production; and
- scaling and managing the production environment.
Jassy stated AWS has reduced this manual effort and complexity, through its new product Amazon SageMaker.
SageMaker allows developers to feed in a data set, select a highly optimised algorithm from the 10 most commonly used (k-means clustering, principal component analysis, neural topic modelling, factorisation machines, linear learner for regression, linear learner for classification, XGBoost, latent Dirichlet allocation, image classification, and Seq2Seq) and an ML framework (Apache MXNet, TensorFlow, Caffe2, CNTK, PyTorch, or Torch), and receive a model with minimum effort.
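As a rough sketch of what this workflow looks like programmatically, the snippet below assembles a training-job request of the shape accepted by boto3's `sagemaker.create_training_job()`. All names, ARNs and S3 paths are hypothetical placeholders, and the actual API call is left commented out since it requires AWS credentials.

```python
# Sketch: launching a SageMaker training job for a built-in algorithm.
# Every identifier below (job name, image URI, role ARN, bucket paths)
# is an invented placeholder for illustration.

def build_training_job_request(job_name, algorithm_image, role_arn,
                               train_s3_uri, output_s3_uri):
    """Assemble the request body for sagemaker.create_training_job()."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            # Region-specific container image for a built-in algorithm.
            "TrainingImage": algorithm_image,
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": train_s3_uri,
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3_uri},
        "ResourceConfig": {
            "InstanceType": "ml.m4.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

request = build_training_job_request(
    "demo-xgboost-job",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
    "arn:aws:iam::123456789012:role/SageMakerRole",
    "s3://my-bucket/train/",
    "s3://my-bucket/output/",
)
# With credentials configured, this would start the job:
# boto3.client("sagemaker").create_training_job(**request)
```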
The developer can review the model and training data, with SageMaker removing another pain point by auto-tuning parameters through what AWS labels “hyperparameter optimisation”. This is a single checkbox option to turn on, Jassy explained, and will cause SageMaker to spin up multiple copies of your model and framework, comparing many instances of the algorithm with varying parameters, determining which gives the most successful result, and also identifying positive features that are fed into subsequent instances of the model.
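The idea behind that checkbox can be illustrated with a toy example. The sketch below uses plain random search over two invented hyperparameters against a stand-in scoring function; SageMaker's own tuning is more sophisticated than this, but the compare-many-copies-and-keep-the-best loop is the core concept.

```python
# Toy illustration of hyperparameter optimisation: try many parameter
# settings, score each "trained" copy, and keep the best performer.
import random

def train_and_score(learning_rate, depth):
    """Stand-in for a real training run: a deterministic toy objective
    that peaks at learning_rate=0.1, depth=5 (maximum score is 0)."""
    return -((learning_rate - 0.1) ** 2) - 0.01 * (depth - 5) ** 2

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "learning_rate": rng.uniform(0.01, 0.5),
            "depth": rng.randint(2, 10),
        }
        score = train_and_score(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(50)
```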
Finally, when ready to deploy, the developer can, with one click, create an intelligent service behind an API, across multiple availability zones with auto-scaling.
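Behind that one click sits an endpoint configuration of roughly the shape sketched below (request bodies matching boto3's SageMaker calls; all names are hypothetical, and the calls themselves are commented out since they need credentials).

```python
# Sketch of what "one-click" deployment sets up: an endpoint
# configuration serving the trained model on multiple instances.
# "demo-config" and "demo-model" are invented placeholder names.

def build_endpoint_config(config_name, model_name):
    """Request body for sagemaker.create_endpoint_config()."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": "ml.m4.xlarge",
            "InitialInstanceCount": 2,  # spread across availability zones
        }],
    }

config = build_endpoint_config("demo-config", "demo-model")
# With credentials configured:
# boto3.client("sagemaker").create_endpoint_config(**config)
# boto3.client("sagemaker").create_endpoint(
#     EndpointName="demo-endpoint", EndpointConfigName="demo-config")
# Callers then reach the model via:
# boto3.client("sagemaker-runtime").invoke_endpoint(
#     EndpointName="demo-endpoint", Body=payload)
```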
“Amazon SageMaker takes away much of the muck of ML,” said Dr Matt Wood, general manager, Artificial Intelligence, AWS. “You can train dozens and dozens more models because it’s so easy to use all the data available to you. You can come up with all sorts of wonderful ideas.”
Amazon SageMaker opens up ML to any developer, without requiring an understanding of data science, Jassy stated. “But can we do more to put ML in the hands of all developers?” he asked.
Jassy next unveiled AWS DeepLens – a physical product, billed as the world’s first wireless video camera with on-board compute optimised for deep learning. “You can go from unboxing to your first inference in less than 10 minutes,” he said.
DeepLens uses Intel-optimised deep learning software and tools and libraries, including the Intel Compute Library for Deep Neural Networks (Intel clDNN) to run real-time computer vision models, powered by an Intel Atom X5 processor with embedded graphics that support object detection and recognition.
AWS DeepLens does not need to upload imagery to the cloud: it performs all its processing within the camera itself, interacting with Amazon SageMaker or AWS Lambda only to send metadata.
Two examples of what you could achieve with DeepLens are to send an alert when your dog jumps on your couch, or to open your garage door when the camera recognises a number plate approaching.
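The dog-on-couch example could plausibly be wired up as below: the camera runs its vision model locally and forwards only detection metadata, which a Lambda-style handler inspects. The event shape, labels, and threshold here are all invented for illustration, not DeepLens's actual schema.

```python
# Hypothetical sketch of the dog-on-couch alert: the camera sends only
# detection metadata (labels plus confidences), and a handler decides
# whether an alert is warranted. All field names are invented.

def handler(event, context=None):
    """event: {"detections": [{"label": str, "confidence": float}, ...]}"""
    labels = {
        d["label"]
        for d in event.get("detections", [])
        if d.get("confidence", 0.0) >= 0.8  # arbitrary example threshold
    }
    if {"dog", "couch"} <= labels:
        # A real deployment might publish this to Amazon SNS.
        return {"alert": True, "message": "Dog detected on the couch"}
    return {"alert": False}

result = handler({"detections": [
    {"label": "dog", "confidence": 0.93},
    {"label": "couch", "confidence": 0.88},
]})
```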
“We are seeing a new wave of innovation throughout the smart home, triggered by advancements in artificial intelligence and machine learning,” said Miles Kingston, general manager of the Smart Home Group at Intel. “DeepLens brings together the full range of Intel’s hardware and software expertise to give developers a powerful tool to create new experiences, providing limitless potential for smart home integrations.”
Amazon SageMaker is available now, while AWS DeepLens is available for pre-order for US$249.
Jassy also announced:
- Amazon Rekognition Video to detect objects, scenes and activities within motion video as well as skeleton modelling-based person tracking;
- Amazon Transcribe for automatic speech recognition and conversion into grammatically correct text, handling multiple languages and speakers;
- Amazon Translate for real-time translation and batch analysis, with automatic language recognition; and
- Amazon Comprehend, to perform fully managed natural language processing that discovers valuable insights from text, recognising entities, key phrases, language and sentiment.
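To give a flavour of these services from a developer's perspective, the sketch below processes a sentiment result of the shape returned by Comprehend's `detect_sentiment` call in boto3. The sample values are invented, and the live call is shown only as a comment since it needs credentials.

```python
# Sketch of consuming Amazon Comprehend sentiment output. The response
# shape mirrors boto3's comprehend.detect_sentiment(); the sample scores
# below are invented for illustration.

def summarise_sentiment(response):
    """Return the dominant sentiment label and its score from a
    detect_sentiment response ({"Sentiment": ..., "SentimentScore": {...}})."""
    sentiment = response["Sentiment"]          # e.g. "POSITIVE"
    score = response["SentimentScore"][sentiment.capitalize()]
    return sentiment, score

sample = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {"Positive": 0.95, "Negative": 0.01,
                       "Neutral": 0.03, "Mixed": 0.01},
}
label, score = summarise_sentiment(sample)
# With credentials configured, a live call would look like:
# boto3.client("comprehend").detect_sentiment(
#     Text="I love re:Invent", LanguageCode="en")
```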
The writer has been attending AWS re:Invent 2017 as a guest of the company.