Amazon SageMaker
Machine learning models that can be built, trained, and deployed at scale.
Fully managed service that allows data scientists and developers to quickly and easily build, train, and deploy machine learning models.
It allows scientists and developers to create machine learning models for intelligent, predictive apps.
It is designed to be highly available, with no maintenance windows or scheduled downtime.
Users can choose the type and number of instances used for the hosted notebook, training, or model hosting.
Offers both batch transform and real-time endpoint inference interfaces.
Supports canary deployments via ProductionVariants, and multiple models can be deployed to a single SageMaker HTTPS endpoint.
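The canary pattern above can be sketched as an endpoint configuration with two weighted ProductionVariants. This is a minimal sketch of the request body a SageMaker CreateEndpointConfig call expects; the model names, instance type, and traffic weights are illustrative, not from the original notes.

```python
# Sketch: an EndpointConfig request with two ProductionVariants, splitting
# traffic 90/10 for a canary-style rollout. All names are placeholders.
def canary_endpoint_config(config_name, current_model, canary_model,
                           canary_weight=0.1, instance_type="ml.m5.large"):
    """Build the request dict for a SageMaker CreateEndpointConfig call."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [
            {
                "VariantName": "current",
                "ModelName": current_model,
                "InitialInstanceCount": 1,
                "InstanceType": instance_type,
                "InitialVariantWeight": 1.0 - canary_weight,  # bulk of traffic
            },
            {
                "VariantName": "canary",
                "ModelName": canary_model,
                "InitialInstanceCount": 1,
                "InstanceType": instance_type,
                "InitialVariantWeight": canary_weight,  # small canary slice
            },
        ],
    }

config = canary_endpoint_config("demo-config", "model-v1", "model-v2")
```

With boto3 this dict would be passed to `sagemaker_client.create_endpoint_config(**config)`; shifting more traffic to the canary is just a matter of updating the variant weights.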
Supports Jupyter notebooks
Users can save their notebook files to the attached ML storage volume.
After saving their files to the attached ML storage volume, users can modify the notebook instance or select a larger instance profile from the SageMaker console.
Includes built-in algorithms for linear regression, logistic regression, and principal component analysis, as well as factorization machines, neural topic modeling, latent Dirichlet allocation, gradient-boosted trees, seq2seq, time-series forecasting, word2vec, and image classification.
Training data is best supplied in the optimized protobuf recordIO format. This enables Pipe mode, which streams data directly from S3, speeding up training start times and reducing disk space requirements.
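Pipe mode is selected per input channel in the training-job configuration. Below is a minimal sketch of such a channel definition; the channel name and S3 URI are placeholders, and the helper function is mine, not part of any SDK.

```python
# Sketch: an InputDataConfig channel for a SageMaker training job that
# streams recordIO-protobuf data from S3 via Pipe mode.
def pipe_mode_channel(channel_name, s3_uri):
    return {
        "ChannelName": channel_name,
        "InputMode": "Pipe",  # stream from S3 instead of downloading ("File" mode)
        "ContentType": "application/x-recordio-protobuf",
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": s3_uri,
                "S3DataDistributionType": "FullyReplicated",
            }
        },
    }

channel = pipe_mode_channel("train", "s3://my-bucket/train/")
```

With boto3, a list of such channels would go into the `InputDataConfig` parameter of `create_training_job`.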
Provides built-in algorithms, pre-built container images, extensions to pre-built container images, and the ability to build your own container image.
Supports users' custom training algorithms through a Docker image that adheres to the specified container interface.
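Part of that container interface is a fixed directory layout: SageMaker mounts configuration and data under /opt/ml inside the container. This is a minimal sketch of how a custom training script might read its hyperparameters; the `base_dir` parameter exists only so the sketch can run outside a real container.

```python
import json
from pathlib import Path

# Sketch: inside a custom training container, SageMaker places hyperparameters
# at /opt/ml/input/config/hyperparameters.json and expects model artifacts to
# be written under /opt/ml/model.
def load_hyperparameters(base_dir="/opt/ml"):
    path = Path(base_dir) / "input" / "config" / "hyperparameters.json"
    with open(path) as f:
        # SageMaker serializes all hyperparameter values as strings.
        return json.load(f)
```

A real train script would parse and cast these string values (e.g. `int(params["epochs"])`) before use.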
Optimized containers are also available for MXNet, TensorFlow, Chainer, and PyTorch.
ML model artifacts, as well as other system artifacts, are encrypted at rest and in transit.
Requests to the API or console are made via a secure (SSL-enabled) connection.
Stores code in ML storage volumes, protected by security groups and optionally encrypted at rest.
SageMaker Neo is a capability that enables machine learning models to run anywhere, in the cloud or at the edge.
Amazon Comprehend
A fully managed service that uses natural language processing (NLP) to uncover insights and relationships in text.
Identifies the language of the text; extracts key phrases, places, brands, and events; detects whether the text is positive or negative; analyzes text using tokenization and parts of speech; and automatically organizes a collection of text files by topic.
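The sentiment capability above returns a label plus per-label scores. This is a sketch of the shape of a Comprehend DetectSentiment response and a small helper for picking the dominant label; the sample scores are invented, and the helper is mine, not part of the service API.

```python
# Sketch: the shape of a Comprehend DetectSentiment response.
# The score values below are made up for illustration.
sample_response = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {"Positive": 0.93, "Negative": 0.02,
                       "Neutral": 0.04, "Mixed": 0.01},
}

def dominant_sentiment(response):
    """Return the sentiment label with the highest confidence score."""
    scores = response["SentimentScore"]
    return max(scores, key=scores.get)
```

With boto3, such a response would come from `comprehend.detect_sentiment(Text=..., LanguageCode="en")`.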
Can analyze a set of documents and other text files (such as social media posts) to automatically organize them by relevant keywords or topics.
Amazon Lex
This service allows you to build conversational interfaces with voice and text.
Provides advanced deep learning functions for automatic speech recognition (ASR), which converts speech to text, and natural language understanding (NLU), which recognizes the intent of the text. This allows you to create highly engaging applications with lifelike conversations and rich user experiences.
Common use cases for Lex include application/transactional bots, informational bots, enterprise productivity bots, and device control bots.
Integrates with Cognito for user authentication, Lambda for intent fulfillment, and Polly for text-to-speech.
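Lambda fulfillment works by returning a response in the structure Lex expects. This is a minimal sketch of a Lex (V1-style) fulfillment handler; the intent name and confirmation text are illustrative.

```python
# Sketch: a minimal Lambda fulfillment handler returning the response shape
# Lex (V1) expects. The confirmation message is a placeholder.
def lambda_handler(event, context):
    intent = event["currentIntent"]["name"]
    return {
        "dialogAction": {
            "type": "Close",                 # end the conversation turn
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": f"Done! Intent '{intent}' was fulfilled.",
            },
        }
    }
```

Lex invokes this handler with the parsed intent and slot values; the `dialogAction` in the return value tells Lex how to continue the conversation.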
Scaling is tailored to customer needs, and no bandwidth restrictions are imposed.
It is a fully managed service, so users don't have to manage scaling resources or maintain code.
Amazon Polly
Uses deep learning to improve over time.
Turns text into speech.
Advanced deep learning technology is used to synthesize speech that sounds human-like.
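A Polly synthesis call takes the text, an output format, and a voice. This is a sketch of the parameters a SynthesizeSpeech request takes; the voice ID and default format are illustrative choices, and the helper function is mine.

```python
# Sketch: the parameters of a Polly SynthesizeSpeech request. With boto3 this
# would be passed as polly.synthesize_speech(**params); "Joanna" is just one
# of Polly's built-in voices, chosen here for illustration.
def synthesize_request(text, voice_id="Joanna", output_format="mp3"):
    return {
        "Text": text,
        "VoiceId": voice_id,
        "OutputFormat": output_format,  # e.g. "mp3", "ogg_vorbis", or "pcm"
    }

params = synthesize_request("Hello, world!")
```

The response's audio stream would then be written to a file or played back directly.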
Amazon Rekognition
Analyzes images and videos.
Identifies objects, people, scenes, text, and activities in images.