If your head is spinning after Andy Jassy’s re:Invent Keynote this morning, don’t worry – you’re not the only one. AWS announced nearly two dozen major features and services today, and this list includes some real game changers. This morning’s presentation was one of the biggest events of the conference, but it did fall during work hours for those of us who couldn’t make the trip. If you were one of the unlucky ones who had to leave the stream on in the background while you worked, here’s a recap of Amazon’s biggest announcements, broken down by category.

Containers

Containers are one of the hottest technologies in cloud computing today, and it’s easy to see why. You build your application, package it, and then deploy it wherever you want. But as powerful as containers are, managing them at scale can be a real chore. We’ve already got ECS (Elastic Container Service) to manage containerized workloads on AWS, and as of today, we can look forward to a couple more.

Amazon Elastic Container Service for Kubernetes (EKS)

Kubernetes, an open source container orchestration tool, has been growing in popularity for several years now, but it has never been easy to integrate with AWS. Well, the people have spoken and Amazon has listened. EKS is a managed Kubernetes service that eliminates the need to run Kubernetes yourself on EC2 instances you set up and maintain. It’s hybrid cloud compatible, automatically deploys across multiple availability zones, and best of all, it integrates with the rest of the AWS platform, in keeping with Amazon’s goal of simple infrastructure management.
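
To give a sense of what “managed” means here, below is a minimal sketch using the boto3 EKS client. EKS is still in preview, so treat this as the shape of the API rather than gospel – the cluster name, IAM role ARN, subnet IDs, and security group ID are all placeholders. Everything this one call stands up is the control plane you’d otherwise have to run yourself.

```python
import boto3

eks = boto3.client('eks', region_name='us-west-2')

# All identifiers below are hypothetical placeholders.
response = eks.create_cluster(
    name='demo-cluster',
    roleArn='arn:aws:iam::123456789012:role/eks-service-role',
    resourcesVpcConfig={
        'subnetIds': ['subnet-aaaa1111', 'subnet-bbbb2222'],
        'securityGroupIds': ['sg-cccc3333'],
    },
)
print(response['cluster']['status'])  # 'CREATING' while the control plane spins up
```

Once the cluster is active, you point a standard kubectl at its endpoint – your existing Kubernetes tooling and manifests work unchanged.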

Managed Kubernetes has been one of the most commonly requested features recently, so this announcement doesn’t come as a huge surprise. EKS is currently in preview, and you can get all the details here.

AWS Fargate

As I mentioned, container management at scale is a tough task, but AWS Fargate looks to make it simpler than ever. You can think of it like this – Fargate does for container workloads what Elastic Beanstalk does for applications. It’s a fully managed container service that handles the underlying infrastructure for you, working with ECS today and EKS in the future. You package your workload, configure the resources you want it to run on, and upload it. There’s no cluster management, and you never have to think about servers. Fargate aims to be a total container management solution, and a really nice extension of ECS and EKS.
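
Here’s a rough sketch of what “no cluster management” looks like in practice, using the boto3 ECS client. The task definition, subnet, and security group below are placeholders – the only infrastructure decision you actually make is the launch type.

```python
import boto3

ecs = boto3.client('ecs', region_name='us-east-1')

# Launch a containerized task without provisioning or managing any EC2 instances.
ecs.run_task(
    cluster='default',
    taskDefinition='my-web-app:1',      # a task definition you've registered (placeholder)
    launchType='FARGATE',               # Fargate supplies and manages the hosts
    count=1,
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-aaaa1111'],        # placeholder
            'securityGroups': ['sg-bbbb2222'],     # placeholder
            'assignPublicIp': 'ENABLED',
        }
    },
)
```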

Fargate is available today in the Northern Virginia region, and you can get more information on the AWS blog.

Databases

We don’t often think of databases as among the most glamorous parts of our infrastructure. But there’s no denying that, without them, most of our work just wouldn’t be possible. Amazon already offers a few great database solutions, including RDS, Aurora, and DynamoDB, but in the ever-changing cloud landscape, there’s always room to improve. Here are a couple of the changes we learned about today.

Aurora Multi-Master

In highly available infrastructure, database clusters are key to redundant, fault tolerant data storage. Aurora, Amazon’s relational database engine, has not supported multi-master replication…until now. With Aurora Multi-Master, you’ll be able to perform reads and writes to master databases across availability zones (and across regions starting in 2018).

Previously, Aurora could fail over within about a second for reads if an instance in a cluster failed. The problem was with writes – if the master instance failed, failover could take up to 30 seconds. While that sounds like an eternity, it’s actually not terrible compared to other engines – but Amazon thought they could do better, so they created Multi-Master. The best part? It’s still fully SQL-compatible, so you can start using it without rewriting your existing code.

Aurora Serverless

Everything else is serverless now – why not databases? With Aurora Serverless, they are. Built on the existing Aurora database engine, Aurora Serverless aims to cut costs and dramatically improve scalability by scaling up and down automatically, eliminating the need to manually provision new instances. Your database starts up on demand, shuts down when it’s not in use…and when it is in use, you’re billed by the second.
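
Since the service is still in preview, take this as a sketch of the idea rather than a recipe: with the boto3 RDS client, a serverless cluster is essentially an engine mode plus a scaling policy. All identifiers and credentials below are placeholders.

```python
import boto3

rds = boto3.client('rds', region_name='us-east-1')

rds.create_db_cluster(
    DBClusterIdentifier='demo-serverless',    # placeholder
    Engine='aurora',                          # MySQL-compatible Aurora
    EngineMode='serverless',                  # no instances to provision
    MasterUsername='admin',
    MasterUserPassword='replace-me',          # placeholder
    ScalingConfiguration={
        'MinCapacity': 2,                     # capacity units, not instances
        'MaxCapacity': 16,
        'AutoPause': True,                    # stop billing when idle...
        'SecondsUntilAutoPause': 300,         # ...after 5 minutes of inactivity
    },
)
```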

It’s almost impossible to overstate how cool Aurora Serverless is. For medium and large organizations, it’s practically an automatic cost-saving tool – your database knows to stop running when it’s not being used, saving you a minimum of 8 hours of cost per day, per instance (assuming your developers sleep 8 hours a day, which might not be a fair assumption to make 🙂). For new developers or owners of small apps and websites, the benefit is the same, but at a smaller scale.

Aurora Serverless was one of my personal favorite announcements this morning, and is currently in preview with a scheduled launch in early 2018.

DynamoDB Global Tables

Aurora wasn’t the only database service to get a nice upgrade. With DynamoDB Global Tables, we’ll have access to what Amazon calls the first fully managed, multi-master, multi-region database system in the world. I know that’s a mouthful, but the benefits are similar to Aurora Multi-Master. Global Tables let you easily distribute your DynamoDB tables across multiple regions. That improves availability through added redundancy, but it also offers performance benefits – more database locations mean lower-latency connections for your users.
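
A minimal boto3 sketch (table and region names are placeholders): you create identically named tables with streams enabled in each region, then join them into one global table.

```python
import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# Prerequisite: a table named 'Users' already exists in each region below,
# with DynamoDB Streams enabled (NEW_AND_OLD_IMAGES).
dynamodb.create_global_table(
    GlobalTableName='Users',              # placeholder
    ReplicationGroup=[
        {'RegionName': 'us-east-1'},
        {'RegionName': 'eu-west-1'},
    ],
)
```

After that, writes to either region replicate to the other automatically, and your application can read and write to whichever replica is closest.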

DynamoDB Global Tables can be used immediately. For information on how to get started, see the official AWS page.

DynamoDB On-Demand Backup

DynamoDB picked up another new feature this morning as well – On-Demand Backup. The concept is pretty self-explanatory: you can create backups (and restore from them) with one click or API call. This is a huge benefit to organizations subject to archiving regulations, as backups no longer have to impact your table’s performance – it’s all done behind the scenes.
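
The API really is that small. A boto3 sketch, using placeholder table and backup names:

```python
import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# Take a full backup of a table with a single call – no impact on live traffic.
backup = dynamodb.create_backup(
    TableName='Users',                  # placeholder
    BackupName='users-pre-migration',   # placeholder
)

# Restoring is just as simple: one call creates a new table from the backup.
dynamodb.restore_table_from_backup(
    TargetTableName='Users-restored',
    BackupArn=backup['BackupDetails']['BackupArn'],
)
```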

The feature is currently limited to several US regions and Ireland, and is expected to roll out on an account-by-account basis. See Jeff Barr’s blog post for more details, as well as some pricing guidelines to be aware of.

Amazon Neptune

If you couldn’t tell from yesterday’s post about AppSync, I am quickly learning to love GraphQL and graph databases. AppSync supports real-time GraphQL queries on existing data sources, but it kind of felt like something was missing… That’s where Amazon Neptune comes in.

Neptune is a fully managed graph database service. Existing offerings have their problems, scalability and availability primary among them. With Amazon Neptune, you can store billions of relationships and query them with millisecond latency. It supports both the Apache TinkerPop and W3C RDF graph models, so the tradeoffs you may have made with existing commercial solutions are no longer an issue.
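
For a taste of the TinkerPop side, here’s a hedged sketch using the gremlinpython driver against a hypothetical Neptune endpoint (Neptune speaks Gremlin over WebSockets on port 8182; the endpoint below is a placeholder):

```python
from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.graph_traversal import __

# Hypothetical cluster endpoint – substitute your own.
conn = DriverRemoteConnection(
    'wss://my-neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com:8182/gremlin',
    'g',
)
g = Graph().traversal().withRemote(conn)

# Add two vertices and an edge, then walk the relationship back.
g.addV('person').property('name', 'alice').next()
g.addV('person').property('name', 'bob').next()
g.V().has('name', 'alice').addE('knows').to(__.V().has('name', 'bob')).next()

print(g.V().has('name', 'alice').out('knows').values('name').toList())  # ['bob']

conn.close()
```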

I should have seen this announcement coming, but either way, I’m extremely excited to get started with Amazon Neptune. It’s currently in preview, and you can find all the details here.

Big Data

Running a large-scale organization is all about data…specifically, big data. Even just a decade ago, analytics were a “nice to have” when making complex business decisions. Today, they’re absolutely necessary to stay competitive. And AWS announced a couple of features that will be extremely helpful in this department.

S3 Select and Glacier Select

These are two separate features, but they serve the same purpose. S3 is an ideal foundation for a “data lake” because of its durability, scalability, and availability, among other things, while Glacier provides archive storage for S3 objects. But running queries on massive datasets takes time, and to make the best decisions, that time needs to be as short as possible.

With S3 Select, you use simple SQL expressions to pull out only the specific parts of an object you actually need. Glacier Select works the same way, but its real benefit is being able to include archives in your data lake. Not only can you access your data faster (up to 4.5x faster, according to Andy Jassy, and about 5x according to an AWS blog post), you can query more of it. More data means better analytics, and better analytics means better decisions.
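
Here’s a boto3 sketch of S3 Select filtering a CSV object server-side; the bucket, key, and column names are placeholders:

```python
import boto3

s3 = boto3.client('s3')

# Ask S3 to filter the object server-side and return only matching rows.
response = s3.select_object_content(
    Bucket='my-data-lake',                # placeholder
    Key='logs/2017/11/events.csv',        # placeholder
    ExpressionType='SQL',
    Expression="SELECT s.user_id, s.item FROM S3Object s WHERE s.event = 'purchase'",
    InputSerialization={'CSV': {'FileHeaderInfo': 'USE'}},
    OutputSerialization={'CSV': {}},
)

# Results stream back as events; only the filtered bytes cross the wire.
for event in response['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode('utf-8'), end='')
```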

Machine Learning

Everyone seems to want to get their hands on machine learning tools these days, and I can’t blame them. Machine learning allows computers to do things by “learning” rather than being explicitly programmed. The implications of this field could fill an entire blog post (note to self…), but a few practical applications include image recognition, natural language processing, and fraud detection.

The problem with machine learning is that it’s complicated. Most organizations don’t have the experts on hand to implement it, at least at scale, and Amazon wants to change that. Before we dive into the announcements, I’ll point out the three “layers” of machine learning implementation, according to Andy Jassy:

  1. Frameworks and interfaces – the actual tools experts use, such as TensorFlow
  2. Platform services – a combination of preset frameworks and interfaces, designed to be more accessible to developers
  3. Application services – tools that can plug into existing applications to serve a specific machine learning purpose

Amazon doesn’t like to reinvent the wheel, so their newest offerings cover platform and application services by integrating with existing frameworks and interfaces. I promise this will make more sense with some context, so without further ado…

Amazon SageMaker

Amazon SageMaker is probably the biggest, most comprehensive service Amazon announced today. Its goal is to let you build, train, and deploy machine learning models end to end, which is pretty ambitious, so I’ll break it down into its component processes to illustrate how it works.

Build

SageMaker comes with prebuilt notebooks (built on Jupyter) that solve common problems in machine learning. Amazon built ten algorithms from scratch to address these problems, and they can be applied directly to whatever you’re trying to accomplish. As with many other services, you can also import your own if you need a custom solution.

Train

One of the big benefits of SageMaker is “one-click training.” You specify the location of your dataset in S3, choose an instance type to run the computation, and SageMaker does all the heavy lifting, setting up the algorithms and running your training job. Once it finishes, it even tears down the cluster for you, so there’s almost no infrastructure management involved. SageMaker also offers hyperparameter optimization – check a box and it spins up multiple copies of your model to search for the best settings. Quite literally, it uses machine learning to inform (and improve) your machine learning model.
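
Stripped down, “one-click training” maps to a single API call. A hedged boto3 sketch – the container image, IAM role, and S3 paths are all placeholders:

```python
import boto3

sagemaker = boto3.client('sagemaker', region_name='us-east-1')

sagemaker.create_training_job(
    TrainingJobName='demo-training-job',                         # placeholder
    AlgorithmSpecification={
        'TrainingImage': '<algorithm-container-image>',          # placeholder
        'TrainingInputMode': 'File',
    },
    RoleArn='arn:aws:iam::123456789012:role/SageMakerRole',      # placeholder
    InputDataConfig=[{
        'ChannelName': 'train',
        'DataSource': {'S3DataSource': {
            'S3DataType': 'S3Prefix',
            'S3Uri': 's3://my-bucket/training-data/',            # placeholder
            'S3DataDistributionType': 'FullyReplicated',
        }},
    }],
    OutputDataConfig={'S3OutputPath': 's3://my-bucket/model-output/'},
    ResourceConfig={
        'InstanceType': 'ml.m4.xlarge',    # the compute you chose
        'InstanceCount': 1,
        'VolumeSizeInGB': 10,
    },
    StoppingCondition={'MaxRuntimeInSeconds': 3600},
)
# SageMaker provisions the cluster, runs the training, writes the model
# artifact to S3, and tears everything down when it's done.
```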

Deploy

Again, this is one click. To integrate your model into an application, you just set the instance type and minimum/maximum node counts for your cluster. SageMaker then gives you secure endpoints to connect to your app, and that’s it! The cluster you create is fully managed, including auto scaling. You can even train your model elsewhere and import it into SageMaker just for the management tools.
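
Under the hood, that one click corresponds to three small calls: register the model, describe the serving fleet, and create the endpoint. A boto3 sketch with placeholder names:

```python
import boto3

sagemaker = boto3.client('sagemaker', region_name='us-east-1')

sagemaker.create_model(
    ModelName='demo-model',
    PrimaryContainer={
        'Image': '<inference-container-image>',                       # placeholder
        'ModelDataUrl': 's3://my-bucket/model-output/model.tar.gz',   # placeholder
    },
    ExecutionRoleArn='arn:aws:iam::123456789012:role/SageMakerRole',  # placeholder
)

sagemaker.create_endpoint_config(
    EndpointConfigName='demo-endpoint-config',
    ProductionVariants=[{
        'VariantName': 'AllTraffic',
        'ModelName': 'demo-model',
        'InstanceType': 'ml.m4.xlarge',
        'InitialInstanceCount': 1,        # auto scaling can grow this fleet
    }],
)

sagemaker.create_endpoint(
    EndpointName='demo-endpoint',
    EndpointConfigName='demo-endpoint-config',
)
# Your app then invokes the secure endpoint via the sagemaker-runtime client.
```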

Whew… that seems like a lot, but Amazon SageMaker does a lot. Behind the scenes, it integrates with existing tools like TensorFlow (which you can specify) to remove the barrier that keeps everyday developers from getting more involved with machine learning. The applications of a tool like SageMaker really are endless – it’s powerful enough to run at scale, and simple enough for a relative beginner.

Oh, and one more thing – Amazon SageMaker includes free-tier-eligible options. Check it out here.

AWS DeepLens

Finally, we get to Amazon’s first hardware announcement of the week. DeepLens is a video camera, but it’s also much more: a fully loaded device with onboard compute power optimized for deep learning. To understand what that means, consider how video processing used to work: you’d capture video with a device, stream it to the cloud, and process it there. With AWS DeepLens, the process works in reverse. You load your machine learning model onto the device and run it there. Since it comes with sample code, you can go from unboxing to running inferences in about 10 minutes.

DeepLens also integrates with SageMaker (of course) and Lambda. A few devices will be given out this week to ML track attendees at re:Invent, but you can preorder your own DeepLens on Amazon for $249. Devices are expected to ship in 2018. For more info, including hardware specs, see the product page.

Amazon Rekognition Video

Up until now, we’ve been using Amazon Rekognition for, well, image recognition (here’s a fun project we did to illustrate). That’s a simplification, but the point is that the service didn’t support video…until today. With Amazon Rekognition Video, you can process real-time and batch video to detect objects, people, activities, and more. To give a couple of practical examples, it could be used to detect inappropriate content or check surveillance footage for missing people.
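
Label detection on stored video is a good first taste. A boto3 sketch with a placeholder bucket – the job is asynchronous, so in production you’d wait for the SNS completion notification rather than poll:

```python
import boto3
import time

rekognition = boto3.client('rekognition', region_name='us-east-1')

# Start an asynchronous label-detection job on a video stored in S3.
job = rekognition.start_label_detection(
    Video={'S3Object': {'Bucket': 'my-videos', 'Name': 'lobby-cam.mp4'}},  # placeholder
)

# Poll for completion (demo only – prefer SNS notifications in production).
while True:
    result = rekognition.get_label_detection(JobId=job['JobId'])
    if result['JobStatus'] != 'IN_PROGRESS':
        break
    time.sleep(10)

for label in result['Labels']:
    print(label['Timestamp'], label['Label']['Name'], label['Label']['Confidence'])
```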

The service is continually trained, meaning that it gets “smarter” as more people use it. What’s different from the previous two services is that Rekognition Video doesn’t need a custom learning model – it’s an application service that is ready to integrate with your existing apps, so you can get started with it right away. Rekognition Video is available in select regions today – here’s more info.

Amazon Kinesis Video Streams

Kinesis is already well known for its real-time streaming capabilities, and today, support for video was added. The service integrates with Rekognition Video (as an input source) and comes with an SDK that manufacturers can use to integrate it directly into their devices. Kinesis Video Streams is available immediately, and you can find more info here.
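
Creating a stream takes a couple of boto3 calls (the stream name is a placeholder); producers then push media to the returned endpoint, typically via the producer SDK running on the device itself:

```python
import boto3

kinesisvideo = boto3.client('kinesisvideo', region_name='us-east-1')

# Create a video stream with 24 hours of retention.
kinesisvideo.create_stream(
    StreamName='front-door-cam',        # placeholder
    DataRetentionInHours=24,
)

# Look up the endpoint a producer should use to push media into the stream.
endpoint = kinesisvideo.get_data_endpoint(
    StreamName='front-door-cam',
    APIName='PUT_MEDIA',
)['DataEndpoint']
print(endpoint)
```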

Amazon Transcribe

Transcribe does exactly what it says – it converts speech to text. As someone who’s done a number of interviews and typed them out by hand, this is a service I’ll be using right away. Amazon Transcribe supports long-form audio in multiple languages, adds intelligent punctuation and formatting, generates timestamps, and even recognizes multiple speakers. It’s currently in preview, and you can check it out here.
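
The workflow is a single asynchronous job. A hedged boto3 sketch with placeholder names:

```python
import boto3

transcribe = boto3.client('transcribe', region_name='us-east-1')

# Kick off an asynchronous transcription of an audio file stored in S3.
transcribe.start_transcription_job(
    TranscriptionJobName='interview-001',                          # placeholder
    LanguageCode='en-US',
    MediaFormat='mp3',
    Media={'MediaFileUri': 's3://my-bucket/audio/interview.mp3'},  # placeholder
)

# Later, check the job; once complete, the result includes a transcript URI.
job = transcribe.get_transcription_job(TranscriptionJobName='interview-001')
print(job['TranscriptionJob']['TranscriptionJobStatus'])
```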

Amazon Translate

Translate is fairly self-explanatory as well – it translates text from one language to another. In addition to batches of text from S3, it offers real-time translation, which is amazing news for those looking to use it in customer interactions. Because AWS designs its services to integrate fully with one another, the potential applications of Translate are really exciting. For instance, you might hook it up to Polly to create an application that can speak to users in multiple languages, or use Lex to create a translation chatbot. There are so many possibilities, and you can find more information on the AWS blog.
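
The real-time API is a single call. A minimal boto3 sketch:

```python
import boto3

translate = boto3.client('translate', region_name='us-east-1')

response = translate.translate_text(
    Text='Hello! How can I help you today?',
    SourceLanguageCode='en',
    TargetLanguageCode='es',
)
print(response['TranslatedText'])  # e.g. '¡Hola! ¿Cómo puedo ayudarte hoy?'
```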

Amazon Comprehend

This is another of my top picks from the day – Comprehend is a fully managed natural language processing service. Here’s how it works: you feed it data from your lake (S3, most likely) via an API, and Comprehend returns four elements for analysis:

  1. Entities – Things like people, dates, and specific places
  2. Key phrases – Based on the content of the text, Comprehend picks out the “most important” sets of words
  3. Language – Automatic detection of the language used
  4. Sentiment – Is the text saying something positive or negative?

Analysis of the text is still up to you, but Comprehend makes it easier to pick out relevant information to actually analyze. For example, you might use Comprehend to process a set of articles, then analyze them based on key phrases to sort them into categories in order to provide recommendations in a news application.
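
All four elements above map to one API call each. A boto3 sketch on a sample sentence:

```python
import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')

text = 'Amazon announced several new services at re:Invent in Las Vegas this week.'

print(comprehend.detect_dominant_language(Text=text)['Languages'])
print(comprehend.detect_entities(Text=text, LanguageCode='en')['Entities'])
print(comprehend.detect_key_phrases(Text=text, LanguageCode='en')['KeyPhrases'])
print(comprehend.detect_sentiment(Text=text, LanguageCode='en')['Sentiment'])
```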

Like the last few services, Comprehend is an application service. You can plug it into your existing applications right away – check here for more details.

Internet of Things (IoT)

Over the last several years, something amazing has happened – billions of devices have been deployed with limited onboard resources, and together with the cloud, they’re making up a whole new class of technology. This is IoT, or the internet of things. In Amazon’s own words, the value of IoT comes from “closing the gap between the physical and digital world in self-reinforcing and self-improving systems.” And today, they announced five new services to make this a reality.

AWS IoT 1-Click

So you have a device and you want it to do something. This used to mean programming the device manually, but with AWS IoT 1-Click, it’s, well, one click. You choose from a list of preconfigured devices, select a Lambda function that you’ve already created (or a preset) and press a button to deploy that function trigger to your device. IoT 1-Click uses a mobile app as its interface, which makes it a great option for managing small fleets of custom devices around your home, like buttons that turn on the lights or order groceries. For more information, check out the service page.

AWS IoT Device Management

IoT Device Management is similar to 1-Click, but at a larger scale. This service allows you to onboard, deploy, and manage your fleet of devices all from a single location. Some features include organizing inventory, querying the fleet for troubleshooting, and remotely deploying updates. The real power, however, comes with the ability to take action on subsets of your devices, not just all of them at once. If a handful of your IoT devices malfunction, for example, you can update just those ones without having to redeploy the entire fleet. Here’s more information.
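
One way this looks in code, sketched with the boto3 IoT client (group, device, and job names are placeholders): put the affected devices in a thing group, then target a job at just that group.

```python
import boto3

iot = boto3.client('iot', region_name='us-east-1')

# Group a subset of the fleet so actions can target just those devices.
iot.create_thing_group(thingGroupName='sensors-building-7')
iot.add_thing_to_thing_group(
    thingGroupName='sensors-building-7',
    thingName='sensor-0042',             # placeholder
)

# Push a remote update job to that group only – not the whole fleet.
iot.create_job(
    jobId='firmware-update-001',
    targets=['arn:aws:iot:us-east-1:123456789012:thinggroup/sensors-building-7'],
    documentSource='https://s3.amazonaws.com/my-bucket/update-job.json',  # placeholder
)
```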

AWS IoT Device Defender

If the first two IoT services were “nice to have,” IoT Device Defender is absolutely essential. Many of the DDoS attacks we’ve seen in recent years have utilized unsecured IoT devices, and the results have been annoying at best and disastrous at worst. IoT Device Defender allows you to set device policies, audit them, and monitor behaviors at an individual level to identify anomalies and out-of-compliance behavior. It can also send you automatic alerts when it detects a problem. Security has always been the single greatest criticism of IoT, and this new AWS service looks like a great step toward fixing that. IoT Device Defender is scheduled for release in 2018.

AWS IoT Analytics

IoT Analytics might not have a cool name like Athena or Neptune, but it may be one of the most powerful services announced today. It preprocesses data from your devices before writing it to a time-series data store. Traditionally, IoT devices pick up a lot of “noisy” data, like temperature and humidity readings, resulting in raw, unstructured information that’s very difficult to process. Up until now, that’s just been the cost of doing business, so to speak. IoT Analytics wants to change that by enriching the captured data to allow for more efficient processing and, as a result, more complex decision making. Check here for all the details.

Amazon FreeRTOS

Not all IoT devices are created equal. While larger devices often come with a full onboard CPU, smaller ones tend to use an MCU (microcontroller unit). The latter class of devices outnumbers the former by about 40:1, but they still need an operating system. Amazon has created its own version of FreeRTOS (an operating system commonly used on these devices), and it’s got some awesome features. Amazon FreeRTOS comes with prepackaged libraries to connect to AWS services, update the device, and secure it. It also lets you easily send data to the cloud for further analysis.

I’ll touch on this more in a moment, but one of the coolest features, in my opinion, is connectivity to nearby AWS Greengrass devices. If your IoT device is unable to connect to the cloud, or you just want a lower-latency connection, you can send its data to a nearby Greengrass device for processing instead. This leads us nicely into the final announcement of the keynote, but more information about Amazon FreeRTOS can be found here.

AWS Greengrass Machine Learning Inference

Greengrass has been a staple of Amazon’s IoT offerings for a while now – it’s software that lets you run local compute and messaging, and sync data across IoT devices, all managed from a central location. If this sounds complex, here’s a quick example:

Say you have a collection of IoT sensors deployed in the field, along with a Greengrass device. Rather than sending data straight to the cloud, the sensors can connect to the Greengrass device directly and have it perform some operation for them. This is done locally – the sensors don’t need to be connected to the public internet to communicate with Greengrass. This saves time by lowering connection latency, and money by filtering data from the sensors before sending it all to the cloud for processing.

With Machine Learning Inference, the Greengrass device still operates the same way – at the edge of your network. But it can now apply machine learning models in the field. For example, perhaps you have an IoT sensor that takes an action in response to a voice command. Before, you’d have to send that audio to the cloud for processing, then back to the sensor, which would then trigger the action (likely making another round trip to the cloud). With Machine Learning Inference on a Greengrass device, you can do all of that locally, resulting in much faster response times. For more details, check out the Greengrass feature page.

Recap

These were just the product announcements from Andy Jassy’s keynote, but the talk itself included quite a bit more! Today’s presentation really highlighted the power of the AWS platform, and the ways it’s changing the world we live in. We also heard from a few guests, including Mark Okerstrom, CEO of Expedia, who had some powerful words about the ways the cloud is connecting people – not just computers.

The takeaway from today’s talk was pretty simple. The cloud is enabling us to do amazing things right now and the longer we wait to adopt these new technologies, the farther behind we fall. Most of the features that were announced today would have sounded like science fiction even just 3-4 years ago. The cloud is enabling us to do things that once seemed impossible, and the only limitation is our skill in how we use it.

That’s where we come in. At Linux Academy and Cloud Assessments, we appreciate the power of platforms like AWS, and we want to ensure that everyone has the opportunity to use them to create a better world. You can even get started today by signing up for a free account. Or if you’re at re:Invent, stop by our booth (#737 and 738 in the Venetian). Worst case scenario, you walk away with a free t-shirt. Best case, you change your life.

We’ll be back tomorrow with a recap of Werner Vogels’ keynote speech (and hopefully more cool announcements). Stay tuned!

About This Author

Phil Zona is a technical writer for Cloud Assessments. When he's not writing, he enjoys web development, cooking (and eating), and watching videos of animals behaving like humans.
