The first time we heard of AWS Lambda we got excited. That’s because it sounded like a sleek new sports car to us. As it turns out, it is a serverless computing service provided by Amazon. Just imagine the things you could do with that kind of compute on tap. Well, we’re gonna go through some of that right now.

1 – So What Exactly Does Serverless Mean?

First off, it’s such a new word that Webster’s hasn’t even gotten a hold of it to define and twist into their format. The simple explanation – and that’s all we’re going to go with – is that you write your code. You upload it to Lambda. Lambda does the rest. Done. Okay, we have to explain that a bit further. Lambda will automatically run whatever code you feed to it, and it does this without you having to provision or manage a server. Hence the term, serverless. There are still servers under the hood, of course; you just never see or manage them. Simple, right?
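To make that a little more concrete, here is roughly what the “code you feed to it” looks like – a minimal Python handler sketch. The file name, function name, and the `name` field in the event are our own inventions; the only thing Lambda actually dictates is the handler signature.

```python
# handler.py - a minimal Lambda function sketch (Python).
# Lambda calls the function you nominate as the handler, passing in the
# triggering event and a context object. You never touch a server.

def handler(event, context):
    # 'event' is whatever the trigger delivers; 'name' is our made-up field.
    name = event.get("name", "world")
    return {"message": f"Hello, {name}!"}
```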

2 – The System Uses Continuous Scaling

In addition to automatically running your code, Lambda does another ‘automatic’ thing for you. It scales your application by running the code in response to each trigger. A simple way to look at this is to imagine your laptop computer. When you press the ‘enter’ key, you create a trigger, and code buried within your computer responds to it. Lambda works the same way, except the trigger is an event – an HTTP request, a file landing in an S3 bucket, a message on a queue – and the response is running the code you uploaded. The scaling is automatic too: Lambda runs as many copies of your code as the workload generated by those triggers demands. Got that?
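As a sketch of what a trigger actually hands your code, here is a hypothetical handler for an S3 upload event. The “work” it does is invented for illustration, but the event shape follows the standard S3 notification format.

```python
# s3_trigger.py - sketch of a Lambda function wired to an S3 trigger.
# Each uploaded object generates an event; Lambda runs this code for each
# event and scales the number of containers to match the workload.

def handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hypothetical "work": just note what arrived.
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}
```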

3 – This Is Not Like A Taxi Cab Meter

You know how when you hop into a cab the meter starts running, and it keeps running even if the cab is sitting still in traffic? We hate that. You probably hate that. It’s not fair, and we are going to sit down and write a nasty e-mail one of these days about the way these meters operate. What’s that got to do with Lambda? Oh, right. There is something we can connect between Lambda and taxi cab meters. With Lambda you are charged for every 100 milliseconds (ms) your code executes, plus a charge for the number of times your code is triggered. Here’s the fun part – if your code isn’t running, then unlike a cab stuck in traffic, you don’t pay anything. Yes, it was a reach, but we made our point.

Be Forewarned, Here’s Some Technical Stuff

There are some really sexy parts to the whole Lambda picture, but unless you are some kind of techno-geek, it’s going to be just gibberish to you. Regardless, here are some specs to mull over.

1 – Runtimes

AWS Lambda supports a handful of different runtimes. They include:

Node.js v8.10 and v6.10

Java 8

Python 3.6 and 2.7

.NET Core 1.0.1 and 2.0

Go 1.x

Ruby 2.5

2 – The Execution Environment

Here’s another really sexy thing about Lambda. Every function runs inside a container based on a 64-bit Amazon Linux AMI. There’s more. That environment also comes with the following (a sketch of how some of these knobs get set follows the list):

128MB – 3008MB Memory in 64MB increments

512MB ephemeral disk space

900-second maximum execution duration

50MB compressed package size

250MB uncompressed package size
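If you are wondering how those limits map onto a real function, here is a hedged sketch using boto3, the AWS SDK for Python. The function name is hypothetical; the values just have to land inside the ranges listed above.

```python
# configure.py - sketch: adjust a function's memory and timeout with boto3.
# Assumes AWS credentials are configured and "my-function" already exists.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-function",   # hypothetical function name
    MemorySize=512,               # 128-3008 MB, in 64 MB increments
    Timeout=60,                   # seconds, up to the 900-second maximum
)
```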

What About The CPU?

How dare you notice that! Actually, there’s a pretty good reason why we didn’t list it as part of the container specs. No, it’s not because we forgot to, either. The reason you don’t see the CPU in the list just above this paragraph is that you don’t have direct control over the CPU with Lambda. The important point to make here is that CPU power is allocated in proportion to memory – as you increase the amount of memory, you get more CPU along with it.

More On The Ephemeral Disk Space

The disk space is only available in one location: the /tmp directory. Do we have to spell out what the /tmp stands for? If you have already said something along the lines of “isn’t that just a temporary directory and what the heck am I gonna do with that?” you guessed right. The /tmp directory is scratch space – it lives only as long as the container does, so anything you write there can vanish between invocations.
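Here is a short sketch of the kind of thing /tmp is actually good for – staging a file during a single invocation. The bucket and key are hypothetical; the point is that /tmp is the one writable spot on disk.

```python
# tmp_example.py - sketch: using the 512MB of ephemeral /tmp space.
# /tmp only lives as long as the container, so treat it as scratch space.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    local_path = "/tmp/input.csv"  # the only writable disk location
    # Hypothetical bucket/key; download, do some quick work, move on.
    s3.download_file("my-bucket", "incoming/input.csv", local_path)
    with open(local_path) as f:
        line_count = sum(1 for _ in f)
    return {"lines": line_count}
```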

The Whole Idea Behind 900 Seconds

Okay, we are going to save you a bit of time working out the math. For reference, 900 seconds is the equivalent of 15 minutes. So, this means that the execution duration available with Lambda runs for a maximum of 15 minutes. That may sound pretty awesome to some of you. Others may consider it a rather kooky length of time. The reality is, with a max of 15 minutes, Lambda is obviously not designed for long-running processes.
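If your work might brush up against that ceiling, the context object Lambda hands to your handler can tell you how much time is left. A minimal sketch, with a batch of items we invent purely for illustration:

```python
# deadline_aware.py - sketch: watching the 900-second execution limit.
# context.get_remaining_time_in_millis() reports how long before Lambda
# cuts the invocation off.

def handler(event, context):
    items = event.get("items", [])  # hypothetical work list
    done = 0
    for item in items:
        # Stop early if fewer than 10 seconds remain on the clock.
        if context.get_remaining_time_in_millis() < 10_000:
            break
        done += 1  # pretend we processed the item here
    return {"processed": done, "remaining": len(items) - done}
```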

Why Are We Talking About The Size Of My Package?

Settle down, stud. The package size listed above is a reference to how much code you can use to run your function. The package size also includes anything your function imports as a dependency. The limits are posted at 250MB uncompressed and 50MB compressed. That’s still a whole lotta space to work with.

Speaking Of The Package, Here’s How It Functions

That was sort of a play on words. What happens is that Lambda functions are required to be packaged before they are sent to AWS. Essentially, the process you would likely follow involves compressing the function and its dependencies and uploading that package to an S3 bucket. You’ll also have to let AWS know that the package you just sent is the one you plan to run when a specific event takes place. That would be the trigger. There’s logic in there somewhere, but when you boil it all down, the process is simple and straightforward.
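As a rough sketch of that workflow in Python with boto3 – the bucket, key, role ARN, and function name below are all placeholders, not anything AWS prescribes:

```python
# deploy.py - sketch: zip the code, push it to S3, and register the function.
import boto3
import zipfile

# 1. Compress the function and its dependencies into a package.
with zipfile.ZipFile("package.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("handler.py")

# 2. Upload the package to an S3 bucket (hypothetical bucket/key).
s3 = boto3.client("s3")
s3.upload_file("package.zip", "my-deploy-bucket", "lambda/package.zip")

# 3. Tell AWS about the function; the trigger (S3 notification, API Gateway
#    route, etc.) gets wired up separately.
lambda_client = boto3.client("lambda")
lambda_client.create_function(
    FunctionName="my-function",                          # placeholder name
    Runtime="python3.6",
    Role="arn:aws:iam::123456789012:role/lambda-role",   # placeholder role
    Handler="handler.handler",
    Code={"S3Bucket": "my-deploy-bucket", "S3Key": "lambda/package.zip"},
)
```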

What’s This Thing Gonna Cost You?

Well, here’s another really cool bonus about AWS Lambda. There is a free tier. We would refer to it as a very generous free tier and suggest you do the same. Here’s what this comes with:

1M free requests per month

400,000 GB-seconds of compute time per month

If you go past this threshold, you enter into a realm where you will be spending a bit of money for Lambda. Don’t worry, it is more than reasonable. For example, the cost after you surpass the free tier levels is as follows (we’ll run the numbers in a quick example right after the list):

$0.20 per 1M requests

$0.00001667 for every GB-second
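To see how those numbers play out, here is a back-of-the-envelope estimate in Python. The traffic figures are invented for illustration; the prices and free-tier allowances are the ones listed above.

```python
# cost_estimate.py - sketch: rough monthly Lambda bill from the prices above.

FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000
PRICE_PER_MILLION_REQUESTS = 0.20
PRICE_PER_GB_SECOND = 0.00001667

# Hypothetical workload: 2M invocations, 200ms each, at 512MB of memory.
requests = 2_000_000
duration_seconds = 0.2
memory_gb = 512 / 1024

gb_seconds = requests * duration_seconds * memory_gb          # 200,000
request_cost = max(requests - FREE_REQUESTS, 0) / 1_000_000 * PRICE_PER_MILLION_REQUESTS
compute_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_SECOND

print(f"Requests: ${request_cost:.2f}, compute: ${compute_cost:.2f}")
# -> Requests: $0.20, compute: $0.00 (still inside the free GB-second tier)
```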

Clearly, Lambda is popular because it is such an affordable infrastructure platform.

The fact that the containers – including all the required resources – that run your functions are completely managed by AWS is a bit of a mind-twisting concept for us. But when you look into it a bit further, it really does make a lot of sense.

The Simple Mechanics Behind Cause And Effect

When an event calls for a function to respond, AWS automatically spins up a container and runs the function to handle the request. When there is no trigger for a specific function, that function – all that code and jazzy stuff connected to it – just sits there idle. But while it is idle, it doesn’t cost you anything.

Here’s some really cool news to add to that. If additional requests come in while the first request is being handled, a new container is created to serve each of them. Where this would be most likely to happen is if your function happens to be experiencing a spike in requests. As much as we want to avoid using the word ‘clone’, your container ends up getting recreated multiple times by the cloud provider in order to address the extra requests.
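One practical consequence of that container model: anything your code sets up outside the handler sticks around while a container is reused for later requests. A small sketch of how you could observe that, with names of our own choosing:

```python
# warm_start.py - sketch: code outside the handler runs once per container.
# When the same container is reused for later requests ("warm" invocations),
# the setup below is skipped and the counter keeps climbing.
import time

START = time.time()          # runs once, when the container is created
INVOCATIONS = 0              # shared across invocations in this container

def handler(event, context):
    global INVOCATIONS
    INVOCATIONS += 1
    return {
        "container_age_seconds": round(time.time() - START, 1),
        "invocations_in_this_container": INVOCATIONS,
    }
```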

It’s sort of like hitting the drive-thru, and all ten cars in line behind you order the exact same thing as you. Since you will be receiving the original burger and fries, additional staff in the restaurant or fast food joint is now tasked with creating additional burgers and fries that would essentially match what you received. The ‘container’ requested for each additional order would have the same ingredients as the original and once put together, each new burger could be served out the window to another user who had requested one.

Well, there you have it. Everything you need to know about AWS Lambda, when you had nowhere else to go for the information but us. It really isn’t the easiest computing system to explain when you have to try to picture something that does not require a server. However, when you take a closer look at the specs, it doesn’t take long to realize that there is really something special going on in the world of Amazon that goes far beyond e-books and a lot of resellers.