Running a server farm in the cloud at full capacity 24/7 can be awfully expensive. What if you could turn off most of that capacity when it isn't needed? Taking this idea to its logical conclusion, what if you could spin up your servers on demand when they are needed, and provision only enough capacity to meet the load?
Enter serverless computing. Serverless computing is an execution model for the cloud in which a cloud provider dynamically allocates (and then charges the user for) only the compute resources and storage needed to execute a particular piece of code.
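To make the pay-per-execution idea concrete, here is a minimal serverless function in the style of AWS Lambda's Python runtime. The handler signature (`event`, `context`) matches Lambda's convention; the event shape and the greeting logic are hypothetical, chosen only for illustration.

```python
import json

def handler(event, context=None):
    # The provider invokes this function once per request and bills only
    # for the compute time and memory the invocation actually consumes.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoking locally, outside any cloud runtime, for a quick check:
print(handler({"name": "serverless"}))
```

The same function, deployed to a serverless platform, would sit idle at zero cost until a request arrives.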
In other words, serverless computing is on-demand, pay-as-you-go back-end computing. When a request comes in to a serverless endpoint, the back end either reuses an existing "hot" endpoint that already contains the correct code, allocates and customizes a resource from a pool, or instantiates and customizes a new endpoint. The infrastructure will typically run as many instances as needed to meet the incoming requests, and release any idle instances after a cooling-off period.
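The hot-reuse and cooling-off behavior described above can be sketched in a few lines. This is a toy simulation, not any provider's actual implementation; all class and variable names here are hypothetical, and the cooling-off window is an arbitrary example value.

```python
import time

COOL_OFF_SECONDS = 300  # example value: release instances idle this long

class Instance:
    """A 'hot' endpoint already customized with a particular function."""
    def __init__(self, code):
        self.code = code
        self.last_used = time.time()

    def invoke(self, request):
        self.last_used = time.time()
        return self.code(request)

class ServerlessBackend:
    """Toy dispatcher: reuse a hot instance, else instantiate a new one."""
    def __init__(self):
        self.warm = {}  # function name -> list of hot instances

    def handle(self, name, code, request):
        # 1. Reuse an existing hot instance holding the correct code.
        if self.warm.get(name):
            return self.warm[name][0].invoke(request)
        # 2. Otherwise instantiate and customize a new endpoint
        #    (a real platform might first draw from a pre-warmed pool).
        inst = Instance(code)
        self.warm.setdefault(name, []).append(inst)
        return inst.invoke(request)

    def reap_idle(self):
        # Release instances idle longer than the cooling-off period.
        now = time.time()
        for name, insts in self.warm.items():
            self.warm[name] = [
                i for i in insts if now - i.last_used < COOL_OFF_SECONDS
            ]
```

A second request for the same function name hits the warm path and skips the cold-start cost, which is exactly the latency win providers aim for.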
"Serverless" is of course a misnomer. The model does use servers, although the user doesn't have to manage them. The container or other resource that runs the serverless code is typically running in a cloud, but may also run at an edge point of presence.