The @MIT series is a group of articles describing my learning path through the Cloud & DevOps: Continuous Transformation course at MIT.
This article at a glance — TL;DR
Introduces the serverless paradigm: its pros and cons, its limits, and the evolution that led to it
Serverless computing is a cloud-based execution model in which the management of the server infrastructure and the development of the application are distinctly divided. The connection between the two is frictionless: the application does not need to know which servers it runs on or how they are provisioned, and, in the same way, the infrastructure does not need to know what application is running on it.
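To make the division of responsibilities concrete, here is a minimal sketch of a serverless function in Python, following the handler signature AWS Lambda uses. The function body contains only application logic; provisioning, scaling, and the runtime environment are entirely the provider's concern.

```python
import json

def lambda_handler(event, context):
    """Entry point invoked by the cloud platform on each request.

    `event` carries the input payload; `context` carries runtime metadata.
    There is no server, port, or process management code anywhere.
    """
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The developer uploads only this function; everything below it in the stack is the provider's responsibility.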
The journey that led us to serverless (image below).
A true microservice:
- Does not share data structures or database schemas
- Does not share internal representations of objects
- Can be updated and deployed without notifying other teams
- Your functions become stateless: you must assume each invocation may run in a new, freshly deployed container.
- Cold starts: since your function may run in a new container each time, you should expect some latency while that container spins up. After the first execution the container is kept alive for a while, so subsequent calls become “warm starts”.
The main benefits of serverless:
- The cloud provider takes care of most back-end services
- Autoscaling of services
- Pay as you go, and only for what you use
- Many aspects of security are handled by the cloud provider, including patching and library updates
- Ready-made software services, such as user identity, chatbots, storage, and messaging
- Shorter lead times
The main drawbacks:
- Managing state is difficult (which also makes debugging harder)
- Complex message routing and event propagation (bugs are harder to trace)
- Higher latency for some services
- Vendor lock-in
Exercises and Assignments
- Assignment: deploy an existing application to AWS, using Lambda and DynamoDB, to serve a Pacman game. A few screenshots below:
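The Lambda-plus-DynamoDB combination in the assignment typically comes down to a handler that reads or writes items in a table. The sketch below illustrates the shape of such a function; the table name `pacman-scores` and the item attributes are hypothetical, not taken from the actual assignment, and the persistence call is kept behind a small helper so the logic is testable without AWS.

```python
import json

def build_response(status_code, body):
    """Shape a response in the format API Gateway expects from Lambda."""
    return {
        "statusCode": status_code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

def save_score(table, player, score):
    """Write one high-score item; `table` is a boto3 DynamoDB Table resource."""
    table.put_item(Item={"player": player, "score": score})
    return build_response(200, {"saved": player})

# In the deployed function, the table would be created once at module load
# (reused on warm starts), e.g.:
#   import boto3
#   table = boto3.resource("dynamodb").Table("pacman-scores")  # hypothetical name
#   def lambda_handler(event, context):
#       body = json.loads(event["body"])
#       return save_score(table, body["player"], body["score"])
```

Keeping the DynamoDB resource out of the handler's signature makes the data-access logic easy to exercise with a stub table in unit tests.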
All the resources used to reach the results above are stored in this GitHub repository: https://github.com/guisesterheim/MITCloudAndDevOps