This is a living document. Come back often to get updates.
Cloud Native
Serverless deployments and micro frontends
What is cloud-native?
That is a great question! Ask different developers what they understand by
cloud-native, and you will get different answers. In this section, I will try to give you an answer
based on my own understanding of cloud-native, but also let you know what the industry as a whole
means to represent with the term.
How I understand cloud-native
Cloud-native applications and developers are those that leverage cloud technologies and services, almost exclusively, from code crafting to code delivery and app publishing.
Cloud-native is more than just a buzzword. It is a set of principles that contribute to shipping highly scalable, elastic, resilient, and fault-tolerant applications.
JavaScript developers working on Jamstack or composable frontends are essentially cloud-native developers, because they are leveraging cloud services to build and deliver their applications, end to end.
Ironically, a lot of JavaScript developers have no idea what cloud-native is and have never even heard the term.
How the industry and the ecosystem understand cloud-native
For many people in the industry, though, cloud-native is all about containers, and particularly linked to Kubernetes, aka K8s: a portable and scalable management system for containerized applications
that automates their development, integration, deployment and orchestration.
If you're a frontend developer who doesn't want to learn about Docker, Kubernetes or container development and management in general, you can
still benefit from cloud-native principles and technologies. There are many Platform as a Service (PaaS) offerings that abstract away the complexity of containers and
let you focus on building your application. Even the setup of CI/CD pipelines -that I discuss further down- takes only a few minimal configuration steps.
Those PaaS are broadly known as fully-managed, serverless cloud services.
What is containerization?
If you're not interested in containers, you can skip this part. But it won't hurt to know what containers are and why they exist.
Virtualization
Containers are a form of virtualization that allows you to package an application together with all its dependencies. Because of that, you can port the whole package to reproduce an environment anywhere else. Back when I started working as a developer,
most virtualization was done using virtual machines (VMs). VMs are much more complex than containers, and they require a hypervisor to run. That makes them beefier,
slower, and more resource-intensive than containers, typically requiring additional software and configuration to run.
For developers, particularly those who work in the frontend and are not very familiar with virtualization concepts and mechanics, VMs can be hard to set up, launch, manage and debug when something goes wrong.
Containers
Containers, however, are easier to configure, package and manage than virtual machines, although some may argue they
also have a learning curve and some software prerequisites. But they have quite a few advantages:
Containers are portable
Because containers bundle together everything you need to run an application, from the operating system to any first-party or third-party dependency, you can run them anywhere.
Containers are secure
Containers provide a high level of isolation for workloads by virtualizing the hardware capabilities that a machine needs to run software: CPU, RAM, disk capacity, etc.
That makes containers secure, because they are isolated from the host machine, and from other containers when those exist.
Container images
Container images are basically the read-only blueprint that defines the container instances built from that image. Because they are portable, they can be shared, allowing developers to quickly get started with everything they need to begin working.
When I got my first enterprise gig, I remember how difficult it was to get started. Local environment configuration could take hours or days to install everything you needed to start working. Containers make it possible to get started in minutes.
But you said there is a learning curve
That's true. Working with containers can be challenging if you're the one setting them up and managing them. Luckily, containers also
enable a great separation of concerns: application developers focus on the application, and operations teams focus on the infrastructure.
This is also one of the principles of cloud-native development.
The reality is that even if you're a frontend developer, you're probably building and deploying in a container today, and you may not even know it!
Fully managed PaaS push the envelope even further
When you're building and deploying your application to a fully-managed PaaS, you're effectively deploying your app to a container that is built with the configuration you provide, and that is managed by the PaaS provider's team.
Microservices architectures
Microservices architectures heavily leverage containerization, because containers allow for fast iterations, continuous integration, independent deployability, and all the concepts
that we want to benefit from when we decouple a tightly coupled, interdependent architecture.
Microservices are effectively the building blocks of composable architectures in the backend, and the inspiration for the frontend micro-architecture counterpart.
CI/CD - Deployment pipelines automation
CI/CD stands for `continuous integration and continuous deployment`, and the two typically go together. If you're relatively new to web development,
you may not be entirely familiar with the precursors of these concepts, since you may have learned to publish your websites
using a platform that gives you a CI/CD pipeline out of the box.
Those pipelines, and the workflows that make them work, are basically integrated services that automate the process of building, testing and deploying the code to be published -typically remotely, somewhere in the cloud-
and that are integrated with a platform hosting the code on top of the version control system you use -oftentimes git-, like GitHub or GitLab.
Back in the day when I started doing web development, the workflow was all manual and error-prone. For example, I versioned my application files by
manually naming the folders containing them, locally, with `v1`, `v2`, `v3` and so on, zipping them manually, and then uploading them to the server using an FTP client.
Then I had to manually unzip them on the server side. Those were the days!
image caption: CI/CD pipeline
If you're a frontend developer focused on the UI, or even a Jamstack developer deploying functions, you may not need or want to know how these pipelines are built,
or understand their internal mechanics. However, it may be very useful to know more as your application grows and you need to scale it, and integrate with other tools or services.
GitHub Actions
GitHub Actions is surely the most popular CI/CD tool, and I'm quite certain that even if you have not used it, you've heard of it.
When you build a site with one of the many PaaS that provide cloud infrastructure, they typically offer you some form of developer tools -sometimes a CLI (command line interface), sometimes a dashboard in their web UI-
so you can manage your application and all its assets, build it, generate workflows, and push them to the repository host as well.
You don't have to understand a workflow or action file; you just know it works! But in the next iteration of this documentation, I will explain them in depth. So stay tuned!
Infrastructure as Code
If you've been doing frontend development exclusively all your career in tech, or you're a more junior developer, it may be that you've never heard of `Infrastructure as Code`, or you're not very sure what it is.
`IaC`, for short, is a way of describing, in a declarative way, what the infrastructure provisioned to support a deployment should include, in terms of components -please keep in mind that when I say components here, I don't mean frontend components, but each unit of a composable infrastructure model- and how each of those components should be configured.
`Infrastructure as Code` files are typically written in a language that both the human developer and the machine can interpret, usually YAML or JSON -although it can be a `domain-specific language`, like Bicep- and serialized to be sent over the internet so they can be consumed by services via an API.
What does Infrastructure as Code help with?
Describing infrastructure in a code file helps make it repeatable across deployment environments, and therefore consistent and standardized, and in many cases extensible in an incremental way, so whatever deployments already exist are not deleted or corrupted.
`Infrastructure as Code` is one of the components of software deployment automation, the foundation of CI/CD. The software that handles its interpretation and execution is also in charge of running additional workflows or processes to guarantee deployments are successful: building and testing, securing the infrastructure by injecting secrets and enforcing policies, handling all network tasks and integrations, and making sure each component is available in the right order to satisfy the dependencies amongst them.
What does provisioning mean?
Provisioning means laying out the foundation for other code to run. For example, provisioning a database in the cloud implies creating the clusters or containers, enabling the runtime, installing the database software, provisioning or configuring routes to make it accessible from the internet, generating keys and connection strings, and usually also enabling the web interface for the developer in charge to manage it.
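To make the declarative idea concrete, here is a minimal, hypothetical sketch in TypeScript of what an IaC-style description captures: components, their configuration, and the dependencies that determine provisioning order. The component names and fields are made up for illustration; real IaC engines (Terraform, Bicep, Pulumi, etc.) do far more.

```typescript
// Hypothetical, minimal IaC-style declaration: each component says what it is,
// how it should be configured, and which components it depends on.
type Component = {
  name: string;
  type: string; // e.g. "database", "function", "gateway"
  config: Record<string, unknown>;
  dependsOn?: string[];
};

const infrastructure: Component[] = [
  { name: "db", type: "database", config: { engine: "postgres", tier: "serverless" } },
  { name: "api", type: "function", config: { runtime: "node20" }, dependsOn: ["db"] },
  { name: "gateway", type: "gateway", config: { rateLimit: 100 }, dependsOn: ["api"] },
];

// Resolve an order that satisfies the declared dependencies
// (a simple topological sort, the core of what provisioning engines compute).
function provisioningOrder(components: Component[]): string[] {
  const order: string[] = [];
  const visiting = new Set<string>();
  const byName = new Map(components.map((c) => [c.name, c]));

  function visit(name: string): void {
    if (order.includes(name)) return;
    if (visiting.has(name)) throw new Error(`Circular dependency at ${name}`);
    visiting.add(name);
    for (const dep of byName.get(name)?.dependsOn ?? []) visit(dep);
    visiting.delete(name);
    order.push(name);
  }

  components.forEach((c) => visit(c.name));
  return order;
}

console.log(provisioningOrder(infrastructure)); // [ 'db', 'api', 'gateway' ]
```

Notice how the declaration says nothing about *how* to create each component, only *what* should exist and in which relationships: that is the essence of declarative infrastructure.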
Serverless functions
Serverless functions are basically functions that run in a serverless or fully managed execution context, in the cloud.
Serverless doesn't mean there are no servers. It means you don't have to worry about provisioning, scaling, securing or, in general, managing the servers or infrastructure supporting the execution context.
Serverless execution contexts are typically:
Elastic: they scale out and in, up and down, depending on the configuration.
Auto-scalable: the scaling can be pre-configured and doesn't require manual steps.
Highly available: they are equipped with fault-tolerance mechanisms to auto-recover, and hence comply with high-availability SLAs (the nines after the 99%).
Globally available: they're deployed in a distribution over a CDN with multiple points of presence (PoPs).
It's true that, like every service we consume and pay for as we go, we need to be mindful of costs and design and implement with execution in mind. Most cloud providers offer generous free invocation quotas, and mechanisms to limit computing time and memory usage, so you can build and test your application without having to worry about costs.
It is very important to learn to use those design and cost-control mechanisms. You don't want to end up pushing an endless invocation loop to a serverless function...
Origin functions
Origin functions are serverless functions that run in a single region or origin that we specify when we create them, and are not globally distributed by default. This execution
context is meant to be lightweight and fast, and to have a single concern. It is typically used to process data and return a response to the client, although it may serve other use-cases or computing needs.
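As a sketch, a single-concern origin function might look like the handler below. The event and response shapes follow a common pattern (similar to AWS Lambda's HTTP handlers), but the exact signature varies per provider, so treat the names here as illustrative.

```typescript
// A minimal origin (serverless) function sketch: one concern, process the
// input event and return a response. Event/response shapes are illustrative.
type HttpEvent = { queryStringParameters?: Record<string, string> };
type HttpResponse = { statusCode: number; headers: Record<string, string>; body: string };

export async function handler(event: HttpEvent): Promise<HttpResponse> {
  // Read input from the request, do the (single) job, return JSON.
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}
```

The platform takes care of everything around this function: provisioning the runtime, scaling the instances, and routing requests to it.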
Origin functions have a couple of downsides. Because they run at the origin, there is latency between the end-user client and the function,
with the consequent impact on the user experience. Also, because they are not globally distributed, they are not fault-tolerant unless you pre-configure some sort of fail-over mechanism, and if the origin region in the cloud goes down, the function may not be available.
Another downside is the cold start: when a function is not used for a while, it is unloaded from memory, and when it is called again, it needs to be loaded anew and install all its dependencies.
Origin functions use-cases
Data processing and access
Media processing
API calls
Webhooks
Authentication and authorization flows
Analytics
Websockets
Edge functions
Edge functions are used in the same way as origin functions, but they are globally distributed and executed at the edge of the network, closer to the end-user client. Because of that, latency is reduced and runtime performance improved for the
end-user, and high availability and fault tolerance are guaranteed.
Additionally, edge functions tend to run in a more lightweight and isolated runtime, typically V8, with different capabilities and requirements. Because the resources needed by the runtime are
typically minimal, the cold start can be reduced to a minimum, and the function can be loaded and executed in a much shorter amount of time -down to a few milliseconds-.
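An edge function sketch, for comparison: many V8-isolate edge runtimes (Cloudflare Workers, for example) expose the web-standard Request/Response APIs through a fetch handler. The shape below follows that pattern, but details vary per provider.

```typescript
// A minimal edge function sketch using web-standard Request/Response APIs.
// The `export default { fetch }` shape follows the pattern popularized by
// V8-isolate edge runtimes; check your provider's docs for the exact contract.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // A typical edge concern: customize content per request, close to the user,
    // e.g. picking an A/B test variant before the request ever reaches origin.
    const variant = url.searchParams.get("variant") ?? "a";
    return new Response(JSON.stringify({ variant }), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```

Because this runs in every point of presence, the same logic executes a few milliseconds away from each user, instead of in one origin region.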
Edge functions use-cases
Composability
Custom content delivery, e.g. A/B testing or personalization
Real-time apps
Web Assembly
As a JavaScript developer, you've probably heard about WebAssembly more than once.
Let's start by defining what WebAssembly is and why we should care.
Let me start by clarifying this: although WebAssembly and JavaScript share some common things, WebAssembly is not a replacement for JavaScript. They have different applications (pun intended!).
What is Wasm (or WebAssembly): the short answer
The short answer is that WebAssembly -saved with the file extension *.wasm, which is the reason why it's known as Wasm- is a low-level binary format
that can run in the JavaScript virtual machine in the browser, just like JavaScript, and has access to the Web Platform APIs, but it's faster than JavaScript, because being low-level
means that the execution context doesn't have to interpret it before running it.
But there is a lot more! It is portable, and can also run on the server and on other devices. It can be compiled to modules that can be integrated into JavaScript applications for tasks that JavaScript may not be very efficient at.
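To make this concrete, here is a small sketch that instantiates a Wasm module from JavaScript/TypeScript and calls one of its exports. The byte array is the canonical minimal module exporting an `add(i32, i32) -> i32` function; in practice you would compile modules from languages like Rust, C or AssemblyScript rather than hand-write bytes.

```typescript
// A minimal WebAssembly module, hand-assembled: it exports one function,
// add(a: i32, b: i32) -> i32.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // local.get 0, local.get 1, i32.add
]);

// Compile and instantiate, then call the export like any JS function.
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module);
const add = instance.exports.add as (a: number, b: number) => number;

console.log(add(2, 3)); // 5
```

The same `WebAssembly` API is available in browsers and in server runtimes like Node.js, which is part of what makes Wasm modules so portable.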
In sum, this is a very broad, complex and extensive topic, and because of that, I'm dedicating a full page to Wasm and JavaScript and how it applies to composable, decoupled frontends, that you can read here.
API Gateway or Management Layer
Before we explore how to work with data, and all the options we have to store it, we need to know how to query it. That is done via an API.
I explain API paradigms in this section.
Working with multiple APIs may require us to implement a management layer, to abstract away the complexity and consolidate the points of entry and the security configurations.
That comes in the form of an API Gateway, also known as a north-south bound gateway. API Gateways act as a reverse proxy and security layer, and can be used to implement rate limiting,
authentication, authorization, and other security features.
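As an example of one gateway responsibility, here is a sketch of rate limiting using a simple in-memory fixed-window counter keyed by client id. Real gateways use distributed counters or token buckets across many nodes, but the core idea is the same; the class and parameter names are made up for illustration.

```typescript
// A sketch of a gateway concern: fixed-window rate limiting per client.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(
    private limit: number,    // max requests allowed per window
    private windowMs: number, // window length in milliseconds
  ) {}

  // Returns true if the request is allowed, false if the client is throttled.
  allow(clientId: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(clientId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(clientId, { windowStart: now, count: 1 }); // new window
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

const limiter = new RateLimiter(2, 1000); // 2 requests per second per client
console.log(limiter.allow("client-1")); // true
console.log(limiter.allow("client-1")); // true
console.log(limiter.allow("client-1")); // false (limit exceeded in this window)
```

A gateway would run a check like this before proxying the request to the backend, typically answering `429 Too Many Requests` when `allow` returns false.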
image caption: API Gateway - North-South Bound
Working with Data
Working with data is one of the most complex and challenging parts of web development, and yet, web development is all about data!
But for frontend developers, the idea of setting up a database and managing it, creating a schema, modeling data, maintaining an ORM*, can feel like an impossible task to accomplish.
Fortunately, we have serverless databases, that are also fully managed, and even provide serverless backend or middleware to query them.
*Don't worry if you don't understand any of that, or even know what an ORM is; I will be developing this topic extensively, with a full section about data modeling, data querying, data persistence ...and data in general!
Please make sure to read the API-first section to understand the importance of specifying an API, what tools to use for that, and
the many types of APIs we can use to work with data.
Serverless Databases
Databases are nothing but software dedicated to managing computer resources to store, index, manipulate and query data.
There are different types and models of databases, and I will dedicate a full section to databases and other types of data stores, like
object storage and data lakes, in future iterations of this site.
You can see a reference in these slides from a talk I gave last year -slides 27 through 37.
I updated them this year, here, adding vector databases, as well -slides 60 and 61.
For now, just a few tips related to databases, if you're decoupling frontends:
Data storage is cheap, it's ok to duplicate data
Make sure your database and data model are selected according to your use-case
Invest time in learning how to model data
If you're working API-first, specifying and modeling your data should be the first thing you do
Caching
Caching requests to our backend, be it a database or a server processing business logic, is very important to preserve user experience but also to
control costs derived from computing resources.
Caching is a very complex topic, and I will dedicate a full section to it in future iterations of this site, too. But, for now, make sure that when you
publish to the cloud, especially a distributed composable system, you have a caching strategy in place.
Begin with the basic optimizations, even before caching:
Use headers to reduce roundtrips
Keep payloads to a minimum
Choose your API pattern wisely
And then...
Use a CDN to cache static assets
Use an API management system or gateway to proxy and consolidate requests
Explore the opportunities offered by managed infrastructure to accelerate responses, such as Cache for Redis
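The building block behind most of these strategies can be sketched as a cache with a time-to-live (TTL) per entry. CDNs and gateways apply the same idea at scale, keyed by URL and driven by Cache-Control headers; this tiny in-memory version, with illustrative names, shows the mechanics.

```typescript
// A sketch of the simplest caching building block: an in-memory cache where
// each entry expires after a fixed time-to-live (TTL).
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  set(key: string, value: V, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }

  get(key: string, now: number = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) {
      this.store.delete(key); // expired: treat as a cache miss
      return undefined;
    }
    return entry.value;
  }
}

const cache = new TtlCache<string>(60_000); // cache API responses for 1 minute
cache.set("/api/products", '[{"id":1}]');
console.log(cache.get("/api/products")); // '[{"id":1}]'
```

On a cache hit you skip the roundtrip to the database or backend entirely, which is exactly the user-experience and cost benefit described above.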
A note about multi-cloud and designing serverless systems
When designing applications that are bound to scale, and that we're going to deploy to the cloud with a specific provider, we need to keep in mind that APIs and programming models are not the same across providers. A migration to a different provider, if we end up needing more capacity as our applications scale, may cost unexpected time and effort.