

re:Invent: Inside A Working Serverless SaaS Reference Solution

This is an overview of a session I attended during re:Invent 2021. I start with the notes I took during the session, and then I give my take and comments at the end.

Wednesday 9:15

ARC405

Tod Golding from the AWS SaaS factory team

The fit between serverless and SaaS

Key architectural considerations

High-level architecture

Provisioning the control plane - this is in the reference architecture code

Two distinct tenant deployment models (silos vs pools)

Registering new tenants (back into the control plane)

Tier-driven onboarding

Inside tenant provisioning

Mapping the deployments

Now looking at a little bit of code

Applying tiers and configuring tenants

Triggering the tenant provisioning

Managing tenants and users

Creating the user and tenant

Authentication and authorization

Tenant routing

Authorization with the two isolation models

Pool-based partitioning with DDB

Inside application microservices

Tier-based throttling policies

Takeaways

My take:

The approach to creating infrastructure (pooled or siloed) in the “Inside tenant provisioning” section was very interesting to me. First, I like (and always have liked) the idea of infrastructure that can be replicated on demand as many times as requested. Writing IaC this way lets anyone spin up their own stack and version without interfering with anyone else’s work or with an “official” environment that external users rely on. Of course, it also lends itself perfectly to this use case of replicating the production environment for siloed tenants. Second, provisioning multiple stacks with CodePipeline, outside of a traditional CI/CD process driven by code lifecycle webhooks, struck me as genius. I do not know why I had not thought of doing something like that before to dynamically create infrastructure stacks.
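
To make that concrete, here is a rough sketch of what kicking off tenant provisioning on demand could look like with the AWS SDK for JavaScript. This is my own illustration rather than the reference solution's code; the pipeline names and the tier check are placeholders I made up:

```typescript
// Sketch: start a CodePipeline execution on demand to provision a tenant
// stack, outside of any code-change webhook. Pipeline names and the
// 'premium' tier check are hypothetical, not from the reference solution.
import {
  CodePipelineClient,
  StartPipelineExecutionCommand,
} from '@aws-sdk/client-codepipeline';

const client = new CodePipelineClient({});

export async function provisionTenantStack(
  tenantId: string,
  tier: string,
): Promise<string | undefined> {
  // Siloed (e.g. premium) tenants get their own stack; everyone else
  // lands in the shared pooled stack.
  const pipelineName =
    tier === 'premium'
      ? 'tenant-silo-provisioning-pipeline' // hypothetical pipeline name
      : 'tenant-pool-provisioning-pipeline'; // hypothetical pipeline name

  const { pipelineExecutionId } = await client.send(
    new StartPipelineExecutionCommand({ name: pipelineName }),
  );

  console.log(`Provisioning tenant ${tenantId} via ${pipelineName}: ${pipelineExecutionId}`);
  return pipelineExecutionId;
}
```

The appeal is that the pipeline itself stays a normal CodePipeline definition; the only difference from traditional CI/CD is that an onboarding event, not a commit, starts the execution.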

Lambda layers are something I have thought about using before, but I ultimately passed on them because I was unsure of the proper use cases. After hearing about them in several sessions, including this one, I realized they would have been the perfect fit for what I originally wanted: sharing code among different Lambdas. A good application for layers would be crow-api, where I currently copy code from the shared/ directory into each Lambda’s code directory; that would be much easier to accomplish by creating a layer and adding it to each Lambda.
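
For reference, this is a minimal aws-cdk-lib sketch of packaging a shared/ directory as a layer and attaching it to a function. It is not crow-api's actual internals; the construct IDs, asset paths, and runtime are assumptions:

```typescript
// Sketch: build shared/ into a Lambda layer once and attach it to functions,
// instead of copying the files into each Lambda's code directory.
import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Layer contents are extracted under /opt in the Lambda runtime; Node.js
    // code is conventionally placed in a nodejs/ folder inside the asset so
    // it lands on NODE_PATH.
    const sharedLayer = new lambda.LayerVersion(this, 'SharedLayer', {
      code: lambda.Code.fromAsset('shared'),
      compatibleRuntimes: [lambda.Runtime.NODEJS_18_X],
    });

    // Each Lambda references the layer instead of bundling its own copy.
    new lambda.Function(this, 'ExampleHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('src/example'), // placeholder path
      layers: [sharedLayer],
    });
  }
}
```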

My curiosity was piqued when he started talking about data modeling for a multi-tenant application. I believe I have built something similar to the model he was discussing, which is reassuring. Still, I might dig through his code a little more to make sure I understood his approach correctly. In my experience, a good way to handle multiple tenants is to prepend the partition key with a unique property of the tenant (email, ID, etc.), which forces a tenant to only query partitions that explicitly include their identifier.
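
To illustrate my reading of that approach, here is a small sketch using the AWS SDK for JavaScript. The table name, key attribute names, and the "orders" entity are hypothetical and not taken from the session's code:

```typescript
// Sketch: prefix the partition key with the tenant ID so a tenant-scoped
// query can only ever read that tenant's partitions.
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import {
  DynamoDBDocumentClient,
  PutCommand,
  QueryCommand,
} from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE_NAME = 'multi-tenant-table'; // hypothetical table with keys pk and sk

// Every partition key starts with the tenant's unique ID.
const orderPartition = (tenantId: string) => `${tenantId}#orders`;

export async function putOrder(
  tenantId: string,
  orderId: string,
  attributes: Record<string, unknown>,
) {
  await ddb.send(
    new PutCommand({
      TableName: TABLE_NAME,
      Item: { pk: orderPartition(tenantId), sk: orderId, ...attributes },
    }),
  );
}

export async function listOrders(tenantId: string) {
  // The KeyConditionExpression pins pk to a tenant-scoped value, so the
  // query cannot cross into another tenant's data.
  const result = await ddb.send(
    new QueryCommand({
      TableName: TABLE_NAME,
      KeyConditionExpression: 'pk = :pk',
      ExpressionAttributeValues: { ':pk': orderPartition(tenantId) },
    }),
  );
  return result.Items ?? [];
}
```

Paired with IAM conditions that restrict callers to keys beginning with their own tenant ID, this is one way to keep a single pooled table while still isolating each tenant's rows.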

Categories: aws