Software-defined radios, IoT devices, smartphones... We are moving to a world where software-defined products are increasingly integrated and commonplace. The idea of extensible hardware that we can reprogram to redefine its purpose is now seeping into the internet industry as well, as services such as Amazon Web Services and Google Cloud become more prevalent. File servers, databases, and other miscellaneous services that were traditionally maintained in-house are now being outsourced to the large web services providers.
ExamRoom Live (ERL) is a platform built on these services, in our particular case AWS.
There are, of course, disadvantages to doing it this way: it ties our services and availability completely to AWS, there are no hardware-level diagnosis and restoration processes, there is no hardware isolation, and the level of control we have over our own product is diminished. If the AWS CDN is down, we have no actionable remediation process on our end to restore content that is no longer being served; we have to rely on AWS engineers to fix the problem. Against that, we weighed the points we found compelling enough to choose this as our technology stack: AWS has a history of consistent availability and short downtimes; less control over the hardware means the control and responsibility of server maintenance shifts to Amazon, letting us allocate more of our manpower to development instead of maintenance; and scaling is more easily feasible with AWS's ability to scale horizontally as needed.
Our goal from the start was to build an application that could be hosted entirely on AWS: database, file storage, content distribution, backend processes, authentication, and so on. Our starting point, however, was our experience with server-based web application design.
Serverless infrastructure is a relatively new idea, and as a result, supporting documentation, examples, and guidelines did not exist for many of the cases we ran into. Best practices were, and still are, not solidified, so developing ERL involved a lot of painstaking learning and constant revision as this shift continues to evolve.
For the backend, we started with an Express server that tied into the third-party services we were using for things such as email and SMS, and then worked on modifying this base to fit inside AWS's Lambda service, which provides serverless computation. Along the way, the size of the functions we were fitting into the Lambdas became a problem as we added more and more libraries and functionality to our codebase.
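To make that first stage concrete, here is a rough sketch of the pattern, using the serverless-http package to adapt an Express app to Lambda; the route and the commented-out email helper are hypothetical stand-ins, not our actual code.

```typescript
// Minimal sketch: wrapping an existing Express app for AWS Lambda.
// The route and the sendEmail helper are hypothetical placeholders.
import express from "express";
import serverless from "serverless-http";

const app = express();
app.use(express.json());

// Example route that would proxy to a third-party email/SMS service.
app.post("/notify", async (req, res) => {
  // await sendEmail(req.body); // third-party integration would go here
  res.json({ status: "queued" });
});

// serverless-http adapts the Express app to the Lambda event/response model,
// so the whole server ships as a single (and eventually quite large) function.
export const handler = serverless(app);
```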
At one point, we had to revisit the overall design of how we were handling the backend and read up on newer documentation and guidelines from Amazon themselves along with other industry subject matter experts. The result was a redesign that took a couple of months to revise and integrate and became our v3 API: it cut down the amount of code per Lambda invocation by distributing the work across more Lambdas that are logically chained together. This gives a faster invocation time per Lambda, though sometimes at the cost of more Lambdas being invoked per API call.
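A simplified sketch of that chaining idea is below, assuming the AWS SDK v3 Lambda client; the function names and payload are made up for illustration and are not the actual v3 API.

```typescript
// Sketch of the "smaller, chained Lambdas" pattern using the AWS SDK v3
// Lambda client. Function names and payload shape are hypothetical.
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

// First Lambda in the chain: does one narrow task, then hands off.
export const validateOrder = async (event: { orderId: string }) => {
  // ...validation logic would go here...

  // Fire-and-forget invocation of the next step; InvocationType "Event"
  // returns immediately, keeping this function small and fast.
  await lambda.send(
    new InvokeCommand({
      FunctionName: "erl-v3-processOrder", // hypothetical downstream Lambda
      InvocationType: "Event",
      Payload: Buffer.from(JSON.stringify({ orderId: event.orderId })),
    })
  );

  return { statusCode: 202, body: JSON.stringify({ accepted: true }) };
};
```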
Serverless has a lot of benefits, but it requires a lot of knowledge of how the providers actually work. To cover for the loss of things such as in-memory operations, cron jobs, and service daemons, the providers offer replacement services, but each of those services has to be learned. This means the backend developer now takes on some of the burdens of the DevOps engineer and needs to understand a bit of system architecture: how functions interact with each other is no longer defined by code imports, but by infrastructure that moves data between the working parts.
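As one example, a traditional cron job can be replaced with a Lambda on an EventBridge schedule. The fragment below sketches how that might look as a Serverless Framework serverless.ts definition; the service name, handler path, and schedule expression are placeholders, not our actual configuration.

```typescript
// Sketch: replacing a nightly cron job with a scheduled Lambda, expressed as a
// Serverless Framework config fragment (serverless.ts). All names are illustrative.
const serverlessConfiguration = {
  service: "erl-jobs", // hypothetical service name
  provider: {
    name: "aws",
    runtime: "nodejs18.x",
  },
  functions: {
    nightlyCleanup: {
      handler: "src/jobs/cleanup.handler",
      events: [
        // An EventBridge schedule stands in for the cron daemon.
        { schedule: "cron(0 3 * * ? *)" }, // every day at 03:00 UTC
      ],
    },
  },
};

module.exports = serverlessConfiguration;
```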
On the flip side, this also means that almost all of the processes we used to delegate to server administrators are configurable in a software-defined manner. Using frameworks such as the Serverless Framework, we have a level of control over things such as network IP and port access and database configuration, even to the point that we can securely close off our entire technology stack per environment within our own virtual cloud. By having this software-defined template of our infrastructure, we can quickly and easily maintain and sync different environments when changes need to be made, and spin up new environments with ease, typically with only one button. Although following this serverless pattern requires more initial development time, because everything is defined at a software level, configurations are more consistent between systems and environments, letting us cut down on the number of variables to consider when diagnosing problems.
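The sketch below gives a flavor of that software-defined template: a Serverless Framework configuration parameterized by stage, with the VPC and database settings resolved per environment from SSM Parameter Store. The parameter paths, service name, and stage handling are hypothetical, not our real setup.

```typescript
// Sketch of per-environment, software-defined configuration with the
// Serverless Framework. Parameter paths and names are placeholders.
const serverlessConfiguration = {
  service: "erl-api", // hypothetical service name
  provider: {
    name: "aws",
    runtime: "nodejs18.x",
    // The stage (dev, staging, prod, ...) is chosen at deploy time:
    //   serverless deploy --stage prod
    stage: "${opt:stage, 'dev'}",
    // Lock every function into that environment's own virtual private cloud.
    vpc: {
      securityGroupIds: ["${ssm:/erl/${self:provider.stage}/lambda-sg}"],
      subnetIds: [
        "${ssm:/erl/${self:provider.stage}/subnet-a}",
        "${ssm:/erl/${self:provider.stage}/subnet-b}",
      ],
    },
    environment: {
      // Per-stage database endpoint pulled from SSM Parameter Store.
      DB_HOST: "${ssm:/erl/${self:provider.stage}/db-host}",
    },
  },
};

module.exports = serverlessConfiguration;
```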
Overall, hosting all of your services with a web services provider is a new and worthwhile investment for many. It alleviates much of the burden that used to come from server management and other maintenance obligations; however, it comes at the cost of learning new technologies and shifting responsibilities between roles that may not have carried them before.