
What are some criticisms and drawbacks of Serverless Computing?

15 Answers
Rohit Akiwatkar
Rohit Akiwatkar, Serverless Researcher.

Serverless computing, closely associated with Functions as a Service (FaaS), is defined by stateless compute containers and an event-driven execution model.

FaaS provides a platform that allows developers to execute code in response to events without the complexity of building and maintaining infrastructure. Third-party apps or services manage the server-side logic and state.
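To make the model concrete, here is a minimal sketch of a FaaS-style function in Python. The `(event, context)` signature mirrors AWS Lambda's Python runtime; the event payload and the local invocation at the bottom are illustrative, since in production the platform, not your code, calls the handler.

```python
# A minimal Lambda-style handler: the platform invokes this function
# once per event; the developer owns no server process or routing code.
import json

def handler(event, context=None):
    # 'event' carries the trigger payload (e.g. an HTTP request or queue message)
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally, the platform's role can be played by calling the handler directly.
response = handler({"name": "serverless"})
print(response["body"])
```

Everything outside the handler body (scaling, routing, the server itself) is the platform's concern, which is exactly what makes the third-party dependency issues below matter.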

Drawbacks of Serverless computing -

1. Problems due to third-party API systems
Vendor control, multitenancy problems, vendor lock-in and security concerns are some of the problems due to the use of 3rd party APIs. Giving up system control while implementing APIs can lead to system downtime, forced API upgrades, loss of functionality, unexpected limits and cost changes.

2. Lack of operational tools
Developers are dependent on vendors for debugging and monitoring tools. Debugging distributed systems is difficult and usually requires access to a significant amount of relevant metrics to identify the root cause.

3. Architectural complexity
Decisions about how small (granular) a function should be take time to assess, implement and test. There should be a balance in the number of functions an application calls: it gets cumbersome to manage too many functions, while ignoring granularity ends up creating mini-monoliths.

AWS Lambda, for now, limits how many concurrent executions you can run across all of your Lambdas. The problem here is that this limit applies to your whole AWS account. Some organizations use the same AWS account for both production and testing. That means if someone, somewhere in your organization runs a new type of load test and starts executing 1,000 concurrent Lambda functions, you'll accidentally denial-of-service (DoS) your own production applications.
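The account-wide cap can be sketched as a shared pool of execution slots. This simulation is illustrative (the limit value and function names are made up, and real throttling is done by the platform, not a semaphore), but it shows how a load test starves an unrelated production function:

```python
# Sketch: an account-wide concurrency cap shared by all functions.
# A burst from a load test can starve production invocations.
import threading

ACCOUNT_LIMIT = 10  # hypothetical account-wide cap on concurrent executions

class Account:
    def __init__(self, limit):
        self.slots = threading.BoundedSemaphore(limit)

    def invoke(self, fn_name):
        # A non-blocking acquire models Lambda throttling the request;
        # slots stay held while the (simulated) executions are still running.
        if self.slots.acquire(blocking=False):
            return (fn_name, "ok")
        return (fn_name, "throttled")

account = Account(ACCOUNT_LIMIT)
# A load test spins up enough concurrent executions to fill the account...
for _ in range(ACCOUNT_LIMIT):
    account.invoke("load-test-fn")
# ...and production is throttled even though it did nothing wrong.
print(account.invoke("prod-api"))  # ('prod-api', 'throttled')
```

Separating production and testing into different accounts, or reserving per-function concurrency where the platform supports it, avoids this failure mode.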

4. Implementation drawbacks
Integration testing Serverless Apps is tough. The units of integration with Serverless FaaS (i.e. each function) are a lot smaller than with other architectures and therefore we rely on integration testing a lot more than we may do with other architectural styles. Problems related to deployment, versioning and packaging also exist. You may need to deploy a FaaS artifact separately for every function in your entire logical application. It also means you can’t atomically deploy a group of functions and there’s no concept of versioned applications so atomic rollback isn’t an option. You may need to turn off whatever event source is triggering the functions, deploy the whole group, and then turn the event source back on.
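The "turn off the event source, deploy the group, turn it back on" workaround can be sketched as a small script. All names here are hypothetical stand-ins (there is no real deploy API being called); the point is the ordering that substitutes for the missing atomic group deploy:

```python
# Sketch: functions can't be deployed atomically as a group, so one
# workaround is to pause the event source, deploy every function,
# then resume. Function names and versions are illustrative.

deployed_versions = {"resize-image": "v1", "store-metadata": "v1"}
deploy_log = []

def set_event_source(enabled):
    # Stand-in for disabling/enabling the trigger (e.g. a queue mapping).
    deploy_log.append(f"event source {'on' if enabled else 'off'}")

def deploy_group(new_versions):
    # Pause triggers so no event sees a half-upgraded set of functions.
    set_event_source(False)
    for fn, version in new_versions.items():
        deployed_versions[fn] = version
        deploy_log.append(f"deployed {fn}@{version}")
    set_event_source(True)

deploy_group({"resize-image": "v2", "store-metadata": "v2"})
print(deploy_log)
```

The obvious cost is a window where no events are processed, which is exactly the price of lacking versioned, atomically deployable applications.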

For more details on serverless computing, frameworks and the pros/cons of serverless architecture, visit this blog → Serverless Architecture

Update: I have also come up with this project - Made-With-Serverless - which is a single touch point for understanding this technology. It consists of some amazing architectures built using serverless.

Srinath Perera
Srinath Perera, works at Indiana University
  1. Vendor lock-in - The major risk serverless faces is vendor lock-in due to the lack of standards [4, 6]. The real concern is not the serverless functions themselves, but the platform services required by those functions. It is hard to abstract away those services efficiently.
  2. Serverless applications need to be modeled as a series of functions wired together by an event mechanism (event-driven architecture, EDA). Architects designing serverless systems will have to think differently: EDA-based programming is more complex and harder to debug, the resulting architecture is more complex, and the logical flow is hard to reason about. There is no tooling support yet (e.g. a serverless IDE) that simplifies the experience. Also, in today's model, some part of the flow logic will end up in DevOps scripts. [3]
  3. How to handle state, since functions are stateless. As discussed by Allamaraju [1] and Mike [10], current architecture best practice is to store the state in platform services such as databases, shared file systems or messaging systems. This, however, increases vendor lock-in.
  4. Serverless faces two kinds of latency challenges (see [3, 4, 10]). The first is cold-start latency: booting up the environment can take as long as one second. The second is the network and serialization latency added by the network hops between functions. In the second case, as more functions are added, the tail latencies get worse. (See http://highscalability.com/blog/2012/3/12/google-taming-the-long-latency-tail-when-more-machines-equal.html and https://research.google.com/pubs/pub40801.html).
  5. The former latency problem is already at odds with the serverless pricing model [6]. Users can get a better bill by keeping the instance live via periodic bogus requests. This could be addressed by using machine learning to anticipate requests or by improving the startup time of the code.
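The tail-latency point above can be illustrated with a small simulation. The per-hop latency distribution is invented (mostly fast, occasionally slow), but it shows the mechanism: as more functions are chained, the chance that at least one hop is slow grows, so the p99 of the whole chain degrades much faster than the median.

```python
# Sketch: per-hop network/serialization latency compounds across a chain
# of functions; any one slow hop delays the entire request, so tail
# latency worsens as the number of hops grows. Numbers are illustrative.
import random

random.seed(1)

def hop_latency_ms():
    # Mostly fast, occasionally slow: a crude model of one network hop.
    return 5 if random.random() < 0.99 else 100

def chain_latency_ms(num_functions):
    return sum(hop_latency_ms() for _ in range(num_functions))

def p99(samples):
    return sorted(samples)[int(len(samples) * 0.99)]

for n in (1, 5, 20):
    samples = [chain_latency_ms(n) for _ in range(10_000)]
    print(f"{n:2d} hops: median={sorted(samples)[5000]}ms  p99={p99(samples)}ms")
```

This is the same "long latency tail" effect the linked Google articles describe for fan-out systems, applied to function chains.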

Currently, serverless solutions work only for high-throughput, latency-tolerant scenarios.

  1. https://www.slideshare.net/salla...
  2. Jonas et al., Occupy the cloud: distributed computing for the 99%, SoCC'17
  3. https://medium.freecodecamp.org/...
  4. https://blog.acolyer.org/2017/10...
  5. https://www.nextplatform.com/201...
  6. Serverless: the Future of Software Architecture, https://www.youtube.com/watch?v=...
  7. Amazon introduces Lambda, Containers at AWS re:Invent - SD Times
  8. https://openwhisk.apache.org/
  9. The rise of APIs, https://techcrunch.com/2016/05/2...
  10. http://heidloff.net/article/poly...
  11. Serverless Architectures
Brian Schuster
Brian Schuster, Head of Founder Solutions at Ark Capital

So far, here are some of the issues I have experienced when using serverless technology:

1. The nature of Lambda means that you can use up to five different languages to implement your code (Node.js, Edge Node, C#, Python, Java). Nothing about your stack dictates that *every* function needs to be written in the same language. One part of the application can be written in Java, another in C#, another in Python… it doesn’t really matter so long as it gets the job done. However, what happens if your application grows to needing 1,000 Lambda functions and the stack is written in a mixture of these five languages? Because you can’t restrict the program to one stack, it’s likely that someone might write a function in their language du jour for convenience and cause someone else a headache when it breaks down.

2. You’re trusting AWS/Azure to implement their functionality correctly. In plain old cloud architecture, you downloaded your OS and then downloaded *exactly* what you wanted on that box to get your application working. If you wanted to use someone else’s framework, that was your option, but you didn’t have to. In serverless architecture, you’re relying on your provider to know exactly what they’re doing. If they make an error and your functions aren’t working correctly, there’s not a lot you can do.

3. Everything operates statelessly, which means that you have to have a lot more structural overhead to make your application ‘flow’. If you have to save a variable and pass it to another function, you can’t save that to your system memory, you have to store it in a DB or another table and then ‘chain’ the events together.
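That "structural overhead" looks roughly like this sketch, where a plain dict stands in for the external store (DynamoDB, S3, etc.) and the handoff between two stateless invocations carries only a key. The function names, event shapes and tax rate are all made up for illustration:

```python
# Sketch: with no shared memory between stateless functions, intermediate
# state goes through an external store, and the next function is "chained"
# by an event that carries only a key, never the data itself.

state_store = {}  # stand-in for a database table

def step_one(event):
    # Compute something, persist it, and emit an event referencing the key.
    state_store[event["order_id"]] = {"total": event["qty"] * event["price"]}
    return {"order_id": event["order_id"]}  # the next event: a key, not data

def step_two(event):
    # A separate invocation: state is reloaded from the store, not memory.
    record = state_store[event["order_id"]]
    record["total_with_tax"] = round(record["total"] * 1.08, 2)
    return record

next_event = step_one({"order_id": "o-1", "qty": 3, "price": 9.99})
print(step_two(next_event))
```

In a traditional process, `step_one` would just pass `total` as a local variable; here every handoff costs a write, a read, and an event.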

4. Lack of HIPAA compliance. Because of how Lambda is implemented, it breaks certain HIPAA compliance rules and can’t be used for PHI. This means that anyone working at a healthcare organization that deals with sensitive data can’t use this service.

I’m 100% convinced that Serverless architecture will become ‘cloud’ computing for a large majority of providers in the coming years. However, there are still challenges that we need to be aware of in order to provide the best solutions possible.

Mathew Lodge
Mathew Lodge, Formerly VP Cloud Services at VMware

Thanks for the A2A.

The biggest is that it fragments your app logic into scores or hundreds (possibly thousands?) of discrete functions. How these functions are threaded together by events is magic: it’s not written down anywhere, and if it is documented, the documentation goes stale the next time you change a function or the cloud provider changes the event model. Looking at the code no longer tells you the sequence of execution. It’s the same problem faced by complex Node.js apps, though at least with Node there are libraries to help with sequences.
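One partial mitigation is to make the event wiring explicit in code rather than leaving it scattered across cloud configuration. This sketch (event names and handlers are invented) keeps the routing in a single table, so the sequence of execution is at least written down somewhere and can be traced:

```python
# Sketch: recording event wiring in one explicit table, so the flow is
# readable in code instead of implicit in cloud trigger configuration.

ROUTES = {
    "image.uploaded": ["make_thumbnail", "extract_metadata"],
    "thumbnail.ready": ["notify_user"],
}

HANDLERS = {
    "make_thumbnail": lambda e: print("thumbnailing", e["key"]),
    "extract_metadata": lambda e: print("extracting", e["key"]),
    "notify_user": lambda e: print("notifying", e["key"]),
}

def dispatch(event_type, payload):
    # The returned trace is the linear execution record that is otherwise
    # missing when the cloud provider does the dispatching for you.
    trace = []
    for name in ROUTES.get(event_type, []):
        trace.append(name)
        HANDLERS[name](payload)
    return trace

trace = dispatch("image.uploaded", {"key": "cat.jpg"})
print(trace)  # ['make_thumbnail', 'extract_metadata']
```

Of course, once the real dispatching happens in the provider's infrastructure, keeping a table like this accurate is itself a discipline problem, which is the author's point.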

Have you tried doing a trace of Lambda functions? With a lot of load on the system, all those parallel events firing? Now, tell me where the bugs are from a linear central log. Having fun yet?

For things that actually need a sequence, you have to contrive this yourself. Not everything is an event and response.

If you have an atomic transaction, how do you turn that into a function? How is locking done? How do you manage retries if something fails? Do you send yourself an event in the future to do the retry? What if it’s a partial failure of an underlying service like, say, S3, where writes work but reads do not? Or vice versa? How do you code that in a function and deal with all the corner cases?
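The "send yourself an event in the future to do the retry" pattern can be sketched as follows. The flaky store, the queue, and all names are simulated stand-ins; the key detail is the idempotency record, without which a retried or duplicated event would apply the write twice:

```python
# Sketch: retrying by re-enqueuing your own event, with an idempotency
# key so a retried or re-delivered write is not applied twice.

queue = []          # stand-in for a delayed-event queue
applied = {}        # idempotency record: key -> result
failures_left = 2   # the simulated storage write fails twice, then succeeds

def flaky_write(value):
    global failures_left
    if failures_left > 0:
        failures_left -= 1
        raise IOError("write failed")
    return f"stored:{value}"

def handle(event):
    key = event["idempotency_key"]
    if key in applied:          # duplicate delivery: return the old result
        return applied[key]
    try:
        applied[key] = flaky_write(event["value"])
        return applied[key]
    except IOError:
        queue.append(event)     # schedule our own retry event
        return None

queue.append({"idempotency_key": "k1", "value": 42})
while queue:
    result = handle(queue.pop(0))
print(result)  # stored:42
```

This handles the simple failure case; the partial-failure cases the author raises (writes succeed, reads fail, or vice versa) still need case-by-case handling on top of it.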

Don't get me wrong, functions have their place. It's just that like most new things in software, they aren't great for everything. They really shine for short, simple transformations of data to/from persistent storage. But as you get more of them, you spend more time trying to figure out the interactions between them. Unless you are very disciplined, this becomes a larger cost than the problem you're solving.

Patricia Johnson
Patricia Johnson, Open Source Licensing and Security Expert at WhiteSource (2012-present)

While a serverless architecture frees development teams from one set of problems, it does bring another set of problems to the forefront. Since serverless computing is still in its early days, it suffers from many of the issues that concern – or should concern – organizations that choose to use third-party and open source solutions to speed up the application development lifecycle.

Martin Fowler, in his article about serverless, points to two main security issues that developers and organizations should be concerned about:

#1 Larger surface for potential attacks: by using multiple vendors for serverless computing, organizations need to be aware that they are adding a variety of security implementations into their development ecosystem.

#2 Loss of the protective barrier provided by server-side applications, when organizations use a BaaS Database directly on their mobile platforms. Fowler stresses that this will require special attention when designing and developing your application.

Other experts also point to the security issues that arise when developers seemingly relinquish control when delegating server maintenance to third party vendors, especially when data hosting is involved. Lee Atchison, a senior director at analytics platform New Relic, warned in a recent interview: “Each service provides a different and unique method for offering serverless computing. This means that an IT professional who wants to take advantage of serverless computing will find they are locked into a single cloud service provider to a greater degree than if they use more standardized traditional server-based computing.”

This means that developers will need to think about what they are or are not willing to delegate to other services, and how they still track and monitor their product throughout the devops lifecycle.

J.D. Hollis
J.D. Hollis, Consultant, CTO

At the moment, I would say one of the biggest drawbacks is developer experience.

Typically with AWS Lambda, you’re triggering the function in response to events from other AWS services. Therefore, to properly develop and test your functions, you have to spin up whatever infrastructure you need before you can start coding your functions. In my experience, I end up writing far more Terraform (or CloudFormation) code than whatever language is going into the Lambda function proper. Working with a team further complicates the situation since each developer will need their own infrastructure for testing. You will need to think through deployment carefully, especially if you want automated testing since often you need the entire ecosystem present to properly test the functions. It’s also difficult to inject service failures into the test environment.
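Some of this pain can be reduced by testing the handler in isolation with a hand-built event and stubbed dependencies, before any infrastructure exists. In this sketch, the event shape loosely follows an S3 put notification and `save_metadata` is a made-up stub for whatever downstream service the real function would call:

```python
# Sketch: exercising a Lambda-style handler locally with a synthetic
# event and a stubbed downstream service, so no cloud infrastructure
# is needed for the unit-level tests.

records_saved = []

def save_metadata(bucket, key):   # stub for the real platform service
    records_saved.append((bucket, key))

def handler(event, context=None):
    for record in event["Records"]:
        s3 = record["s3"]
        save_metadata(s3["bucket"]["name"], s3["object"]["key"])
    return {"processed": len(event["Records"])}

# A synthetic event lets the handler be tested without deploying anything.
fake_event = {"Records": [
    {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "a.png"}}},
    {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "b.png"}}},
]}
print(handler(fake_event))  # {'processed': 2}
```

This only covers the function body; the integration problems the author describes (real triggers, per-developer infrastructure, injected service failures) still require the full environment.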

I’ve written more about automated Lambda deployments here: Automated Lambda Deployments with Terraform & CodePipeline