
It's interesting to see the heavily growing demand graph. Is that because people want to adopt it, or is it being mandated or encouraged as "best practice" etc? I'm not implying that true organic demand wouldn't exist, because it definitely might, but I have seen in practice where leadership encourages or even mandates usage of FaaS, so the numbers go up even though on a neutral field people wouldn't necessarily choose it. There's also the "I'd like to try it" group who haven't yet had experience with it and choose it as a target for learning/curiosity reasons.

I wonder because my own experience with FaaS has been mostly bad. There are some nice things for sure, and a handful of use cases where it's wonderfully superior, such as executing jobs where the number of concurrent processes fluctuates wildly, making rapid scalability highly desirable. The canonical use case being "make a thumbnail for this image."
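The thumbnail case fits FaaS well because the handler is stateless: each invocation gets an event, does its work, and exits. A minimal sketch of that contract (the event shape and `max_edge` default are hypothetical; a real handler would also decode the image bytes, e.g. with Pillow, and write the result back to object storage):

```python
# Sketch of the stateless core of a "make a thumbnail" function.
# Only the dimension math is shown; decoding/encoding pixels and
# talking to object storage are left out.

def thumbnail_size(width: int, height: int, max_edge: int = 128) -> tuple[int, int]:
    """Scale (width, height) so the longer edge becomes max_edge."""
    if width <= 0 or height <= 0:
        raise ValueError("dimensions must be positive")
    scale = max_edge / max(width, height)
    if scale >= 1:  # never upscale small images
        return width, height
    return max(1, round(width * scale)), max(1, round(height * scale))

def handler(event: dict) -> dict:
    """Per-invocation entry point. No local state survives between
    calls, which is exactly why the platform can scale it out freely."""
    w, h = thumbnail_size(event["width"], event["height"])
    return {"key": event["key"], "thumb_width": w, "thumb_height": h}
```

Because nothing is shared between invocations, the platform can run a thousand of these in parallel or none at all, which is the scalability property the comment is pointing at.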

For web servers though, I find the opacity, the poor observability, and the difficulty of running locally to be significant hindrances. There's also lock-in which I hate with a passion. At this point I'd rather manage static EC2 instances than Lambdas, for example. (To be clear, I'm not advocating static EC2 instances. My preference is Kubernetes all the things. Not perfect of course, but K8s makes horizontal scalability very easy while improving on visibility. But that's a different conversation.)



I think one of the core arguments for larger organisations is that incompetence cannot ruin everything.

Firebase is a good case study for this: they argue heavily for using Firestore and serverless functions. If you succeed in solving your problems using their offerings, then they will also guarantee that things run well and scale well.

Firestore, as of when I last used it, did not support all the operations that can bring a traditional DBMS to its knees. You have document-level isolation, i.e. no joins or aggregate functions (like count or sum across documents). These need to be implemented in another way, using aggregators or indices.
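The usual workaround for the missing count/sum is to maintain the aggregate yourself: every write also updates a stats document inside the same transaction, so the counter can never drift from the data. A sketch of that pattern, using a plain dict as a stand-in for the document store (the real Firestore client API differs):

```python
# Sketch of the "maintain the aggregate yourself" pattern: without
# cross-document COUNT/SUM, each write also bumps a counter document.
# A dict stands in for the document store here.

class Store:
    def __init__(self):
        self.docs = {}  # path -> document dict
        self.docs["stats/orders"] = {"count": 0, "total": 0}

    def add_order(self, order_id: str, amount: int) -> None:
        # In Firestore this whole method would run in one transaction,
        # so the aggregate document stays consistent with the orders.
        self.docs[f"orders/{order_id}"] = {"amount": amount}
        stats = self.docs["stats/orders"]
        stats["count"] += 1
        stats["total"] += amount
```

Reading the count is then a single-document fetch instead of a scan, which is why this stays cheap no matter how many order documents exist.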

So I agree that development is more fun when developing on proper runtimes using fully fledged databases. But when you manage several thousand developers of all levels, then I think it makes sense to impose another architecture.


> Is that because people want to adopt it, or is it being mandated or encouraged as "best practice"

It's because it's the simplest, fastest way to get compute for non-realtime bits of code. It's much easier to deploy stuff to, and it's really simple to trigger it from other services.

A lot of things in FB communicate by RPC, so it's not really "web" functions that run on there; it's more generic ETL-type stuff. (as in system x has updated y, this triggers a function to update paths to use the latest version)


> It's because it's the simplest, fastest way to get compute for non-realtime bits of code. It's much easier to deploy stuff to, and it's really simple to trigger it from other services.

That's all true, but IMO the problem with functions is that they are initially so simple. But deploying code isn't actually that hard of a problem. The hard part is growing and maintaining the codebase over time.

I'm not saying there ISN'T a use case for them, but there should be a very good reason why you want to split them off from other services.


TLDR: 100% agree, discipline is still required!

I worked at a place that went full Lambda for a website (this was possibly 2016). They started out with huge velocity; things were much quicker to build and test.

Serverless (the framework) was a joy to deploy with, compared to what they were used to. They had complete control over their architecture for the first time. However, they then slammed into paying down all the innovation tokens they had spent (new DB, new message routing, new hosting arch, new auth methods) and hit the productivity wall.


> as in system x has updated y, this triggers a function to update paths to use the latest version

This is the sweet spot: eventing. Any kind of queue-based workload, especially one with variable load, is a potential candidate for a FaaS-style architecture. The alternative is a worker pool specific to the workload. FaaS just moves the abstraction up: processes are managed by a global worker pool instead.
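That abstraction shift can be sketched in a few lines: instead of owning a worker pool per workload, the team only supplies a per-event handler and the platform owns the (global) pool that pulls events and invokes it. The names and event shape here are illustrative, not any provider's API:

```python
# Sketch of queue-triggered FaaS: the team writes handle_update;
# the platform's dispatcher (drain, simplified to one thread here)
# owns the pool, the pulling, and the scaling.
from queue import Queue

def handle_update(event: dict) -> str:
    # e.g. "system x has updated y": rewrite a path to the latest version
    return f"{event['path']}@{event['version']}"

def drain(queue: Queue, handler) -> list:
    """Stand-in for the platform: pull each event, invoke the handler.
    Concurrency, retries, and scale-to-zero would be the platform's
    problem, not the handler author's."""
    results = []
    while not queue.empty():
        results.append(handler(queue.get()))
    return results
```

With variable load, the platform can grow or shrink the shared pool across all tenants, which is where the utilization win over per-workload pools comes from.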


Yeah, it sucks to be a FaaS customer on a cloud run by someone else... you're just overpaying for "easy" instead of "simple", etc.

But if you're in Meta, and you're running on an abstraction built and maintained by Meta, then it's running at cost instead of for profit, and the incentives all align between user and infrastructure provider?

I imagine the development and deployment ease for a system at Meta for Meta could be dreamy. At least, it has the potential to be... :)


> It's interesting to see the heavily growing demand graph. Is that because people want to adopt it, or is it being mandated or encouraged as "best practice" etc?

FaaS is well justified from the point of view of an infrastructure provider. You get far better utilization from your hardware with a tradeoff of a convoluted software architecture and development model.

In theory you also get systems that are easier to manage as you don't have teams owning deployments from the OS and up, nor do they have to bother with managing their scaling needs.

It also makes sense on the technical side, because when a team launches a service, 90% of the thing is just infrastructure code that needs to be in place to ultimately implement request handlers.

If that's all your team needs, why not get that redundancy out of the way?
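The "just the handler" contract the parent describes can be as small as one function: the team writes this and nothing else, while listener setup, routing, scaling, and deployment plumbing live in the shared platform. The event and response shapes below are hypothetical:

```python
# Sketch of a handler-only service: the entire team-owned surface is
# this function; everything else (HTTP/RPC listener, retries, scaling)
# is the shared platform's infrastructure code.

def handler(event: dict) -> dict:
    name = event.get("name", "world")
    return {"status": 200, "body": f"hello, {name}"}
```

Compared with standing up a full service, there is no server setup, port binding, or process management for the team to write or operate, which is the redundancy the comment suggests getting out of the way.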

Nevertheless, we need to keep things in perspective and avoid the FAANG-focused cargo-cult idiocy of mindlessly imitating any arbitrary decision regardless of whether it makes sense. FaaS makes sense if you are the infrastructure provider, and only if you have a pressing need to squeeze every single drop of utilization from your hardware. If your company does not fit this pattern, odds are you will be making a huge mistake by mimicking this decision.


>FaaS is well justified from the point of view of an infrastructure provider.

What if you are both provider and user? Are the tradeoffs justified?


Depends on your size and workloads, obviously.


From the paper:

>The rapid growth at the end of 2022 is due to the launch of a new feature that allows for the use of Kafka-like data streams [12] to trigger function calls

Regarding opacity and observability, I don't see why running PHP code on XFaaS would be any worse than running the same PHP code on another host.


Ah thanks, I missed that. Makes sense!


Most teams at Meta are free to choose which internal tools to use. We evaluated maturity, performance, staffing levels, and roadmap when choosing tools.


> There's also lock-in which I hate with a passion.

I wonder if this isn't clouding your judgement here. FaaS is subject to lock-in, absolutely, but for teams that need a bit of code run without having to manage an instance, functions are the way to go. You need to be at an organization that's able to support that properly, but that's table stakes at this point.


It could also be that it's the "easy path" to get something in production with either lighter governance or faster turnaround, etc.



