
Fundamentally, I think the question is: what kind of streams are you processing?

My concept of stream processing is trying to process gigabits to gigabytes a second and turn it into something much, much smaller, so that it's manageable to store in a database and analyze. To my mind, for 'stream processing' at that rate, even calling malloc is sometimes too expensive, let alone using any of the technologies called out in this tech stack.
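For concreteness, the kind of loop I mean at that end of the spectrum reads into one preallocated buffer and reduces in place, never allocating per message. A minimal Python sketch (the names are made up; in practice this shape usually ends up in C):

    import socket

    BUF_SIZE = 1 << 20                   # 1 MiB buffer, allocated once and reused
    buf = bytearray(BUF_SIZE)

    def consume(sock: socket.socket) -> tuple[int, int]:
        """Reduce a high-rate byte stream to a couple of counters, no per-read allocation."""
        reads = 0
        total_bytes = 0
        while True:
            n = sock.recv_into(buf)      # fills the same buffer every time
            if n == 0:                   # peer closed the connection
                break
            reads += 1
            total_bytes += n
            # real code would parse/aggregate in place here and emit a tiny summary
        return reads, total_bytes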

I understand back pressure and circuit breakers, but (for my general work) they have to happen at the OS / process level -- a metric that autoscales a microservice worker by way of Prometheus + an HPA, or something like that, adds too many inefficiencies to be practical. A few threads on a single machine just work, whereas a 'cloud native' solution takes ages to engineer.
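To make 'back pressure at the process level' concrete: a bounded in-process queue, where a full queue simply blocks the producer. A hypothetical sketch (the queue size, thread count, and process() body are made up):

    import queue
    import threading
    from typing import Iterable

    # The bound on the queue *is* the back pressure: when workers fall behind,
    # put() blocks and the producer slows down -- no Prometheus, no HPA.
    q: "queue.Queue[bytes]" = queue.Queue(maxsize=10_000)

    def process(chunk: bytes) -> None:
        """Stand-in for the real per-chunk reduction (parse, aggregate, ...)."""

    def producer(chunks: Iterable[bytes]) -> None:
        for chunk in chunks:
            q.put(chunk)                 # blocks while the queue is full

    def worker() -> None:
        while True:
            chunk = q.get()
            try:
                process(chunk)
            finally:
                q.task_done()

    # "a few threads on a single machine just work"
    for _ in range(4):
        threading.Thread(target=worker, daemon=True).start()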

Once I'm down to a job a second or less (and each job takes more than a few seconds to run, which hides the framework's overhead), things like Airflow start to work rather than just fall flat -- but at that point, are these expensive frameworks worth it? I'm only producing 1-1000 jobs a second.

Stream processing with frameworks like Faust, Airflow, Kafka Streams, etc. just seems like brittle overkill once you start trying to actually deploy and use them. How do I tune the PostgreSQL database behind Airflow? How do I manage my S3 lifecycles to minimize cost?

A task queue + an HPA feels like the right kind of thing to me at that scale, rather than caring too much about back pressure etc. when the data rate is 'low' -- but I've generally been told by colleagues to reach for more complicated stream processors that perform worse, are (IMO) harder to orchestrate, and are (IMO) harder to manage and deploy.
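By 'a task queue + an HPA' I mean something of this shape, assuming Celery with a Redis broker (the app name, broker URL, and job body are hypothetical); the HPA then scales the worker Deployment on CPU or queue depth:

    # tasks.py
    from celery import Celery

    app = Celery("pipeline", broker="redis://redis:6379/0")

    @app.task(acks_late=True)            # redeliver if a worker dies mid-job
    def handle_batch(batch_id: str) -> None:
        # fetch the batch, reduce it, write the small result somewhere durable
        ...

Producers just call handle_batch.delay(batch_id), and at 1-1000 jobs a second that's about all the machinery you need.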


