Hacker News | b-man's comments

> The Agile example makes this worse, not better. Yes, Agile was overhyped and badly implemented in many places. But using that to indict the entire movement as Girardian ritual is precisely the logical move the author claims to be critiquing: take some real failures, blame them on a paradigm rather than specific implementations, declare the whole thing rotten. He scapegoats Agile to validate his theory about scapegoating

I don't think the author did that at all. He was fair to iterative development. He specifically points out the scapegoating of waterfall, where the methodology was misrepresented in order to create the space for Agile.


FWIW, your careers page seems broken (https://supabase.com/careers).


   Location: EST
   Remote: Yes 
   Willing to relocate: Yes (US preferred)
   Technologies: PostgreSQL (partitioning, performance, OLTP architecture), SQL, F#, C#, C, Java, Clojure, Common Lisp, Scheme, Emacs Lisp, Python, Ruby, AWS, Linux
   Email: ebellani at gmail

I work on high-throughput systems, especially when they’ve grown into a state where migrations, performance, or schema design have become limiting factors.

Recent work:

Re-architected two multi-terabyte OLTP tables (~2TB and ~1TB) receiving 200+ writes/sec. I focus on “rescue architecture” work: fixing dangerous schemas, stabilizing hot paths, removing app-level complexity, and making Postgres scale without rewriting the product.

Open to consulting or full-time roles where data is core to the business and performance/architecture matters.

Résumé: https://www.linkedin.com/in/eduardo-bellani/ https://ebellani.github.io/
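
A minimal sketch of what that kind of partitioning rescue can look like in Postgres, assuming declarative range partitioning on a timestamp column; the table and column names below are invented for illustration and are not from the systems described above:

    -- Hypothetical append-heavy table converted to declarative range
    -- partitioning so indexes stay small and old data can be retired
    -- as a metadata-only operation instead of a huge DELETE.
    CREATE TABLE events (
        id         bigserial,
        account_id bigint NOT NULL,
        payload    jsonb  NOT NULL,
        created_at timestamptz NOT NULL DEFAULT now(),
        PRIMARY KEY (id, created_at)   -- the partition key must be part of the PK
    ) PARTITION BY RANGE (created_at);

    -- One partition per month; in practice these are created ahead of time
    -- by a scheduled job or an extension such as pg_partman.
    CREATE TABLE events_2025_01 PARTITION OF events
        FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
    CREATE TABLE events_2025_02 PARTITION OF events
        FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');

    -- An index defined on the parent cascades to every partition.
    CREATE INDEX ON events (account_id, created_at);

    -- Retiring a month of data later becomes:
    ALTER TABLE events DETACH PARTITION events_2025_01;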


> This is the reason for the push-back against it.

Do you have evidence for that? From memory, it was basically because it was associated with the Java/.NET bloat of the early 2000s. Then Ruby on Rails came along.


I think that's basically the same reason, right? XML itself is bloated if you use it as a format for data that is not marked-up text, so it comes with bloated APIs (which were pushed by Java/.NET proponents). I believe that if XML had been kept to its intended purpose, it would be considered a relatively sane solution.

(But I don't have a source; I was just stating my impression/opinion.)


I have written the entire backend of a fintech using nothing but PostgreSQL, HTTP integration and webhook receipt included (the last bit was with PostgREST, but you get the point).
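
A minimal sketch of how that can be wired up, assuming PostgREST's convention of exposing functions at POST /rpc/<name> and its handling of a single unnamed json parameter as the raw request body; all names below are invented for illustration:

    -- Hypothetical table and function for receiving webhooks through PostgREST.
    CREATE TABLE webhook_events (
        id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        received_at timestamptz NOT NULL DEFAULT now(),
        payload     jsonb NOT NULL
    );

    -- A function with a single unnamed json/jsonb parameter receives the raw
    -- POST body; PostgREST serves it at POST /rpc/receive_webhook.
    -- SECURITY DEFINER runs it with the owner's privileges, so the web-facing
    -- role only needs EXECUTE on the function, not INSERT on the table.
    CREATE FUNCTION receive_webhook(jsonb)
    RETURNS void
    LANGUAGE sql
    SECURITY DEFINER
    AS $$
        INSERT INTO webhook_events (payload) VALUES ($1);
    $$;

    -- The webhook provider is then pointed at something like:
    --   POST https://api.example.com/rpc/receive_webhook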


> Not to mention that perfectly normalizing a database always incurs join overhead that limits horizontal scalability. In fact, denormalization is required to achieve scale (with a trade-off).

This is just not true, at least not in general. Inserting into a normalized design is usually faster, due to smaller index sizes, fewer indexes, and more rows fitting per page.
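
A small illustration of the point, with invented table names: the normalized design keeps the hot table narrow, so each insert maintains fewer and smaller indexes and more rows fit per page, while the denormalized design repeats customer columns on every insert and has to maintain wider indexes:

    -- Normalized: customer attributes live once; the hot orders table stays narrow.
    CREATE TABLE customers (
        id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        name  text NOT NULL,
        email text NOT NULL UNIQUE
    );

    CREATE TABLE orders (
        id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        customer_id bigint NOT NULL REFERENCES customers(id),
        total_cents bigint NOT NULL,
        created_at  timestamptz NOT NULL DEFAULT now()
    );
    CREATE INDEX ON orders (customer_id);

    -- Denormalized: every order insert repeats the customer columns and must
    -- maintain a wider index, so the write path gets heavier, not lighter.
    CREATE TABLE orders_denormalized (
        id             bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        customer_name  text NOT NULL,
        customer_email text NOT NULL,
        total_cents    bigint NOT NULL,
        created_at     timestamptz NOT NULL DEFAULT now()
    );
    CREATE INDEX ON orders_denormalized (customer_email);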

