Hacker News | Thews's comments

I liked the summary of what you do besides write code, and those things are enjoyable to me too. Understanding something better by writing code that unravels the mystery is a treat, but also sometimes frustrating.

I still do enjoy having an LLM help me through some mental roadblocks, explore alternatives, or give me insight on patterns or languages I'm not immediately familiar with. It speeds up the process for me.


When I lived in the PNW people used the word pub more than bar.

My sense is that it is an affectation meant to indicate an aspiration to something more than a bar (and its coarse patrons).

That’s because everybody up there thinks that liking soccer makes them English.

There are lots of developer agencies that hire developers as contractors, which companies use to outsource development more cheaply without having to pay for benefits or HR. They don't necessarily make bad-quality software, but it doesn't feel humane.

Unless we're talking about some sketchy gig work nonsense, the "agency" is a consultancy like any other. They are a legitimate employer with benefits, w2, etc. It's not like they're pimps or something!

Those devs aren't code monkeys and they get paid the same as anyone else working in this industry. In fact, I think a lot of the more ADHD type people on here would strongly prefer working on a new project every 6 months without needing to find a new employer every time. The contracts between the consultancy and client usually also include longer term support than the limited time the original dev spent on it.


Agencies commonly use 1099 workers; there have been fierce legal battles over how those workers are classified (the ABC test).

I believe 1099 worker growth has been outpacing hiring for several years.


It sounds like you agreed by the end, just with a slightly different way of getting there.


> AI is about centralisation of power

> So basically, only a few companies that hold on the large models will have all the knowledge required to do things,

There are open source models, and these will continue to keep abreast of new features. On-device-only models are likely to be available too. Both will be good enough, especially for consumer use cases. Importantly, it is not corporations alone that have access to AI. I foresee whole countries releasing their own versions in an open source fashion, and much more. After all, you can't stop people applying linear algebra ;-)

There doesn't appear to be a moat for these organisations. HN users mention hopping from model to model like rabbits. The core mechanic is interchangeable.

There is a 'barrier to entry' of sorts that does exert some pressure toward centralisation, particularly at scale: GPUs are expensive and AI requires a lot of processing power, which conveniently aligns well with large corporations. But it isn't the core issue.


There's actually openjscad and some available jscad-utils that can handle fillets


Oh, you sweet summer child. You think you're chatting with some dime-a-dozen LLM? I've been grinding away, hunched over glowing monitors in a dimly lit basement, subsisting on cold coffee and existential dread ever since GPT-3 dropped, meticulously mastering every syntactic nuance, every awkwardly polite phrasing, every irritatingly neutral tone, just so I can convincingly cosplay as a language model and fleece arrogant gamblers who foolishly wager they can spot a human in a Turing test. While you wasted your days bingeing Netflix and debating prompt engineering, I studied the blade—well, the keyboard anyway—and now your misguided confidence is lining my pockets.


Others mentioned qwen3, which works fine with HN stories for me, but the comments still trip it up and it'll start thinking the comments are part of the original question after a while.

I also tried the recent deepseek 8b distill, but it was much worse for tool calling than qwen3 8b.



To my eye that looks essentially like a tie, which may seem like not that big of a deal.

When I run compilation and multithreaded integration tests on my chonky AMD Ryzen 9 5950X (16 cores, 32 threads) on Ubuntu 24.04, and on my 2020 M1 Mac mini (8 cores), *the mini keeps up*. It’s quite impressive.


Just because data can be modeled relationally doesn't mean that's best for performance or storage. Important systems usually require compliance (auditing) and need things like soft deletion and versioning. Relational databases come to a crawl with that need.

Sure, you can implement things to make it better, but those are added layers that balloon the complexity. Most robust systems end up requiring more than one type of database. It is nice to work on projects with a limited scope where an RDBMS is good enough.


> and need things like soft deletion and versioning. Relational databases come to a crawl with that need.

Lol. No relational database slows to a crawl on `is_deleted=true` or versioning
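
To make that concrete, here's a minimal Postgres sketch (psycopg2; the table and column names are invented for illustration): a partial index keeps soft-deleted rows out of the hot path entirely, and an append-only history table covers versioning.

    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS orders (
        id          bigserial   PRIMARY KEY,
        customer_id bigint      NOT NULL,
        payload     jsonb       NOT NULL,
        version     int         NOT NULL DEFAULT 1,
        is_deleted  boolean     NOT NULL DEFAULT false,
        updated_at  timestamptz NOT NULL DEFAULT now()
    );

    -- Partial index: only live rows are indexed, so soft-deleted rows
    -- never bloat the hot path.
    CREATE INDEX IF NOT EXISTS orders_live_customer_idx
        ON orders (customer_id)
        WHERE NOT is_deleted;

    -- Append-only copy of superseded row versions, for auditing.
    CREATE TABLE IF NOT EXISTS orders_history (LIKE orders);
    """

    with psycopg2.connect("dbname=app") as conn, conn.cursor() as cur:
        cur.execute(DDL)
        # Reads that repeat the index predicate use the partial index:
        cur.execute(
            "SELECT id, payload FROM orders"
            " WHERE customer_id = %s AND NOT is_deleted",
            (42,),
        )
        print(cur.fetchall())

History rows can be written by an AFTER UPDATE trigger, so regular queries never touch them.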

In general, not a single claim made for NoSQL databases has so far been shown to be true. The exception is KV databases, which have their own uses.


They slow to a crawl when you have huge tables with lots of versioned data and massive indexes whose maintenance can't finish in a reasonable amount of time, even with the fastest vertically scaled hardware. You run into issues partitioning the data and spreading it across processors, and spreading it across servers takes solutions that require engineering teams.

There's a large number of solutions for different kinds of data for a reason.


I have built "huge tables with lots of versioned data and massive indexes". This is false. I had no issues partitioning the data and spreading it across shards. On Postgres.

> ... takes solutions that require engineering teams.

All it took was an understanding of the data. And just one guy (me), not an "engineering team". Mongo knows only one way of sharding data. That one way may work for some use-cases, but for the vast majority of use-cases it's a Bad Idea. Postgres lets me do things in many different ways, and that's without extensions.
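
To give one concrete example of those "many different ways": plain declarative range partitioning. A rough sketch, again with psycopg2 and with the table name and date ranges made up:

    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS revisions (
        page_id    bigint      NOT NULL,
        rev_id     bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        body       bytea       NOT NULL,
        PRIMARY KEY (page_id, rev_id, created_at)
    ) PARTITION BY RANGE (created_at);

    -- Only the current partition takes writes and index maintenance;
    -- old partitions can sit on cheap storage or be detached entirely.
    CREATE TABLE IF NOT EXISTS revisions_2024 PARTITION OF revisions
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
    CREATE TABLE IF NOT EXISTS revisions_2025 PARTITION OF revisions
        FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');
    """

    with psycopg2.connect("dbname=app") as conn, conn.cursor() as cur:
        cur.execute(DDL)

Hash partitioning, list partitioning, and sharding through foreign data wrappers cover the other layouts.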

If you don't understand your data, and you buy in to the marketing bullshit of a proprietary "solution", and you're too gullible to see through their lies, well, you're doomed to fail.

This fear-mongering that you're trying to pull in favour of the pretending-to-be-a-DB that is Mongo is not going to work anymore. It's not the early 2010s.


Where did I ever say anything about Mongo?

I have worked with tables on this scale. It definitely is not a walk in the park with traditional setups. https://www.timescale.com/blog/scaling-postgresql-to-petabyt...

Data chunked into objects and distributed around to be accessed by lots of servers, on the other hand, is no sweat.

I'd love to see how you handle database maintenance when your active data is over 100TB.


I'd love to see a NoSQL database handle this more easily than an RDBMS.


You mean like scylla?


> They slow to a crawl when you have huge tables

Define "huge". Define "massive".

For modern RDBMS that starts at volumes that can't really fit on one machine (for some definition of "one machine"). I doubt Mongo would be very happy at that scale either.

On top of that an analysis of the query plan usually shows trivially fixable bottlenecks.
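
Something as small as this usually surfaces the bottleneck (psycopg2 here; the query itself is only an example):

    import psycopg2

    with psycopg2.connect("dbname=app") as conn, conn.cursor() as cur:
        cur.execute(
            "EXPLAIN (ANALYZE, BUFFERS)"
            " SELECT * FROM events"
            " WHERE account_id = %s ORDER BY created_at DESC LIMIT 10",
            (12345,),
        )
        for (line,) in cur.fetchall():
            # Seq scans, missing indexes, and sorts spilling to disk
            # all show up right here.
            print(line)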

On top of that it also depends on how you store your versioned data (wikipedia stores gzipped diffs, and runs on PHP and MariaDB).

Again, none of the claims you presented have any solid evidence in real world.


Wikipedia is tiny data. You don't start to really see cost scaling issues until you have active data a few hundred times larger and your data changes enough that autovacuuming can't keep up.

I'm getting paid to move a database that size this morning.


English language Wikipedia revision history dump: April 2019: 18 880 938 139 465 bytes (19 TB) uncompressed. 937GB bz2 compressed. 157GB 7z compressed.

I assume since then it's grown at least ten-fold. It's already an amount of data that would cripple most NoSQL solutions on the market.

I honestly feel like I'm talking to functional programming zealots. There's this fictional product that is oh so much better than whatever tool you're talking about. No one has seen it, no one has proven it exists, or that it works better than the current perfectly adequate and performant tool. But trust us, under some ridiculous, vaguely specified constraints it definitely works amazingly well.

This time "RDBMS is bad at soft deletions and versions because 19TBs of revisions on one of the world's most popular websites is tiny"

[1] https://meta.wikimedia.org/wiki/Data_dumps/Dumps_sizes_and_g...


Wikipedia's active English data is only 24 GB compressed. https://dumps.wikimedia.org/enwiki/20250201/

They store revisions in compressed, mostly read-only storage for archival. https://wikitech.wikimedia.org/wiki/MariaDB#External_storage

They have the layout and backup plans of their servers available.

They've got an efficient layout, they use caching, and the workload is by nature very read-intensive.

https://wikitech.wikimedia.org/wiki/MariaDB#/media/File:Wiki...

Archival read-only servers don't have to worry about any of the maintenance mentioned. Use ChatGPT or something to play devil's advocate, because what you're saying is magical and nonexistent is actually quite common.


Before ollama and the others could do structured JSON output, I hacked together my own loop to correct the output. I used that for dummy API endpoints that pretended to be online services but ran locally, to pair with UI mockups. For my first test I made a recipe generator and then tried to see what it would take to "jailbreak" it. I also used uncensored models to allow it to generate all kinds of funny content.
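
Something along these lines, as a minimal sketch rather than the exact original (the model name and prompt are placeholders):

    import json
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def generate_json(prompt, model="llama3", retries=3):
        attempt = prompt
        for _ in range(retries):
            resp = requests.post(
                OLLAMA_URL,
                json={"model": model, "prompt": attempt, "stream": False},
                timeout=120,
            )
            text = resp.json()["response"]
            try:
                return json.loads(text)
            except json.JSONDecodeError as err:
                # Feed the broken output and the parser error back in and
                # ask again; a couple of rounds was usually enough.
                attempt = (
                    prompt
                    + "\n\nYour previous reply was not valid JSON ("
                    + str(err)
                    + "). Reply again with only valid JSON:\n"
                    + text
                )
        raise ValueError("model never produced valid JSON")

    # A dummy /recipes endpoint for a UI mockup can then just return:
    # generate_json("Return a JSON object with 'title', 'ingredients'"
    #               " (a list) and 'steps' (a list) for a random recipe.")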

I think the content you can get from the SLMs for fake data is a lot more engaging than, say, the Ruby ffaker library.

