Hacker News | 9rx's comments

Also easy. The only hard thing found around software is the people.

This is true; it's easy to ship software to 0 users.

Considerate people will be considerate at any time of the day, but there are a lot more checks and balances to keep the inconsiderate at bay at night (e.g. noise bylaws). They soon learn that they have to become considerate. Conversely, anything goes for the inconsiderate morning crowd.

> The agile idea in itself is quite simple: Inspect and Adapt.

Ish. Agile is, ultimately, about removing managers.

    - Individuals over processes
    - Working software over documentation
    - Customer collaboration over contract negotiation
    - Responding to change over following a plan
Developers on a team coming together to inspect and adapt, as you say, is a necessary function when you don't have a manager to do it for you, hence its inclusion in the Twelve Principles. Each of the twelve presents a function that has to be considered when there is no manager to do the work for you.

Of course, this point from the Twelve Principles is always Agile's sticking point: "Build projects around motivated individuals." In the real world, businesses don't want to hire motivated individuals who will drive projects; they want to hire many cheap, replaceable commodities along with just one motivated manager to whip them into shape. That is what that Jira stuff mentioned in the earlier comment is all about.


> You certainly wouldn’t let your contractor improve things on your dime

Where you have a contractor hired on a full-time basis with the intent to build the best house, or at least the most moated house, on the market, so that all the people of the world come to live in your house and not someone else's, of course you would.


Your example worsens your position. I do not want my home builder going over budget and over time without telling me, only finding out after their “continual improvement” that I won’t be moving into my house. Worse, because they believe only they’re able to know what’s best.

In your fictional scenario of unlimited budget and time, sure I grant that an expert should work unguided.


> Your example worsens your position.

Worsens what position?

> Worse, because they believe only they’re able to know what’s best.

Well, you certainly wouldn't hire a contractor if you knew better, would you? That would be pointless. The whole reason for hiring a contractor, instead of hiring the same laborers the contractor will go on to hire on your behalf anyway, is that the contractor brings the expertise you lack. If you can't trust them to know better than you, why bother? It just becomes an unnecessary expense and a waste of another person's time.


In practice what you're suggesting is that you'll let your home builder run over time and over budget. Intervention on your behalf would be an unnecessary expense and a waste of the builder's time. They know better, and that would be pointless.

Your position is that employers are building the best house with the biggest moat to attract the entire world, with assumed endless time and budget, and that there are no bounds to be set with employees.


> you’re suggesting is that you’ll let your home builder run over time and over budget.

If you thought your home builder was going to run over time and budget, you wouldn't hire him in the first place. Those who can't find the necessary trust don't build homes. There are plenty of used homes out there to buy.


Agile is really about removing managers. The twelve principles do encourage short development cycles, but that's to prevent someone from going off into the weeds when there is no manager to tell them to stop.

If you bought and owned it, you could sell it to another auto manufacturer for some pretty serious amounts of money.

In reality, you acquired a license to use it. Your liability should only go as far as you have agreed to indemnify the licensor.


You can actually do that. Except that they could just buy one themselves.

Companies exist that buy cars just to tear them down and publish reports on what they find.


> Companies exist that buy cars just to tear them down and publish reports on what they find.

What does it mean to tear down software, exactly? Are you thinking of something like decompilation?

You can do that, but you're probably not going to learn all that much, and you still can't use it in any meaningful sense as you never bought it in the first place. You only licensed use of it as a consumer (and now that it is subscription-only, maybe not even that). If you have to rebuild the whole thing yourself anyway, what have you really gained? It's not exactly a secret how the technology works, only costly to build.

> Except that they could just buy one themselves.

That is unlikely, unless you mean buying Tesla outright? Getting a license to use it as a manufacturer is much more realistic, but still a license.


Check out Munro and Associates. I'm not talking about software. The whole car.

For what reason?

In case you have forgotten, the discussion is about self-driving technology, and specifically Tesla's at that. The original questioner asked why he is liable when it is Tesla's property that is making the decisions. Of course, the most direct answer is because Tesla disclaims any liability in the license agreement you must agree to in order to use said property.

Which has nothing to do with an independent consulting firm or "the whole car" as far as I can see. The connection you are trying to establish is unclear. Perhaps you pressed the wrong 'reply' button by mistake?


I started responding to this. I interpreted it to be referring to the whole car.

> Yep, you bought it, you own it, you choose to operate it on the public roads. Therefore your liability.


> It’s funny that perfect capitalism (no payroll expenses) means nobody has money to actually buy any of the goods produced by AI.

When you remember that profit is the measure of unrealized benefit, and look at how profitable capitalists have become, it's not clear if, approximately speaking, anyone actually has the "money" to buy any goods now.

In other words, I am not sure this matters. Big business is already effectively working for free, with no realistic way to ever actually derive the benefit that has been promised to it. In theory those promises could be called, but what are the people going to give back in return?


Can you please dig into this more deeply or suggest somewhere in which I can read more?

The economy in the 21st-century developed world is mostly about acquiring positional goods: "products and services valued primarily for their ability to convey status, prestige, or relative social standing rather than their absolute utility".

We have so much wealth that wealth accumulation itself has become a type of positional good as opposed to the utility of the wealth.

When people in the developed world talk about the economy they are largely talking about their prestige and social standing as opposed to their level of warmth and hunger. Unfortunately, we haven't separated these ideas philosophically so it leads to all kinds of nonsense thinking when it comes to "the economy".


Money is an IOU; debt. People trade things of value for money because you can, later, call the debt and get the exchanged value that was promised in return (food, shelter, yacht, whatever). I'm sure this is obvious.

I am sure it is equally obvious that if I take your promise to give back in kind later when I give you my sandwich, but never collect on it, that I ultimately gave you my sandwich for free.

If you keep collecting more and more IOUs from the people you trade your goods with, realistically you are never going to be able to convert those IOUs into something real. Which is something that the capitalists already contend with. Apple, for example, has umpteen billions of dollars worth of promises that they have no idea how to collect on. In theory they can, but in practice it is never going to happen. What don't they already have? Like when I offered you my sandwich, that is many billions of dollars worth of value that they have given away for free.

Given that Apple, to continue to use it as an example, has been quite happy effectively giving away many billions of dollars worth of value, why not trillions? Is it really going to matter? Money seems like something that matters to peons like us because we need to clear the debt to make sure we are well fed and kept warm, but for capitalists operating at scales that are hard for us to fathom, they are already giving stuff away for free. If they no longer have the cost of labor, they can give even more stuff away for free. Who — from their perspective — cares?


Money is less about personal consumption and more about a voting system for physical reality. When a company holds billions in IOUs, they are holding the power to decide what happens next. That capital allows them to command where the next million tons of aluminum go, which problems engineers solve, and where new infrastructure is built.

Even if they never spend that wealth on luxury, they use it to direct the flow of human effort and raw materials. Giving it away for free would mean surrendering their remote control over global resources. At this scale, it is not about wanting more stuff. It is about the ability to organize the world. Whether those most efficient at accumulating capital should hold such concentrated power remains the central tension between growth and equality.


The gap for me was mapping [continuing to hoard dollars] to [giving away free goods/services], but it makes sense now. I haven't given economics thought at this level. Thank you!

It's really simple: if you crash the market and you are liquid you can buy up all of the assets for pennies. That's pretty much the playbook right now in one part of the world, just the same happened in the former Soviet Union in the 90's.

I get (and got) that. My focus was specifically on: "its not clear if, approximately speaking, anyone actually has the 'money' to buy any goods now."

Cause it’s mostly bought on credit now, not with cash

> You still gotta understand what you're doing.

Of course, but how do you begin to understand the "stochastic parrot"?

Yesterday I used LLMs all day long and everything worked perfectly. Productivity was great and I was happy. I was ready to embrace the future.

Now, today, no matter what I try, everything LLMs have produced has been a complete dumpster fire and waste of my time. Not even Opus will follow basic instructions. My day is practically over now and I haven't accomplished anything other than pointlessly fighting LLMs. Yesterday's productivity gains are now gone, I'm frustrated, exhausted, and wonder why I didn't just do it myself.

This is a recurring theme for me. Every time I think I've finally cracked the code, next time it is like I'm back using an LLM for the first time in my life. What is the formal approach that finds consistency?


You're experiencing throttling. Use the API instead and pay per token.

You also have to treat this as outsourcing labor to a savant with a very, very short memory, so:

1. Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere. Keep a text editor open with your work contract, edit the goal at the bottom, and then fire off your reply.

2. Instruct the model to keep a detailed log in a file and, after a context compaction, instruct it to read this again.

3. Use models from different companies to review one another's work. If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.

4. Build a mental model for which models are good at which tasks. Mine is:

  4a. Mathematical Thinking (proofs, et al.): Gemini DeepThink

  4b. Software Architectural Planning: GPT5-Pro (not 5.1 or 5.2)

  4c. Web Search & Deep Research: Gemini 3-Pro

  4d. Technical Writing: GPT-4.5

  4e. Code Generation & Refactoring: Opus-4.5

  4f. Image Generation: Nano Banana Pro
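Tip 1 can be made concrete with a small helper that keeps the fixed guardrails at the top of the "work contract" and swaps only the goal at the bottom. This is a minimal sketch; the function name and the example rules are made up, not part of any vendor SDK:

```python
def build_prompt(guardrails, goal):
    # Assemble a "government work contract" style prompt: the invariant
    # rules go first, the editable task goes last, so between requests
    # only the final section changes (hypothetical helper, per tip 1).
    lines = ["You must follow every rule below. Rules:"]
    lines += [f"- {rule}" for rule in guardrails]
    lines += ["", "Task:", goal]
    return "\n".join(lines)

# Example: guardrails stay constant; only the goal changes per request.
prompt = build_prompt(
    ["Do not modify files outside src/", "Run the test suite before finishing"],
    "Add input validation to the parser",
)
```

Keeping this template in an open text editor, as suggested above, means every prompt ships with the same guardrails instead of relying on memory.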

> You're experiencing throttling. Use the API instead and pay per token.

That was using pay per token.

> Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere.

That is what I was doing yesterday. Worked fantastically. Today, I do the very same thing and... Nope. Can't even stick to the simplest instructions that have been perfectly fine in the past.

> If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.

As mentioned, I tried using Opus, but it didn't even get the point of producing anything worth reviewing. I've had great luck with it before, but not today.

> Instruct the model to keep a detailed log in a file and, after a context compaction

No chance of getting anywhere close to needing compaction today. I had to abort long before that.

> Build a mental model for which models are good at which tasks.

See, like I mentioned before, I thought I had this figured out, but now today it has all gone out the window.


Drives me absolutely crazy how lately any time I comment about my experience using LLMs for coding that isn’t gushing praise, I get the same predictable, condescending lecture about how I'm using it ever so slightly wrong (unlike them) which explains why I don't get perfect output literally 100% of the time.

It’s like I need a sticky disclaimer:

  1. No, I didn’t form an outdated impression based on GPT-4 that I never updated, in fact I use these tools *constantly every single day* 
  2. Yes, I am using Opus 4.5
  3. Yes, I am using a CLAUDE.md file that documents my expectations in detail
  3a. No, it isn’t 20000 characters or anything
  3b. Yes, thank you, I have in fact already heard about the “pink elephant problem”
  4. Yes, I am routinely starting with fresh context
  4a. No, I don’t expect every solution to be one-shotable 
  5. Yes, I am still using Opus fucking 4.5 
  6. At no point did I actually ask for Unsolicited LLM Tips 101.
Like, are people really suggesting they never, ever get a suboptimal or (god forbid) completely broken "solution" from Claude Code/Codex/etc?

That doesn't mean these tools are useless! Or that I’m “afraid” or in denial or trying to hurt your feelings or something! I’m just trying to be objective about my own personal experience.

It’s just impossible to have an honest, productive discussion if the other person can always just lob responses like “actually you need to use the API not the 200/mo plan you pay for” or “Opus 4.5 unless you’re using it already in which case GPT 5.2 XHigh / or vice versa” to invalidate your experience on the basis of “you’re holding it wrong” with an endlessly slippery standard of “right”.


When I wrote my reply I was not familiar with the existing climate of LLM-advice-as-a-cudgel that you describe.

> to invalidate your experience on the basis of “you’re holding it wrong”

This was not my intent in replying to 9rx. I was just trying to help.


GP didn’t, but I’ve found the tips that you’ve shared helpful, so thank you for taking the time.

Nonsense. I ran an experiment today: trying to generate a particular kind of image.

It's been 12 hours and all the image gen tools failed miserably. They are only good at producing surface level stuff; anything beyond that? Nah.

So sure, if what you do is surface level (and crap, in my opinion) of course you will see some kind of benefit. But if you have any taste (which I presume you don't) you would handily admit it is not all that great and that the amount invested makes zero sense.


> if what you do is surface level (and crap in my opinion)

I write embedded software in C for a telecommunications research laboratory. Is this sufficiently deep for you?

FWIW, I don't use LLMs for this.

> But if you have any taste (which I presume you dont)

What value is there to you in an ad hominem attack here? Did you see any LLM evangelism in my post? I offered information based on my experience to help someone use a tool.


Not likely. The alternative was for them to modify SQLite without the test suite and no obvious indication of what they would need to do to try to fill in the gaps. Modifying SQLite with its full test suite would be the best choice, of course, but one that is apparently[1] not on the table for them. Since they have to reimagine the test suite either way, they believe they can do a better job if the tests are written alongside a new codebase.

And I expect they are right. Trying to test a codebase after the fact never goes well.

[1] With the kind of investment backing they have you'd think they'd be able to reach some kind of licensing deal, but who knows.


I don't get this. In their own Rust implementation they have to write and use their own tests, and they still don't have access to the proprietary SQLite tests. So their implementation will necessarily be whatever they implement + whatever passes their tests. Same as it would be if they forked SQLite in C. (Plus they would have the open source tests.) Am I missing something?

You are missing that HN accounts needlessly overthink everything, perhaps?

Otherwise, I doubt it. They have to write the tests again no matter what. Given that, there is no downside to reimplementing it while they are at it. All while there is a big upside to doing that: Trying to test something after the implementation is already written never ends well.

That does not guarantee that their approach will succeed. It is a hard problem no matter how you slice it. But trying to reverse engineer the tests for the C version, now that all knowledge of what went into it in the first place is lost, is all but guaranteed to fail. Testing after the fact never ends well. Rewriting the implementation and tests in parallel increases the chances of success.
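One way to picture "tests written alongside the implementation": each unit lands together with a test that pins down its intended semantics while they are still known, rather than being reverse engineered later. A minimal sketch, using a made-up example function (SQL-style NULL-aware equality, the kind of subtle behavior a SQLite rewrite has to preserve):

```python
def sql_eq(a, b):
    # SQL three-valued logic: any comparison involving NULL (None here)
    # is unknown (None), not False. Illustrative only, not Turso's code.
    if a is None or b is None:
        return None
    return a == b

# The test is written in the same change as the implementation, so the
# intent ("NULL never equals NULL") is captured while it is still known.
assert sql_eq(1, 1) is True
assert sql_eq(1, 2) is False
assert sql_eq(None, None) is None
```

Writing the assertion after the fact, against a decades-old codebase, would mean guessing whether that None result was intended behavior or an accident.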


> A database that can scale from in-process to networked is badly needed

Why not Postgres? https://pglite.dev


From what I’ve read there’s a pretty sizable performance gap between SQLite and pglite (with SQLite being much faster).

I’m excited to see things improve though. Having a more traditional database, with more features and less historical weirdness on the client would be really cool.

Edit: https://pglite.dev/benchmarks actually not looking too bad.. I might have something new to try!


[flagged]


Did you actually click the link? pglite aims to be embeddable just like sqlite.

pglite runs in wasm so it should be possible to embed it where you want, like sqlite?

Why would I want wasm for an embedded database? It's not a feature, quite an anti-feature frankly.

edit: it looks like pglite is only useful for web apps


> it looks like pglite is only useful for web apps

Where else, other than web apps (herein meaning network-attached database servers that, more often than not, although not strictly so, use HTTP as the transport layer), is there meaningful risk of bumping up against SQLite write contention? If your mobile/desktop app has that problem, it is much more likely that you have a fundamental design issue and not a scaling problem.


I'm not sure I follow what that has to do with embedding a database into your application

In a networked environment, which includes the web, it is typical to expose your database over the network. In the olden days clients started speaking SQL over the network, but there are a number of pitfalls to this approach. SQL was designed for use on mainframes, which, understandably, does not translate to the constraints of the network very well.

To alleviate the pressure of those pitfalls, we started adding middle databases (oft called web apps, API services, REST services, etc.) that proxied the database through protocols more ideal for the realities and limitations of the network. Clients were then updated to use the middle database, allowing the hacks required to make SQL usable over the network to be centralized in one spot, greatly reducing the management burden.

But having two database servers is pretty silly when you think about it. Especially when the "backend" database's protocol isn't suitable for the network[1]. Enter the realization that if you use something like SQLite, you don't need another, separate database server. You can have one database server[2] that speaks a network-friendly API. Except SQLite itself has a number of limitations that make it poorly suited to being the backing engine of your network-first DBMS.

That is what the article is about — Pointing out those limitations, and how Turso plans to overcome them. If your use case isn't "web app", SQLite is already going to do the job just fine.

[1] After all, if it were suited for networks, you wouldn't need the middle service. Clients would already be talking to that database directly instead.

[2] As in one logical database server. In practice, you may use a cluster of servers to provide that logical representation.
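The "middle database" pattern described above can be sketched in miniature: SQL stays centralized in the proxy layer, and clients only ever see a narrow, network-friendly API. The table and endpoint names here (users, get_user) are hypothetical, chosen just to illustrate the shape:

```python
import sqlite3

def make_api(db: sqlite3.Connection):
    # A miniature "middle database": all SQL lives here, in one spot,
    # while clients get a narrow API returning plain, serializable data.
    # In a real deployment these functions would sit behind HTTP routes.
    def get_user(user_id: int):
        row = db.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return None if row is None else {"id": row[0], "name": row[1]}
    return {"get_user": get_user}

# Hypothetical schema and data for the sketch.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'ada')")
api = make_api(db)
```

Because clients call get_user rather than speaking SQL over the wire, the hacks needed to make the database network-usable are centralized in this one layer, which is the management win described above.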


Are you an AI?

Not really, I used it to develop against a "real" postgres database for a node backend app. It worked fine and made it pretty easy to spin up a development/CI environment anywhere you want. Only when inserting large amounts of data you start to notice it is slower than native postgres. I had to stop using it because we required the postgis extension (although there is some movement on that front!).

You don’t want another server but you do want networking?
