Brisbane has had this for about the past year as well, and it's much better than the old Go Card system. The one downside in most places with these systems is that there's no easy way to pay for, e.g., children; it would be perfect if you could say "3x tickets on this card" when you tap on and off.


Yes, I’ve only just started trying out Claude Code and I do not mesh well with this method of asking AI to do something, then having to wait a few minutes and come back and check its work.

This leads so easily to distraction, and I find the workflow very boring. If I'm going to use AI, I want to use it in a more integrated way, or in a more limited way, like just querying ChatGPT.

Will still try Claude more but I’m really not a fan so far.


Not trying to be rude here, but that `last_week.md` is horrible to me. I can't imagine having to read it, let alone listen to a computer say it to me. It's so much blah-blah and fluff that reads like a bad PR piece. I'd much rather scan through the last week's commits.

I've found this to be true of AI summaries generally: the writing style is usually terrible, I feel like I can't really trust them to get the facts right, and reading the original text is often faster and better.


Here's a system prompt I tend to use:

    ## Instructions
    * Be concise
    * Use simple sentences. But feel free to use technical jargon.
    * Do NOT overexplain basic concepts. Assume the user is technically proficient.
    * AVOID flattering, corporate-ish or marketing language. Maintain a neutral viewpoint.
    * AVOID vague and/or generic claims which may seem correct but are not substantiated by the context.

This can't completely avoid hallucinations, and it's good to avoid AI for text that's used for human-to-human communication. But it makes AI answers to coding and technical questions easier to read.
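
For context, this is how a system prompt like the above gets wired in; a minimal sketch using the OpenAI Python SDK, where the model name, file path, and user question are just placeholders:

    from openai import OpenAI

    # Load the instructions above as the system message (hypothetical file path).
    SYSTEM_PROMPT = open("instructions.md").read()

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Why is my asyncio task never scheduled?"},
        ],
    )
    print(response.choices[0].message.content)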


> it's good to avoid AI for text that's used for human-to-human communication.

Assuming it is fact checked, why?


Why would I?

The only argument is that it improves the style of writing.

But I am in an ESL environment and no one cares about that.

Even otherwise, why would anyone want to read a decompressed version instead of the "prompt" itself?


Personally, I find it hard to not be insulted by it. If I put thought into a comment or question or request, I don’t want generated nonsense back.

As I say, keep your slop in your own trough.


I felt the same thing about the onboarding. Like, what future are we trying to build for ourselves here, exactly? The kind where, instead of sitting down with a coworker to learn about a codebase, we get an AI-generated PowerPoint to read alone?

I'm so over this timeline.


All of this reads like the UML zeitgeist that was supposed to transform Java and eliminate development 20 years ago.

If this is all ultimately Java but with even more steps, it's a sign I'm definitely getting old. It's the same pattern of non-technical people deceiving themselves into believing they don't need to be technical to build tech, which ultimately results in another 10-20 years of re-learning the same painful lessons.

Let me off this train too, I'm tired already.


> All of this reads like the UML zeitgeist that was supposed to transform Java and eliminate development 20 years ago.

See also 'no-code', 4GLs, 5GLs, etc etc etc. Every decade or so, the marketers find a new thing that will destroy programming forever.


20 years before UML/Java it was "4th Generation Languages" that were going to bring "Application Development Without Programmers" to businesses.

https://en.wikipedia.org/wiki/Fourth-generation_programming_...


And before that it was high-level programming languages, or as we call them today, programming languages.


4GLs were mostly reporting languages, as I remember. Useful ones, too. I still feel we haven't come close to fully utilizing specialized programming languages and toolkits.

Put another way, I am certain that Unity has done more to get non-programmers to develop software than ChatGPT ever will.


I'd argue first prize for that goes to Excel (for a sufficiently broad definition of "develop software").


The mistake was going after programmers, instead of going after programming languages, where the actual problem is.

UML may be ugly and in need of streamlining, but the idea of building software by creating and manipulating artifacts at the same conceptual level we are thinking at any given moment is sound. Alas, we long ago hit a wall in how much cross-cutting complexity we can stuff into the same piece of plaintext code, and we've been painfully scraping along the Pareto frontier ever since: vacillating between large and small functions, wasting time debating the merits of sum types in lieu of exception handling, and hoping that if we throw more CS PhDs into the category theory blender, they'll eventually come up with some heavy-duty super-mapping super-monad that'll save us all.

(I've written a lot about this here in the past; cf. "pareto frontier" and "plaintext single source of truth codebase".)

Unfortunately, it may be too late to fix it properly. Yes, LLMs are getting good enough to just translate between different perspectives/concerns on the fly, and doing the dirty work on the raw codebase for us. But they're also getting good enough that managers and non-technical people may finally get what they always wanted: building tech without being technical. For the first time ever, that goal is absolutely becoming realistic, and already possible in the small - that's what the whole "vibe coding" thing heralds.


I've heard this many times before, but I've never heard an argument that rebuts the plain fact that text is extremely expressive, and basically anything else we try to replace it with is less so. And it happens that making a von Neumann machine do precisely what you want requires a high level of precision. Happy to understand otherwise!


The text alone isn't the problem. It's the sum of:

1) Plaintext representation, that is

2) a single source of truth,

3) which we always work on directly.

We're hitting hard against limits of 1), but that's because we insist on 2) and 3).

The limits of plaintext stop being a problem if we relax either 2) or 3). We need to be able to operate on the same underlying code ("single source of truth") indirectly, through task-specific views that hide the irrelevant and emphasize the important for the task at hand - something that typically changes multiple times a day, sometimes multiple times an hour, for each programmer. The views/perspectives themselves can be plaintext or not, depending on what makes the most sense; the underlying "single source of truth" doesn't have to be, because you're not supposed to be looking at it in the first place (beyond exceptional situations, similar to when you'd be looking at the object code produced by the compiler).

Expressiveness is a feature, but the more you try to express in fixed space, the harder it becomes to comprehend it. The solution is to stop trying to express everything all at once!

N.b. this makes me think of a recent exchange I had on HN; people point out that code is like a blueprint in civil engineering/construction - but in those fields there is never a single common blueprint being worked on. You have different documents for overall structure, material composition, hydrological studies, load analysis, plumbing, HVAC, electrical routing, etc. Multiple perspectives on the same artifacts. You don't see them merged into a single "uber blueprint", which would be the equivalent of how software engineers work with code.


How so? Even just hypertext is more expressive than plain text. So is JSON, or any other data format or programming language which has a string type for that matter.


Those are all still text.


Yes, structured text is a subset of text. That doesn't negate the point made.


Of all the things I read at uni, UML is the thing I've felt the least use for - even when designing new systems. I've had more use for things I never thought I'd need, like Rayleigh scattering and processor design.


I think most software engineers need to draw a class diagram from time to time. Maybe there are a lot of unnecessary details in the UML spec, but it certainly doesn't hurt to agree that a hollow triangle arrowhead means parent/child while a normal arrowhead means composition, with a diamond at the root for ownership.

As the sibling comment says, sequence diagrams are often useful too. I've used them a few times for illustrating messages between threads, and for showing the relationship between async tasks in structured concurrency. Again, maybe there are murky corners to UML sequence diagrams that are rarely needed, but the broad idea is very helpful.


True, but I don't bother with a unified system, just a Mermaid diagram. I work in web though, so perhaps it would be different if I went back to embedded (which I did for only a short while) or something else where a project is planned in its entirety rather than growing organically, reacting to customers' needs, trends, or the whims of management.


I just looked at Mermaid, and it seems about as close to UML as what I meant in my previous comment. Just look at this class diagram [1]: triangle-ended arrows for parent/child, the classic UML class box of name/attributes/methods, stereotypes in <<double angle brackets>>, etc. The text even mentions UML. I'm not a JS dev, so I tend to use PlantUML instead - which is also UML-based, as the name implies.
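
For instance, a minimal Mermaid class diagram (hypothetical classes, purely to illustrate the UML-style notation) looks like this; the hollow triangle renders on the Animal end and the diamond on the Car end:

    classDiagram
        Animal <|-- Dog : inheritance
        Car *-- Engine : composition
        class Animal {
            +String name
            +eat()
        }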

I'm not sure what you mean by "unified system". If you mean some sort of giant data store of design/architecture where different diagrams are linked to each other, then I'm certainly NOT advocating that. "Archimate experience" is basically a red flag against both a person and the organisation they work for IMO.

(I once briefly contracted for a large company and bumped into a "software architect" in a kitchenette one day. What's your software development background, I asked him. He said: oh no, I can't code. D-: He spent all day fussing with diagrams that surely would be ignored by anyone doing the actual work.)

[1] https://mermaid.js.org/syntax/classDiagram.html


The "unified" UML system is referring to things like Rose (also mentioned indirectly several more comments up) where they'd reflect into code and auto-build diagrams and also auto-build/auto-update code from diagrams.


I've been at this 16 years. I've seen one planned project in that 16 years that stuck anywhere near the initial plan. They always grow with the whims of someone.


> I think most software engineers need to draw a class diagram from time to time.

Sounds a lot like RegEx to me: if you use something often then obviously learn it but if you need it maybe a dozen or two dozen times per year, then perhaps there’s less need to do a deep dive outside of personal interest.


UML was a buzzword, but a sequence diagram can sometimes replace a few hundred words of dry text. People think best in 2d.


Sure, but you're talking "mildly useful", rather than "replaced programmers 30 years ago, programmers don't exist anymore".

(Also, I'm _fairly_ sure that sequence diagrams didn't originate with UML; it just adopted them.)


>People think best in 2d.

No, they don't; some people do. Others think best in sentences, paragraphs, and sections of structured text. Diagrams mean next to nothing to me.

Some graphs, as in representations of actual mathematical graphs, do have meaning though, if a graph is really the best data structure to describe a particular problem space.

on edit: added in "representations of" as I worried people might misunderstand.


FWIW, you're likely right here; not everyone is a visual thinker.

Still, what both you and GP should be able to agree on is that code - not pseudocode, simplified code, draft code, but actual code of a program - is one of the worst possible representations to be thinking and working in.

It's dumb that we're still stuck with this paradigm; it's a great lead anchor chained to our ankles, preventing us from being able to handle complexity better.


> code - not pseudocode, simplified code, draft code, but actual code of a program - is one of the worst possible representations to be thinking and working in.

It depends on the language. In my experience, well-written Lisp with judicious macros can come close to fitting the way I think of a problem. But some language with tons of boilerplate? No, not at all.


As a die-hard Lisper, I still disagree. Yes, Lisp can go further than anything else to eliminate boilerplate, but you're still locked into a single representation. The moment you switch to a different task - especially one that actually cares about the boilerplate you've hidden rather than the logic you've exposed - you're fighting an even harder battle.

That's what I mean by the Pareto frontier: the choices made by various current-generation languages and coding methodologies (including the choices you make as a macro author) all promote readability for some tasks at the expense of readability for others. We're just shifting the difficulty to a different time of day, not actually eliminating it.

To break through that and actually make progress, we need to embrace working in different, problem-specific views, instead of on the underlying shared single-source-of-truth plaintext code directly.


IMHO there's usually a lot of necessary complexity that is irrelevant to the actual problem: logging, observability, error handling, authn/authz, secret management, adapting data to interfaces for passing to other services, etc.

Diagrams and pseudocode allow us to push those inconveniences into the background and focus on the flows that matter.


Precisely that. As you say, this complexity is both necessary and irrelevant to the actual problem.

Now, I claim that the main thing that's stopping advancement in our field is that we're making a choice up front on what is relevant and what's not.

The "actual problem" changes from programmer to programmer, and from hour to the next. In the morning, I might be tweaking the business logic; at noon, I might be debugging some bug across the abstraction layers; in the afternoon, I might be reworking the error handling across the module, and just as I leave for the day, I might need to spend 30 minutes discussing architecture issue with the team. All those things demand completely different perspectives; for each, different things are relevant and different are just noise. But right now, we're stuck looking at the same artifact (the plaintext code base), and trying to make every possible thing readable simultaneously to at least some degree.

I claim this is a wrong approach that's been keeping us stuck for too long now.


I'd love this to be possible. We're analyzing projections from the solution space to the understandability plane when discussing systems - but going the other way, from all existing projections to the solution space, is what we do when we actually build software. If you're saying you want to synthesize systems from projections, LLMs are the closest thing we've got and... it maybe sometimes works.


Yeah, LLMs seem like they'll allow us to side-step the difficult parts by synthesizing projections instead of maintaining them. I.e. instead of having a well-defined way to go back and forth between a specific view and underlying code (e.g. "all the methods in all the classes in this module, as a database", or "this code, but with error handling elided", or "this code, but only with types and error handling", or "how components link together, as a graph", etc.), we can just tell LLMs to synthesize the views, and apply changes we make in them to the underlying code, and expect that to mostly work - even today.

It's just a hell of an expensive way to get around doing it. But then maybe at least a real demonstration will convince people of the utility and need of doing it properly.

But then, by that time, LLMs will take over all software development anyway, making this topic moot.


OK, but my reference to sentences, paragraphs, and sections was about documentation, not code.


Oops, evidently I got downvoted because I don't think best in 2D and that is bad. Classy as always, HN.


Lmao I remember uni teaching me UML. Right before I dropped out after a year because fuck all of that. It's a shame because some of the final year content I probably would've liked.

But I just couldn't handle it when I got into like COMP102 and in the first lecture, the lecturer is all "has anybody not used the internet before?"

I spent my childhood doing the stuff so I just had to bail. I'm sure others would find it rewarding (particularly those that were in my classes because 'a computer job is a good job for money').


Yes, that's what gets me too. I want to engage with my coworkers, you know, other humans? And get their ideas and input and summaries. Not just sit in my office alone having the computer explain everything to me badly, or reading through PowerPoints of all things...


> I want to engage with my coworkers, you know, other humans?

I.e. the very species we try to limit our contact with, which is why we chose this particular field of work? Or are you from the generation that joined software for easy money? :).

/s, but only partially.

There are aspects of this work where to "engage with my coworkers" is to be doing the exact opposite of productive work.


Naw, the new future (technically the present for orgs that use AI intelligently) is:

The AI has already generated comprehensive README.md files and detailed module/function/variable doc comments (as needed), which you could read, but which mostly end up being consumed by another AI. So you can just tell it what you're trying to do and ask it how you might accomplish that in the codebase - first at a conceptual level, then in code, once you feel comfortable enough with the system to be able to validate the work.

All the while you're sitting next to another coworker who's doing the same thing, while you talk about high-level architecture stuff, make jokes, and generally have a good time. Shit, I don't even mind open offices as much as I used to, because you don't need that intense focus to get into a groove to produce code quickly like you did when writing it manually, so you can actually have conversations with an entire table of coworkers and still be super productive.

No comment on the political/climate side of this timeline, but the AI part is pretty good when you master it.


What kind of stuff are you building where that is even remotely possible? I get that generating documentation works fine, but building features just isn't there yet for non-trivial apps, and don't even get me started on trying to get the agents to backtrack and change something they did.


I have them backtrack all the time, including rewriting models and the underlying DB, then reworking from the ground up.

Another approach: I'll dictate how an API SHOULD work, or even go nuclear and write the code I want to work, and tell them they must make the test pass and can't change what I wrote. They take these constraints well, IME.
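
For example, a hypothetical sketch of that second approach, where the test file is written by hand and declared off-limits, and the agent has to implement a made-up `parse_price` function to satisfy it:

    # test_prices.py - written by hand; the agent is told it may not edit this file.
    import pytest

    from prices import parse_price  # hypothetical module the agent must implement


    def test_parses_currency_string():
        assert parse_price("$1,234.50") == 1234.50


    def test_rejects_garbage():
        with pytest.raises(ValueError):
            parse_price("not a price")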


Usually the tricks and problems in a codebase are not in the codebase at all, they are in somebody's head.

It would be helpful if I had a long rambling dialogue with a chat model and it distilled that.


> It would be helpful if I had a long rambling dialogue with a chat model and it distilled that.

IME this can work pretty well with Gemini in the web UI. If it misinterprets you at any stage you can edit your last comment until it gets on the same page, so to speak. Then once you're to a point in the conversation where you're satisfied it seems to "get it", you can drop in some more directly relevant context like example code if needed and ask for what you want.


Yup, you can tell it's an LLM just from the ridiculous output most of the time: 8-20 sentences minimum, for the most basic thing.

Even Gemini/GPT-4o/etc. are all guilty of this. Maybe they'll tighten things up at some point - if I ask an assistant a simple question like "is it possible to put apples into a pie?", what I want is "Yes, it is possible to put apples into a pie. Would you like to know more?"

But not "Yes, absolutely — putting apples into a pie is not only possible, it's classic! Apple pie is one of the most well-known and traditional fruit pies. Typically, sliced apples are mixed with sugar, cinnamon, nutmeg, and sometimes lemon juice or flour, then baked inside a buttery crust. You can use various types of apples depending on the flavor and texture you want (like Granny Smith for tartness or Honeycrisp for sweetness). Would you like a recipe or tips on which apples work best?" (from gpt4).


Yeah I was done at "What happened here was more than just code..." -_-


You got past the grey text on gray background? -_-


I didn't. I open up Chrome's Developer Tools and drop this into the console:

    document.body.style.backgroundColor = "black";


> Python, a journey that began with an initial commit and evolved through a series of careful refinements to establish a robust foundation for the project...

Wow yeah what a waste. That is exactly the opposite of saving time.


You can specify the desired style in the prompt. The author seems to like PR-sounding fluff with their morning coffee.


If this was meant to be read, I might've agreed, but:

1) This was supposed to be piped through TTS and listened to in the background, and...

2) People like podcasts.

Your typical podcast is much worse than this. It's "blah blah" and "hahaha <interaction>" and "ooh <emoting>" and "<irrelevant anecdote>" and "<turning facts upside down and injecting a lie for humorous effect>", and maybe some of the actual topic mixed in between, and yet for some reason, people love it.

I honestly doubt this specific thing would be useful for me, but I'm not going to assume it's plain dumb, because again, podcasts are worse, and people love them.


What kind of podcast have you listened to, if any?

They aren't all Joe Rogan.


Name one that isn't > 90% fluff and human interaction sounds.


Radiolab, 99% Invisible, Revisionist History, Everything Is Alive.


Conversations with Tyler Cowen, Complex Systems with patio11 are two off the top of my head that concentrate on useful information, and certainly aren't "> 90% fluff and human interaction sounds".

Unless of course people talking in any capacity is human interaction sounds, in which case, yes, every podcast is > 90% human interaction sounds.


Thanks. I didn't realize 'patio11 even has a podcast, I'll definitely want to listen to that one.

> Unless of course people talking in any capacity is human interaction sounds, in which case, yes, every podcast is > 90% human interaction sounds.

No, I specifically mean all the things that are not content - hellos, jokes, emoting, interrupting, exchanging filler commentary, etc. It may add character to the show, but from the POV of efficiently summarizing a topic, it's fundamentally even worse than the enterprisey BS fluff in the example in question.


There is a whole subcategory of wonkish podcasts, of which I consider patio11's to be the gold standard, where it is just two people having an information-dense discussion. They don't tend to make it as far up the charts as the tech-bro podcasts, but once you find them, they are GOLD.


Remember the sycophancy bug? Maybe making the user FEEL GOOD is part of what makes it seem smart, or like a good experience. Is the reward function being smart? Is it maximizing interaction? Does it conflict with being accurate?


I ran the prompt as-is on one of the main repos that I work on, and the sycophancy was cloying.

It praised so many things that I would just consider table stakes, and made simple tweaks or features sound like massive projects.

I'm sure it could be improved by tweaking the prompt, and there were parts I found impressive (specifically, things it picked out that weren't in commit messages), but I found it unusable in its current form.


Yeah, I honestly don't know how anyone can put up with reading this sort of thing, much less have it read to them by a computer(!)

I suppose preferences differ, but really, does anyone _like_ this sort of writing style?


I agree, it's atrocious!

1. I shouldn't have used a newly created repo that had no real work over the course of the last week.

2. I should have put more time into the prompt to make it sound less like nails on a chalkboard.


Yes, I find this is the main benefit of Google Maps navigation now: predicting and avoiding traffic whenever possible.


Waze is far, far superior at that.


I thought Waze was owned by Google though? I assumed they would use Waze’s traffic data.


As crazy as it sounds, they don't. You can open them side by side and you'll often see very different routes.


Same, I switched back to a single 27" screen last year. For me it's better to focus on one thing at a time, especially since my eyes aren't the best, and I switch between virtual desktops with F1-F4 (or, when I use my Mac, with the three-finger swipe gesture).


macOS also has Ctrl+Left/Right for switching virtual desktops. The gesture can get a bit tedious if you're jumping across multiple desktops in one go; I don't think it's particularly ergonomic either.


I loved them both for different reasons; I went last year for the first time. As another commenter said, Madrid felt very imperial, and as you say, it was beautiful, clean, and walkable. Barcelona felt more arty and had a great coastal vibe. I would go back to either in a heartbeat!


Same, I really don't like HAML or Jade or other such things as an HTML replacement. I just never saw the point; it doesn't seem like less work than just using HTML?


Definitely try to power through Mad Men. I tried to watch it a few times when I was younger but never got past season 3, whereas season 4 is where the real heart and message of it show through, and the character development reaches new peaks. It really does seem like a show that resonates more when you're older.


I feel the same way about the one here in Australia -- I always double-check that it's scanned the correct numbers. Haven't been cheated out of millions yet, unfortunately.


Disclaimer: I work at Discourse. We discuss all our work on an internal Discourse forum; it makes everything much easier to track, and long-form, slow-lane discussion is encouraged and baked into Discourse.

We also have chat built in now, with a strong emphasis on interoperability between chat channels and topics, so discussions can be easily moved between the fast and slow lanes. I love the way we work, and I always feel like communicating with the rest of my colleagues is seamless.

The blog post on our recent 3.0 release goes into this more if you're interested: https://blog.discourse.org/2023/01/discourse-3-0-is-here/


Thanks for sharing! I had no idea this was possible. I think this makes things much more interesting.

