Hacker News | Izkata's comments

> that was very difficult for many codebases to upgrade for years.

In case people have forgotten: Python 3.3 through 3.5 (and 3.6, I think) each had to reintroduce something that had been removed, to make the upgrade easier. Jumping from 2.7 straight to 3.3 (or higher, depending on what you needed) was the recommended route because of this; it was less work than going to 3.0, 3.1, or 3.2.


Execution time, not parse time. It's a side effect of function definitions being statements that are executed; it isn't about the list/dict itself. It would happen with any object.

It's still ridiculous. A hypothetical Python 4 would treat function declarations as declarations, not executable statements, with no impact on real-world code except to remove all the boilerplate checks.

There is no such thing as a "function declaration" in Python. The keyword is "def", which is the first three letters of the word "define" (and not a prefix of "declare"), for a reason.

The entire point of it being an executable statement is to let you change things on the fly. This is key to how the REPL works. If I have `def foo(): ...` twice, the second one overwrites the first. There's no need to do any checks ahead of time, and it works the same way in the REPL as in a source file, without any special logic, for the exact same reason that `foo = 1` works when done twice. It's actually very elegant.
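A minimal sketch of that rebinding behaviour, using throwaway names:

```python
# Each `def` is a statement executed when reached; the second
# definition simply rebinds the name, replacing the first.
def foo():
    return 1

def foo():
    return 2

print(foo())  # the later definition wins: 2

# Exactly the same mechanism as rebinding any other name:
x = 1
x = 2
print(x)  # 2
```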

People who don't like these decisions have plenty of other options for languages they can use. Only Python is Python. Python should not become not-Python in order to satisfy people who don't like Python and don't understand what Python is trying to be.


You are describing a completely different language, one that differs in very major ways from Python. You can of course create that, but please don't call it Python 4!

You think so, but then you write a function whose default argument points to some variable that happens to be a list, and now suddenly the semantics of that are... what?

You could just treat argument initialization as an expression that is evaluated every time you call the function. If you have `a=[]`, then it's a new `[]` every time. If `a=MYLIST`, then it's a reference to the same MYLIST. Simple. Most sane languages do it this way; I really don't know why Python has (and maintains) this quirk.
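For reference, here is what current Python actually does with a mutable default (the classic gotcha; names are illustrative):

```python
# The default expression is evaluated once, when the `def` statement
# runs, so the same list object is reused across every call.
def append_once(item, acc=[]):
    acc.append(item)
    return acc

print(append_once(1))  # [1]
print(append_once(2))  # [1, 2] - same list object, not a fresh []
```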

What are the semantics of the following:

    b = ComplexObject(...)
    # do things with b

    def foo(self, arg=b):
        # use b

    return foo
Should it create a copy of b every time the function is invoked? If you want that, right now you can just call b.copy(); whereas if the copy were always created automatically, you could not implement the current behaviour.

Should the semantics of this be any different?

    def foo(self, arg=ComplexObject(...)):
Now imagine:

    ComplexObject = list

I wonder why that kind of ambiguity or complexity even comes to mind at all. Just because Python is weird?

    def foo(self, arg=expression):

could, and should, work as if it were written like this (pseudocode):

    def foo(self, arg?):
        if is_not_given(arg):
            arg = expression

If "expression" is a literal or a constructor call, it'd be evaluated right there and produce a new object; if "expression" is a reference to an object in an outer scope, it'd still be the same object.

It's a simple code transformation with very, very predictable behavior, and most languages with closures and default argument values do it this way. Except Python.
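For what it's worth, that transformation can already be written out by hand in today's Python with a sentinel (the names here are illustrative, not a real API):

```python
# A unique sentinel distinguishes "argument not given" from any
# real value the caller might pass, including None.
_MISSING = object()

def foo(arg=_MISSING):
    if arg is _MISSING:   # the "is_not_given(arg)" check
        arg = []          # the default expression, evaluated per call
    arg.append(1)
    return arg

print(foo())  # [1]
print(foo())  # [1] - a fresh list on each call
```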


What you want is for an assignment in a function definition to be a lambda.

    def foo(self, arg=lambda: expression):
Assignment of unevaluated expressions is not a thing yet in Python and would be really surprising. If you really want that, that is what you get with a lambda.

> most languages with closures and default values for arguments do it this way.

Do these also evaluate function definitions at runtime?


Yes they do. Check Ruby, for example.

Let's not get started on the cached shared object refs for small integers....
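A quick sketch of what that looks like; note this is a CPython implementation detail (small ints in roughly -5..256 are cached), not a language guarantee:

```python
# Small integers come from a shared cache, so equal values can be
# the very same object; larger ints created at runtime usually aren't.
a = int("7")
b = int("7")
print(a is b)       # True in CPython: both are the cached 7

c = int("500")
d = int("500")
print(c == d)       # True: equal values
print(c is d)       # False in CPython: two distinct objects
```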

What realistic use case do you have for caring whether two integers of the same value are distinct objects? Modern versions of Python warn about doing unpredictable things with `is` exactly because you are not supposed to do those things. Valid use cases for `is` at all are rare.

`if v is not None`, as opposed to `if not v`, is one of those use cases if you store 0 or False or an empty list, etc.
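A small sketch of the difference, with an illustrative function name:

```python
# `is not None` distinguishes "no value" from falsy values such as
# 0, False, or []; a plain truthiness check conflates them.
def describe(v):
    if v is not None:
        return "got " + repr(v)
    return "nothing"

print(describe(0))     # "got 0" - `if not v` would have missed this
print(describe([]))    # "got []"
print(describe(None))  # "nothing"
```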

> Valid use cases for `is` at all are rare.

There might not be that many of them, depending on how you count, but they're not rare in the slightest. For example, you have to use `is` in the common case where you want the default value of a function argument to be an empty list.


I assume you refer to the `is None` idiom. That happens often enough, but I count it as exactly one use case, and I think it's usually poorly considered anyway. Again, you probably don't actually want the default value to be an empty list, because it doesn't make a lot of sense to mutate something that the caller isn't actually required to provide (unless the caller never provides it and you're just abusing the default-argument behaviour for some kind of cache).

Using, for example, `()` as a default argument, and cleaning up your logic to not do those mutations, is commonly simpler and more expressive. A lot of the community has the idea that a tuple should represent heterogeneous fixed-length data and a list should be homogeneous; but I consider (im)mutability to be a much more interesting property of types.
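A minimal sketch of the tuple-default approach (illustrative names):

```python
# An immutable empty tuple as the default sidesteps the shared-state
# problem entirely, as long as the function never mutates its input.
def total(extra=()):
    return sum(extra) + 10

print(total())        # 10
print(total((1, 2)))  # 13
```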


Could you expand on this? For example, this works just fine:

    def silly_append(item, orig=[]):
        return orig + [item]
Edit: Oh, I think you probably mean in cases where you're mutating the input list.

Okay, so a little out-of-universe trivia on Dollhouse: it was planned for 5 seasons. Facing the risk of cancellation, the second season and the two Epitaph episodes became a severely compressed telling of 4 seasons of plans. And personally, I think it worked well. Something similar happened to Babylon 5, Manifest, Jericho (IIRC), Firefly (Serenity was the original ending that would have played out over multiple seasons), and the Escaflowne anime: the main plots towards the end got compressed to create a faster pace at the climax while ensuring the story could be finished (though I wasn't a fan of how Manifest ended).

My concern about original writers being involved in reboots is if they want to fill out the story they couldn't tell the first time around and end up with a more standard pacing that's less exciting, and end up getting cancelled before finishing. Then we end up with things like Tru Calling and Dark Matter, which had planned plots they couldn't finish.


Oh wow, I really enjoyed Dollhouse but I didn't know that! I was always confused why Season 2's plot went by so quickly. Thanks.

I think it worked well with the Epitaph episodes being as short as they were. I don’t think I’d have enjoyed that much darkness for many seasons. They were great though, to show sobering consequences of what they were toying with.

My best guess based on another good series with a similar 5-season arc, the 3rd season would have been when the main characters realize Rossum's larger plans (Topher's breakdown when he puts together schematics from the individual Dollhouses, or the visit from one of the execs using Victor's body, for two examples that did make it into the series), 4th season would have been the dark one when everything goes to crap, and 5th season would have been the hopeful restoration of Epitaph One / Two.

The only thing I remember reading for sure is that the 2nd season would have largely been about exploring what was happening to Echo because of what occurred in the 1st season's aired finale (episode "Omega") - which did happen, but I don't remember it dominating the season. It didn't sound like it would have been a dark season, but looking back, it seems like it would have been painfully slow.


I don't use it.

I know my mind fairly well, and I know my style of laziness will result in atrophying skills. Better not to risk it.

One of my co-workers already admitted as much to me around six months ago, and that he was trying not to use AI for any code generation anymore, but it was really difficult to stop because it was so easy to reach for. Sounded kind of like a drug addiction to me. And I had the impression he only felt comfortable admitting it to me because I don't make it a secret that I don't use it.

Another co-worker did stop using it to generate code because (if I'm remembering right) he can tell what it generates is messy for long-term maintenance, even if it does work and even though he's new to React. He still uses it often for asking questions.

A third (this one a junior) seemed to get dumber over the past year, opening merge requests that didn't solve the problem. In a couple of these cases my manager mentioned either seeing him use AI while they were pairing (it looked good enough, so the problems just slipped by) or seeing hints in the merge request in how AI names or structures code.


I've been using ChatGPT to teach myself all sorts of interesting fields of mathematics that I've wanted to learn but never had the time previously. I use the Pro version to pull up as many actual literature references as I can.

I don't use it at all to program despite that being my day job for exactly the reason you mentioned. I know I'll totally forget how to program. During a tight crunch period, I might use it as a quick API reference, but certainly not to generate any code. (Absolutely not saying it's not useful for this purpose—I just know myself well enough to know how this is going to go haha)


How do you get ChatGPT to teach you well? I feel like no matter how dense and detailed I ask it to be, or how much I ask it to elaborate and contextualize topics with their adjacent topics to give me a full holistic understanding, it just sucks at it and always falls short of helping me truly understand and intuit the subject matter.

Yes, this is my experience as well. At some point you would be better off finding something written by a human, because the AI will just take you in circles.

This is an interesting use case, and I want to learn more about your workflow. Do you also use Lean etc. for math proofs?

I’m the same way. But I took a bite and now I’m hooked.

I started using it for things I hate, ended up using it everywhere. I move 5x faster. I follow along most of the time. Twice a week I realize I’ve lost the thread. Once a month it sets me back a week or more.


I repeatedly tried to use LLMs for code but god they suck. I've tried most tools and models and for me it's still way faster to write things by hand.

I'm a magical tool: it's almost as if I knew what I wanted to do! I don't have to spend time explaining and correcting.

Also, a good part of the value of me writing code is that I know the code well and can fix things quickly. In addition, I've come to realize that while I'm coding, I'm mostly thinking about the project's code architecture and technical future. It's not something I'll ever want to delegate I think.


I use AI to discuss and possibly generate ideas and tests, but I make sure I understand everything and type it in myself except for trivial stuff. The main value of an engineer is understanding things. AI can help me understand things better and faster. If I just set up plans for the AI and vibe, human capital is neglected and declines. I don't think there's much of a future if you don't know what you're doing, but there is always a future for people with a deep understanding of problems and systems.

I think you are right; deep understanding of systems and domains will not become obsolete. I foresee some types of developers moving into a more holistic systems design and implementation role if coding itself becomes routinely automated.

The atrophy of manually writing code is certainly real. I'd compare it to using a paper map and a compass to navigate versus, say, Google Maps. I don't particularly care to lose the skill, even though being good at (and enjoying) the programming part of making software was my main source of income for more than a decade. I just can't escape being significantly faster with Claude Code.

> he can tell what it generates is messy for long-term maintenance, even if it does work and even though he's new to React.

When one can generate code in such a short amount of time, logically it is not hard to maintain. You could just re-generate it if you didn't like it. I don't believe this style of argument where it's easy to generate with AI but then you cannot maintain it after. It does not hold up logically, and I have yet to see such a codebase where AI was able to generate it, but now cannot maintain it. What I have seen this year is feature-complete language and framework rewrites done by AI with these new tools. For me the unmaintainable code claim is difficult to believe.


Have you tried using AI-generated code in a non-hobby project? One that has to go to production?

It just hallucinates packages, adds random functions that already exist, and creates new random APIs.

How is that not unmaintainable?


We use it daily in our org. What you’re talking about is not happening. That being said, we have fairly decent mono repo structure, bunch of guides/skills to ensure it doesn’t do it that often. Also the whole plan + implement phases.

If it was July 2025, I would have agreed with you. But not anymore.


I used to experience those issues a lot. I haven't in a while. Between having good documentation in my projects, well-defined skills for routine things, simple-to-use testing tools, and giving it clear requirements, things go pretty smoothly.

I'd say it still really depends on what you're doing. Are you working in a poorly documented language that few people use solving problems few people have solved? Are you adding yet another normal-ish kind of feature in a super common language and libraries? One will have a lot more pain than the other, especially if you're not supplying your own docs and testing tools.

There's also just a difference of what to include in the context. I had three different projects which were tightly coupled. AI agents had a hard time keeping things straight as APIs changed between them, constantly misnaming them and getting parameters wrong and what not. Combining them and having one agent work all three repos with a shared set of documentation made it no longer make mistakes when it needed to make changes across multiple projects.


Yes, all the time. Yes, those go to production. AI has improved significantly the past 2 years, I highly recommend you give it another try.

I don't see the behaviour you describe; maybe your impression comes from online articles, or you're using a local llama model or ChatGPT from 2 years ago. Claude regularly finds and resolves duplicated code, in fact. Let me give you a counter-example: for adding dependencies we run an internal whitelist for AI agents; new dependencies go through this system, as we had similar concerns. In the half year or so that we've run the service, I have never seen any agent used in our organisation or at a client hallucinate a dependency.


So where does your responsibility for this code end? Do you just push to the repo, merge, and that's it, or do you also deploy, monitor, and maintain the production systems? Who handles outages on Saturday night, you or someone else?

FWIW I mainly use Opus 4.6 on the $100/mo Max plan, and rarely run into these issues. They certainly occur with lower-tier models, with increased frequency the cheaper the model is - as for someone using it for a significant portion of their professional and personal work, I don’t really understand why this continues to be a widespread issue. Thoroughly vetting Plan Mode output also seems like an easy resolution to this issue, which most devs should be doing anyways IMO (e.g. `npm install random-auth-package`).

We use it for 100s of projects and what you say hasn't happened for a while.

LLMs rarely if ever proactively identify cleanup refactors that reduce the complexity of a codebase. They do, however, still happily duplicate logic or large blocks of markup, defer imports rather than fixing dependency cycles, introduce new abstractions for minimal logic, and freely accumulate a plethora of little papercuts and speed bumps.

These same LLMs will then get lost in the intricacies of the maze they created on subsequent tasks, until they are unable to make forward progress without introducing regressions.

You can at this point ask the LLM to rewrite the rat’s nest, and it will likely produce new code that is slightly less horrible but introduces its own crop of new bugs.

All of this is avoidable, if you take the wheel and steer the thing a little. But all the evidence I’ve seen is that it’s not ready for full automation, unless your user base has a high tolerance for bugs.

I understand Anthropic builds Claude Code without looking at the code. And I encounter new bugs, some of them quite obvious and bad, every single day. A Claude process starts at 200MB of RAM and grows from there, for a CLI tool that is just a bundle of file tools glued to a wrapper around an API!

I think they have a rat's nest over there, but they're the only game in town, so I have to live with this nonsense.


> And at that point you could have just done those regulations without UBI. Hmm.

Rent control is already a thing, and typically good short term but bad long term: renters don't move out because they can't get such low rent elsewhere, and landlords can't afford repairs so things are left broken. It's a great way to create slums over a few generations.


All of that sounds nice, but the current understanding is what fell out when people started questioning the specifics of how all of that would work.

More than a decade ago I jokingly suggested this as a compromise, but then decided to try it out - I ended up liking it more than any other option and have used it for all my personal stuff ever since.

Also helps with auth failures, I've used it several times with co-workers who can't figure out why their ssh key isn't working. It lists the keys out and some extra information.

We're talking email. They are not only capable of it, but do it all the time.

Your definition of "courier" is personal to you, but not compatible with anything that might be accepted by a judge.

In this case, Tile sent the email, it was delivered seamlessly to the plaintiffs' designated agents, and then it was hidden from the plaintiffs by their designated agents. Those agents are not couriers; as far as the law is concerned, and as far as the law can be concerned, they are the plaintiffs.

Tile has no control over who you make responsible for receiving your mail. As soon as they've gotten your mail to that person, they've done everything that can be done.


Which goes right back to what OP was saying, it's not taking into account how email actually works. The user did not mark it as spam, contrary to what you said earlier - they never even saw it. It's not even close to the same thing as the recipient tossing physical mail into the trash without opening it.

> Not content with winning his day in court, Mr Argarkov is now taking matters further and trying to sue Tinkoff Credit Systems for 24 million rubles (£470,000) over its failure to honour the contract he created. For its part, the bank is counter-suing Mr Argakov for alleged fraud.

> [..]

> The court is set to review Mr Argakov’s case next month.

Followup a few days later, they both withdrew their claims: https://www.themoscowtimes.com/2013/08/14/man-who-outwitted-...

