Nice! Have you figured out how to manage album art with beets-alternatives?


Been fortunate to get to try out Sculptor in pre-release - it's great. Like Claude Code with task parallelism and history built in, all with a clean UI.


Also linked in the article, https://esolangs.org is worth a read if this is up your alley


That wiki also comes with a funny CAPTCHA:

> Which number does this Befunge code output: 9731231181>\#+:#*9-#\_$.@


What exactly drove you nuts? The python ecosystem is very broad and useful, so it might be suitable for the application (if not, reasonable that you'd be frustrated). With strict mypy/pyright settings and an internal type-everything culture, Python feels statically typed IME.
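
For example, with strict settings a mismatch like this is flagged before anything runs (a minimal sketch):

    def greet(name: str) -> str:
        return "hello " + name

    greet(42)  # both `mypy --strict` and pyright reject this: int is not str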


It's not even close compared to working with Java or Go or any language built with static typing in mind.

To be clear, I'm not opposed to type hints. I use them everywhere, especially in function signatures. But the primary advantage to Python is speed (or at least perceived speed but that's a separate conversation). It is so popular specifically because you don't have to worry about type checking and can just move. Which is one of the many reasons it's great for prototypes and fucking terrible in production. You turn on strict type checking in a linter and all that goes away.

Worse, Python was not built with this workflow in mind. So with strict typing on, when types start to get complicated, you have to jump through all kinds of weird hoops to make the checker happy. When I'm writing code just to make a linter shut up something is seriously wrong.
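
To give a contrived sketch of the kind of hoop I mean (names made up):

    from typing import cast

    def load(raw: dict[str, object]) -> int:
        value = raw["count"]
        # the checker only knows `object` here, so you cast purely to
        # satisfy it, even though nothing is actually checked at runtime
        return cast(int, value)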

Trying to add typing to a dynamic language is, in my opinion, almost always a bad idea. Either do what TypeScript did and write a language that compiles down to the dynamic one, or just leave it dynamic.

And if you want types just use a typed language. In a production setting, working with multiple developers, I would take literally almost any statically typed language over Python.


But TypeScript erases (its) types at runtime, exactly like Python. Python is Python's TypeScript. Whether you want TS or JS-like semantics is entirely dependent on whether you use a type checker and whether you consider its errors a build breaker.
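
A tiny illustration, since both languages behave the same way here: with a type checker this is a build breaker, without one it just runs:

    def double(x: int) -> int:
        return x * 2

    print(double("ha"))  # a checker rejects this; plain `python` happily prints "haha"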


I'm not sure what you're trying to say here. If you mean Python's type annotations are erased at runtime... Okay? It still has runtime type information. It's not "erasure" as that term applies to Java, for example. And TypeScript compiles down to JavaScript, so obviously its runtime behavior is going to be the same as JavaScript's.

In my view it's always a mistake to try and tack static typing on top of a dynamic language. I think TS's approach is better than Python's, but still not nearly as good as just using a statically typed language.


The fact that the types are reflected at runtime is what makes FastAPI/Pydantic possible, letting us use Python types to define data models used for serialization, validation, and schema generation. In TypeScript, we have to use something like Zod, instead of "normal" TypeScript types, because the types are not reflected at runtime.
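
A minimal sketch of what that looks like (model and fields made up):

    from pydantic import BaseModel

    class User(BaseModel):
        id: int
        name: str

    # Pydantic reads the annotations at runtime to validate and coerce input
    print(User(id="3", name="Ada"))  # the str "3" is coerced to the int 3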


I think a couple of things have to be untangled here.

The problem we are talking about in both Python and TS comes from the fact that they are (or compile down to) dynamic languages. These aren't issues in statically typed languages... because the code just won't compile if it's wrong, and you don't have to worry about getting data from an untyped library.

I don't know a lot about Zod, but I believe the problem you are referring to is more about JavaScript than TS. JavaScript does a LOT of funky stuff at runtime; Python, thank God, actually enforces some sane type rules at runtime.

My point was not about how these two function at runtime. My point was that if you want to tack static typing onto a dynamic language, TypeScript's approach is the better one, but even it can't fix the underlying issues with JS.

You could take a similar approach in Python. We could make a language called Tython that is statically typed and then compiles down to Python. You eliminate an entire class of bugs at compile time, get a far more reliable experience than the current weirdness with gradual typing and linters, and you still get Python's runtime type information to deal with things like interop with existing Python code.


TypeScript requires a compiler to produce valid JavaScript. Python shoved types into Python 3 without breaking backwards compatibility, I think.

You would never need typing.TYPE_CHECKING in TypeScript, for example, because type hints can't break JavaScript code, something that can happen in Python when adding types creates cyclic imports.
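
For anyone who hasn't hit this, the workaround looks something like this (module names made up):

    from __future__ import annotations
    from typing import TYPE_CHECKING

    if TYPE_CHECKING:
        # only imported while type checking, so the circular import
        # never actually happens at runtime
        from myapp.models import User

    def rename(user: User, new_name: str) -> None:
        user.name = new_name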


No, it doesn't. It doesn't throw errors, but the types are still introspectable in Python, unlike TypeScript.
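
For example:

    import typing

    def f(x: int) -> str:
        return str(x)

    print(typing.get_type_hints(f))  # {'x': <class 'int'>, 'return': <class 'str'>}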


I would say mypy is better than nothing but it still misses things sometimes, and makes some signatures difficult or impossible to write. I use it anyway, but patched-on static typing (Erlang, Clojure, and Racket also have it) seems like a compromise from the get-go. I'd rather have the type system designed into the language.


Mypy is trash but Pyright is very good.


I went from mypy to pyright to basedpyright and just started checking out pyrefly (the OP), and it's very promising. It's written in Rust so it's very efficient.


You know you can just use a compiled language with statically checked types, right?


For the kind of work I'm using Python for (computer vision, ML), not really. The ecosystem isn't there and even when it's possible it would be much less productive for very little gain. Typed Python actually works quite well in my experience. We do use C++ for some hand-written things that need to be fast or use libraries like CGAL, but it has a lot of disadvantages like the lack of a REPL, slow compile times and bad error messages.


Python is the second most popular programming language in the world. It's not that easy to avoid.


You have to force push each time you do this, right? How do your coworkers find the incremental change you made to commit 1 after you force push it, and how do you deal with collaborative branches effectively this way? And if I don't want to work this way and force push, are there other benefits of jj?


the heuristic is 'if you know about rerere and especially if you use it, you should try jj'. if you never force push, you might not see value in jj. (I basically always force push.)


That makes sense, good to know, thanks.

> I basically always force push

How do your colleagues deal with this, or is this mostly on experimental branches or individual projects?


JJ has the concept of "immutable changesets" -- if it sees a commit is referenced from a branch that it's not tracking, it assumes it ought not rebase that commit. Changesets on branches that look like the main branch are immutable too. And you can edit the revset that JJ considers immutable if you need it to be different from the default.

The net effect is that I can change "my" branches as I wish, but I can't change stuff that's been merged or other folks' branches unless I disable the safety features (either using `--ignore-immutable` or tracking the branch).

JJ also makes it really easy to push a single changeset as a branch, which means as you evolve that single commit you can keep the remote updated with your current work really easily. And it's got a specific `jj evolog` command to see how a specific changeset has evolved over time.


It's generally fine if you force push a branch that you're the only one working on. In many projects, there's an expectation that the 'PR Branch' you create in order to make a github pull request is owned by you, and can be rebased/edited/force-pushed at will. It's very common to do things like `git commit --amend --no-edit` to fix a typo or lint issue and then force push to update the last commit.

This has its problems, and there's a reason things like Gerrit are popular in some more sophisticated shops, as they make it much easier to review changes to PRs in response to reviews, for example.


The PRs are either small enough that it isn’t a problem or large enough that it isn’t a problem… the odd in-between PR experience sucks and it’s one of the cases when I sometimes add more commits instead of force pushing.

+1 to sibling gerrit recommendation; I used to use it a decade ago and it was better then than GitHub PRs today.


People barely ever work off my branches.


IIRC it's `git push --force-with-lease`, i.e. a non-destructive force push. No one will be bothered or notice what you did.

And if you have conflicts, it's really easy to rebase and fix any issue.


Ha, I made something in the same vein a few years ago: https://colorcontroversy.com/


This is great! What I would love is a way to compare myself with someone else, though. I'm French and my wife is American, and we have a lot of disagreement about colors (neither of us has vision deficiencies; we have ruled that out).


This is good too, particularly as it also shows this same issue with other colours.

I'd like to see a combination website where it gives the answers at the end.


It's unclear to me what you're defining as real. Coal mining? Childcare? Community centers? Thru-hiking? Interesting theoretical realms can have enormous consequences in the physical/tangible world, as I'm sure you know :). Maybe it's more of a "presence factor" in relation to this story: a measure of how aware you are of the roles and responsibilities you have and how engaged with them you are.


I make no value judgement here. I thought the OP's post was interesting as an example of how humans can mediate their own "VR" experience, and have done so for all of human history. The "absent-minded professor" is a stereotype for a reason. It can be disconcerting for someone with high imagination factor to interact with someone with an imagination factor of 0, even if all other qualities (age, culture, language, etc) are the same, since the paths they have both walked are so very different. The error modes that arise from impedance mismatch go in both directions. It's not clear what nature will select for. Certainly over short periods of time, nature has selected for heavy abstraction and all the military/economic power it yields. The longer time frame has not yet played out.


The path everyone has walked is different from everyone else. You seem to be trying to reduce it to a formula, coin new terms, and literally apply numeric values to people. I don't think anyone is that simple in reality.

If you have struggled to interact with people who are different than you, that is also part of the human experience, not something we need to devise measurements for.


Your strawman assumes a reductive user who will replace a person with a number. This of course happens in real life, with IQ, Myers-Briggs, and so on. This is wrong. It is a kind of wrongness exemplified by "Animal Farm", the nuanced ideals of revolution that eventually reduce to "4 legs good; 2 legs bad".

IF is a tool mostly to remind high IF people to cherish the value of both real and imaginary experience, and a tool to help people who dwell mostly in either realm to respect each other. If a high IF person forgets to respect the real, he's liable to forget his wedding. If a low IF person forgets, he's liable to miss out on the wonder and value of abstract thought.


Of course, there is disagreement on a person’s roles and responsibilities.

To someone, my responsibility might be answering the doorbell quickly when Amazon drops off a package.

To another, it might be how responsive I am to email.

These are in conflict, and sometimes it’s worth missing an Amazon package to finish an important email.


Absolutely, the concept of roles and responsibilities is highly subjective


NYC was also my least favorite part of Recurse. I had to move 2 weeks in because my first apartment was right next to the train.

I absolutely loved Recurse though, and credit it for a lot of my development as a programmer; it's my favorite programming community.

Luckily you can also do Recurse remotely now!


This github issue is often linked when this topic is discussed: https://github.com/github/gh-ost/issues/331

> Personally, it took me quite a few years to make up my mind about whether foreign keys are good or evil, and for the past 3 years I'm in the unchanging strong opinion that foreign keys should not be used. Main reasons are:

> * FKs are in your way to shard your database. Your app is accustomed to rely on FK to maintain integrity, instead of doing it on its own. It may even rely on FK to cascade deletes (shudder). When eventually you want to shard or extract data out, you need to change & test the app to an unknown extent.

> * FKs are a performance impact. The fact they require indexes is likely fine, since those indexes are needed anyhow. But the lookup made for each insert/delete is an overhead.

> * FKs don't work well with online schema migrations.


> FKs are a performance impact. The fact they require indexes is likely fine, since those indexes are needed anyhow. But the lookup made for each insert/delete is an overhead.

This is not a valid argument at all and I'm concerned anyone would think it is.

If you have a foreign key, it means you have a dependency that needs to be updated or deleted. If that's the case, you will have an overhead anyway, the only question being whether it's at the DB level or at the application level.

I don't think there are many cases where there's any advantage to self-manage them at the application level.

> FKs don't work well with online schema migrations

This seems to be related only to the specific project the issue is about, if you read the detailed explanation below.


> If that's the case, you will have an overhead anyway, the only question being whether it's at the DB level or at the application level.

Inserts and updates do not require referential integrity checking if you know that the reference in question is valid in advance. Common cases are references to rows you create in the same transaction or rows you know will not be deleted.

If you actually want to delete something that may be referred to elsewhere then checking is appropriate of course, and in many applications such checking is necessary in advance so you have some idea whether something can be deleted (and if not why not). That type of check may not be race free of course, hence "some idea".


> the only question being whether it's at the DB level or at the application level.

It is not a binary situation like that. With the rise of 'n-tier' systems that are ever so popular today, there are often multiple DB levels. The question is not so much whether it should go into the end-user application – pretty much everyone will say definitely not there – but at which DB level it should go. That is less clear, and where you will get mixed responses.


Note that this was written in 2016 in the context of a mysql-centric project. You will not find an "unchanging strong opinion that foreign keys should not be used" outside that context.

I haven't kept up with mysql enough to know if there are still good reasons to avoid foreign keys. I just stick with postgresql.


sharding is still a big problem for foreign keys


Ah! That's a blast from the past. I maintain pg-osc (an online schema change tool for Postgres) and very much agree that FKs make OSC hard.


The responder to that issue has also written some blog posts that go into more detail on the subject.

* https://code.openark.org/blog/mysql/things-that-dont-work-we...

* https://code.openark.org/blog/mysql/the-problem-with-mysql-f...


A Sphinx plugin[0] allows for writing in Markdown, and I'd heavily encourage using it if you're looking to get widespread adoption of Sphinx on a project or at a workplace. reST is fine once you learn it, but removing barriers to entry is useful.

[0] https://www.sphinx-doc.org/en/master/usage/markdown.html
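
Enabling it is roughly a one-line change to conf.py once the myst-parser package is installed:

    # conf.py
    extensions = ["myst_parser"]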


edit: my understanding of feature parity in reST/Markdown seems outdated - comment below might be incorrect

The value prop of Sphinx goes down a lot if you're not using reST because you can't use the extensive catalog of directives, such as the ref directive that I mentioned in my first comment. If you must use Markdown then there's not much difference between Sphinx and all the other Markdown-powered SSGs out there. In other words there's not a compelling reason to use Sphinx if you've got to use Markdown.

From Sphinx's Getting Started page:

> Much of Sphinx’s power comes from the richness of its default plain-text markup format, reStructuredText, along with its significant extensibility capabilities.

https://www.sphinx-doc.org/en/master/usage/quickstart.html#g...


MyST has parity with most reST features and is equivalent to Markdown for users not using those features: https://myst-parser.readthedocs.io/en/v0.13.7/using/syntax.h...


Oh cool I need to revisit MyST then. I thought it didn't support cross-references but it looks like I'm wrong: https://myst-parser.readthedocs.io/en/v0.13.7/using/syntax.h...

I will have to dig into exactly how much parity we're talking about here, but if it's very strong parity then I retract my previous statement.

Thanks for correcting me!


It works with all docutils and Sphinx roles, and almost all directives, including extensions.

A notable exception is autodoc (automodule, autoclass, etc.), and any other directives that generate more rST. The current workaround is to use eval-rst:

https://myst-parser.readthedocs.io/en/latest/syntax/code_and...

Some more discussion about that in these issues:

https://github.com/executablebooks/MyST-Parser/issues/163

https://github.com/sphinx-doc/sphinx/issues/8018

