Hacker News | felipeccastro's comments

It might be the opposite. Python apps still get written despite the performance hit, because understandability matters more than raw performance in many cases. Now that we're all code reviewers, that quality should matter more, not less. Programmer time is still often more expensive than machine time.

Are Python apps really so easy to understand? I seriously disagree with this idea given how much magic goes on behind nearly every line of Python, especially if you veer off the happy path.

I certainly am no fan of C but from a certain point of view it’s much easier to understand what’s going on in C.


Well-written Python apps are very easy to understand, especially if they use well-designed libraries.

The 'magic' in Python means that skilled developers can write libraries that work at the appropriate level of abstraction, so they are a joy to use.

Conversely, it also means that a junior dev, or an LLM pretending to be a junior dev, can write insane things that are nearly impossible to use correctly.
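
To make that concrete, here is a minimal sketch (all names hypothetical) of the kind of dunder-method "magic" meant here, which cuts both ways: the call site reads nicely, but a lot happens behind one innocent-looking line of Python.

    # A toy query builder: __getattr__ and operator overloading make the
    # call site read like English, but == no longer means comparison.
    class Column:
        def __init__(self, name):
            self.name = name

        def __eq__(self, other):
            # builds an SQL fragment instead of comparing values
            return f"{self.name} = {other!r}"

    class Table:
        def __init__(self, name):
            self._name = name

        def __getattr__(self, attr):
            # any attribute access silently becomes a Column
            return Column(f"{self._name}.{attr}")

    users = Table("users")
    print(users.name == "ada")   # prints: users.name = 'ada'

Used well, that is "the appropriate level of abstraction"; used badly, it is exactly the kind of thing that is nearly impossible to use correctly.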


> Well-written Python apps are very easy to understand, especially if they use well-designed libraries.

Oh. Why haven't I seen those?


Dunno dude, maybe you haven't looked. Maybe you hate Python. Maybe you're stupid. Anything could be possible.

One of the (many) reasons that I moved away from Python was the whole "we can do it in 3 lines" attitude.

Oh cool someone has imported a library that does a shedload of really complicated magic that nobody in the shop understands - that's going to go well.

We (the software engineering community as a whole) are also seeing something similar to this with AI-generated code: screeds of code going into codebases that nobody fully understands (give a reviewer a 5-line PR and they will find 14 things to change; give them a 500-line PR and LGTM is all you will see).


I've cooled significantly on Python now that there are a number of strongly typed languages out there that have also gotten rid of the boilerplate of languages Python used to compete with.

Readability gets destroyed when a function can accept 3 different types, all named the same thing, with magic strings acting as enums, and you just have to hope all the cases are well documented.
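
A hedged sketch of that pattern (all names hypothetical), just to make it concrete:

    import json

    def export(data, format="csv"):
        # 'data' may be a path string, a single dict, or a list of dicts,
        # and 'format' is a bare string acting as an enum.
        if isinstance(data, str):
            with open(data) as f:
                data = json.load(f)
        if isinstance(data, dict):
            data = [data]
        if format == "csv":
            header = ",".join(data[0])
            rows = [",".join(str(v) for v in row.values()) for row in data]
            return "\n".join([header, *rows])
        elif format == "json":
            return json.dumps(data)
        raise ValueError(f"unknown format: {format}")

    print(export({"id": 1, "name": "ada"}))           # csv by default
    print(export([{"id": 1}, {"id": 2}], "json"))

Every caller now has to read the body (or hope the docs are complete) to learn which shapes and which strings are actually supported.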


Type systems document data movement throughout applications :-)

And the other problem with functions accepting dynamic types is that even if your function in reality only handles one type, it still has to defensively handle someone passing it things that will cause an error.

All the dynamic typing really did is move the cognitive load from the caller to the called.
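
A small Python sketch of that shift (hypothetical names):

    # The callee really only wants numbers, but without a declared type it
    # ends up defending against whatever a caller might pass in.
    def total_cents(amounts):
        total = 0
        for a in amounts:
            if isinstance(a, bool) or not isinstance(a, (int, float)):
                raise TypeError(f"expected a number, got {type(a).__name__}")
            total += round(a * 100)
        return total

    # With annotations (and a type checker), the contract moves back to the
    # caller and the defensive checks can disappear:
    def total_cents_typed(amounts: list[float]) -> int:
        return sum(round(a * 100) for a in amounts)

    print(total_cents([1.25, 2.50]))        # 375
    print(total_cents_typed([1.25, 2.50]))  # 375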


I'd much prefer to review something written in Rust or Go, even if I'd much rather write it in Python if I had to do it manually.

The better structure and clear typing make the review much easier.


My biggest reason for liking Go over Python can be summed up in one word: discipline.

Python was supposed to embrace the idea of "there's only one way to do it", which appeals after Perl's "there's more than one way to do it", but the reality is there are 100 ways to do it, and they're all shocking.


I've used XFCE on a 2011 laptop; it was about as fast as LXDE but better polished. Windows was unusable there, and XFCE made the computer feel brand new. Only modern websites would still cause slowness, but the OS was great.


I have noticed that static type checking often enables people to build systems more over-engineered than they could without it. It's not a coincidence that factory-factory-impl happened in Java, not Ruby.


The train of thought is “what is everyone using? I’ll use that too”


This, coupled with the fact that "web development" now means anything from a content-rich website like a blog, to an e-shop, all the way to complex applications for UX design, video editing, etc.

It's pretty absurd to have such a broad range of web use cases and think the same solution can cover everything.


Why? Microsoft's GUI framework as well as Apple's covered plenty of use cases before the rise of the web browser.


Then why did HTML become so popular if win32 or MFC were so great?


> Then why did HTML become so popular if win32 or MFC were so great?

One of the factors is that web dev pushes for a complete separation of concerns, and thus allows frontend developers to specialize in front end development. Therefore it becomes far easier to hire someone to do frontend work with a webdev background than a win32/MFC background.

Number of applicants is also a big factor. There is far more demand for webdev than pure GUI programming. You can only hire people who show up, and if no one shows up then you need to scramble.

Frontend development is also by far the most expensive part of a project. In projects which use low-level native frameworks you are forced to hire a team for each target platform. Adopting technologies that implement GUIs with webpages running in a WebView allows projects to halve the cost. This is also why technologies like React Native shine.

Also, apps like Visual Studio Code prove that webview-based apps can be both nice to look at and be performant.

It's not capabilities. It's mainly the economics.


In the win32/MFC days, there was no "front-end developer". There was only HTML and content creators writing it.

Then there came small web applications, and still no "front-end developers", since functionality could only work on the server.

It was only when AJAX was introduced in the mid-2000s that you could start to talk about "front-end developers".

By that time, win32 and MFC were old. We had Java, C# with the .NET Framework, etc.


Because it solved different problems. CSS is terrible, but deployment simplicity and distribution channel were more powerful than how shitty HTML is for making GUIs. The fact that MFC was owned by Microsoft didn't help either.


Why would you make GUIs with HTML? Its main use was for content, not applications. Hyper Text Markup Language.

So you agree both solve different problems. Well, those are 2 use cases of front-end right now.


> The train of thought is “what is everyone using? I’ll use that too”

I'm not so sure about that. We're seeing Next.js being pushed as the successor to create-react-app even on react.dev[1], which as a premise is kind of stupid. There is definitely something wrong going on.

[1] https://react.dev/learn/creating-a-react-app


It was interesting handling frontend interviews recently.

We do a 30-min tops exercise where you create a React project to show how to use useState and useEffect, etc. I help with whatever command they want to use and allow Google/ChatGPT.

More than half of the candidates had no idea how to use React without Next.js, and some argued it was impossible, even after I told them the opposite.


This surprises me a lot. I spin up new react apps with vite often to replicate issues with 3rd party libs we use. Like how do they not know you can just spin something up over on CodePen or CodeSandbox and there's not a hint of a server side paradigm required? (sure, vite has a little server but you don't really need to know anything about it)


Some devs have worked exclusively in feature mills where expectations are rock bottom and some senior sets up the project for them. When recruiters don't filter them, a basic React test has to.


What are you really testing for? That sounds like a bad interview.


Basic react experience presumably. As a first approximation, it seems like every possible interview sounds like a bad interview to someone. What has worked well for you?


Seems more like a test on random React minutiae. Like, let's take some framework, take away some random piece. How well do you know the area around that random piece we just removed? Frameworks are large and gnarly (or there isn't enough to them). Expecting a candidate to be lucky and know random implementation details in the area that happened to be picked doesn't seem like you'd select for anything other than luck.

For me, lately, the interview question is "here's code that ChatGPT generated for (a previous interview question, related to the role we're hiring for, that we could do)": what's wrong with it? What do you do now? (ChatGPT may or may not have actually generated the code in question.)


It's not React "minutiae". These are incredibly basic concepts that, if you don't know them, mean you cannot in good faith say you know React.

It's like not knowing how to write a for loop or how to access an object's property in JavaScript.


I remember one of the first technical interviews I conducted, about 15 years ago: I asked the candidate the difference between == and ===. She had the same answer as gp, claiming she doesn't "memorize minutiae like that."


> Seems more like a test on random React minutiae.

It is more like a test of whether or not you can figure out random React minutiae (with Google/ChatGPT, if needed) when presented with a need. Which isn't a bad approximation for how well you will do at finding any random minutiae as needs present themselves. React-based development doesn't require much original thought; the vast majority of the job really is just figuring out the minutiae of your dependencies to fit your circumstantial need.

For fun, I asked ChatGPT for an answer and it gave a perfectly good one back without hesitation. Even if you had no idea what React was beyond knowing it is a library for developing web components, you should still be able to answer that particular question with ease.


I was assuming that particular interview was not open ChatGPT. If all you want to test for is can you understand the words that are coming out of my mouth, type that into ChatGPT, and then read it to me, yeah, it seems fine.


Why would one random part of the interview disallow ChatGPT when it is otherwise accepted for answering other random React minutiae?


Because humans have to interact with other humans in conversations, and if you can't read social cues as to when something is and isn't acceptable, you're boned. I have trouble with that, so it's not surprising to me when others do as well.

When you're in a work meeting, do you just put ChatGPT up on one laptop and Claude on another and just sit back for 30 minutes to an hour?


It was deemed acceptable to use ChatGPT to discover the minutiae of useState and useEffect. What is special about createRoot that makes it off limits?


The most basic React functionality isn't "React minutiae".


You have to remember, Next is the only framework that can support some of the features in the latest version of React.

To many people, it's just basic logic: "everyone must want the latest React features, and the only way to get those is with Next, so everyone must want Next".



> You have to remember, Next is the only framework that can support some of the features in the latest version of React.

That is extremely fishy, isn't it?


Not necessarily, since they have to do with inherently complex niche features like unified server/client rendering (e.g. RSC, streaming SSR with selective hydration, server actions).

Next.js is essentially the reference and test bed impl.

Where people go wrong is thinking they need to default to client hydration, an inherently complex niche optimization enabled by a quirk of web tech.


> Not necessarily since they have to do with the inherently complex niche features like unified server/client rendering (...)

My point is that it's fishy how they push features that just so happen to be the value proposition of the only corporation that just so happens to be able to implement them.


If everyone made decisions for themselves instead of following everyone else we’d be so much better off, in all areas.


This is a little disingenuous because unfortunately you can't make decisions on technical merits alone. It takes a lot of resources to keep these projects thriving and up to date. You almost have to go with options where these resources have been deployed, even if they are terrible sometimes.


This is only partially true. For example, with React Native even the core team now tells you to "just use Expo", effectively delegating all responsibility to a project maintained by a for-profit that thinks 2 weeks is enough time to beta test a major release.

It's also dismissive of market forces, i.e. developers have to pay bills and therefore are easier to hire if they know the skillset that is in wide use.

I've never worked with or interviewed a single senior who wanted to use Next.


I agree with all your points but last I tried, the VS Code LSP was terrible. It’s hard to justify a new language when even the basics of autocomplete, inline errors and go to definition don’t work well. Part of the reason was that any function can be called on anything, which pollutes the autocomplete list.

Has the LSP situation improved yet? Similar issue with Crystal lang, which I enjoy even more than Nim.


Unfortunately the LSP hasn't improved that much. There's been some work on killing errant processes and such, so it's a bit more stable. It does work pretty well when it works, though. But I just kill it now.

Unfortunately it may not get much better until Nim 3, based on the Nimony rewrite, comes out. It supports incremental compiling and caching, which will make the underlying LSP tooling much better.

However I find with Cursor that I don’t mind so much the missing autocomplete. I’ve actually thought about writing an alternative LSP just for quick find and error inlining…


Frankly, I'm surprised this is the only issue you bring up (I had many, when I first tried Nim several years ago - I think they were related to cross-platform GUI libraries for Nim, or the lack of them, or their awful state back then).

But LSP as a major concern? For me these little helpers are useful to catch small typos but I could happily do without them.


It's not just small typos; it's the ability to explore APIs and the standard library, go to definition, quickly catch any error at the location it happens, not have to memorize large models and their field names; the list goes on.

I can work without an LSP, but when I'm searching for a new language that would be used by a team (including Junior devs) it's hard to justify something missing the basics of good DX. I haven't tried it with Cursor though, it might be less of a dealbreaker at this point.


How do you navigate through a project with things like `go to definition` or `incoming calls`? (given that we are talking about a relatively large code base maintained by more than one or two individuals)

You can do it with just rg or something similar, but it will give you many false positives and you are going to waste quite some time.


And if you keep a list of all pages in local storage, you can automatically generate an index HTML page in a predefined format, which makes it more of a database than loose documents.


Not sure why this was downvoted, but I'd be very interested in learning how pglite compares to SQLite (pros and cons of each, maturity, etc.)


It is ironic how “rewrite it in Rust” is the solution to make any program fast, except the Rust compiler.


It's not ironic at all. Rust programs being fast is in large part due to shifting work from runtime to compile time.


This. Making something super customizable is a lot harder to implement (the code becomes too generic, hard to reason about and debug) and often presents a worse UX ("why are there so many options??"). Having the UX design team interview each user role and consider its needs, and ensuring the app displays/asks only the appropriate info for each role, hiding the rest and adopting smart defaults (instead of requiring everything), is easier to implement, safer, and in many cases produces more intuitive interfaces than highly customizable ones.


