SCdF's comments | Hacker News

There are plenty of ebook stores if you google around that have a standard range and use Adobe DRM, so off the bat they won't work on a Kindle. In theory you can remove that DRM using Calibre, but I haven't tried.
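
For what it's worth, the usual route people describe is Calibre plus the third-party DeDRM plugin. A minimal sketch of installing a plugin from the command line, assuming you've already downloaded the plugin zip (the file name here is illustrative):

    # calibre-customize ships with Calibre; --add-plugin installs a plugin zip.
    # DeDRM_plugin.zip is a third-party plugin you'd download separately.
    calibre-customize --add-plugin DeDRM_plugin.zip

Whether DeDRM actually handles a given book depends on the DRM scheme, so, as above, treat this as untried.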

Other than that, not really? There are plenty of ways to buy _a_ DRM-free book, but there's no site with some large range, or even a Bandcamp-quality range, where there are authors you've heard of, just not Dan Brown / Stephen King sized ones.

I haven't got around to solving this problem, so I'm also interested. I already own a Kindle, and I don't want to generate e-waste by changing physical devices.


Was impacted by the invalid-ASIN pop-up. Finally got fed up. Sold my Kindle Paperwhite via classifieds to someone who would embrace the Amazon walled garden (so, someone new to e-reading). Then bought a PocketBook. Now all my ebooks work again. And no waste. I use Beam Ebooks for DRM-free books; bought all my Expanse books there.


> The early narrative was that companies would need fewer seniors, and juniors together with AI could produce quality code

I'm not deep into it, but I have not once seen that direction argued before this post. Maybe it was _really_ early on?

The narratives I always saw were, first, "it will be as good as a junior dev", then "it's like pairing with an overly enthusiastic junior dev", and finally arguments similar to those presented in this article.

Which, frankly, I'm still not so sure about. Productivity is incredibly hard to measure: we are still not completely, non-anecdotally sure AI makes folk broadly more productive. And even if it does, I am beginning to wonder how much AI is short-term productivity with long-term brain rot, and whether that trade-off is really worth it.


A lot of it is just that it's at the local maximum of popularity and relative user inexperience, so it's the juiciest target.

But also, npm was very much (like JS, you could argue) vibed into existence in many ways, e.g. with the idea of a lockfile (and thus reproducible builds) taking a very long time to take shape _at all_.


We got lockfiles in 2016 (yarn) and 2017 (npm), before Go and others (though Ruby's Bundler had Gemfile.lock well before that); I believe Python is only now getting a lockfile standard approved.

You could already specify exact versions in your package.json, same as in a Gemfile, but the reality is that specifying dependencies by major version or "*" was considered best practice, so you would always have the latest security updates. Separating version ranges from the lock file, and requiring explicit upgrades, was a change in that mindset – and one mostly driven by containerization rather than security or dev experience.
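
To make the distinction concrete, a hypothetical package.json fragment (the package names are purely illustrative):

    {
      "dependencies": {
        "express": "*",
        "react": "^17.0.0",
        "lodash": "4.17.21"
      }
    }

"*" takes whatever is newest at install time, "^17.0.0" allows any 17.x at or above 17.0.0, and "4.17.21" is an exact, Gemfile-style pin. Pre-lockfile, the first two styles meant two installs of the same project could resolve to different dependency trees; package-lock.json / yarn.lock records the exact versions that were actually resolved, so upgrades only happen when you explicitly ask for them.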


So have they stopped having an average of >1 remote drivers for each self-driving vehicle as well?

The problem with these statements is that language has so much implicit context. "Driving around on their own" to me means with zero active oversight. "Driving around" to me means not just on a small set of city streets, but as a replacement for human driving (i.e. anywhere a vehicle can physically fit). Obviously to you it means other things, but that's what makes these conversations and statements of fact challenging.


That >1 figure is from Cruise, which went defunct in 2023.

Teslas have >1, but they are not really self-driving; more "100% human-supervised self-driving."


Thanks for this post. Can I just say I really miss when tech news was dominated by "here is a cool font we designed", and not "here is our new upgraded torment nexus".


Part of the difficulty here is that the new wave of AI / LLMs under discussion has a lot of similarities with crypto, in the sense that there is a lot of nonsense out there. Unlike crypto there is obviously some value, as opposed to none, so it's not all grift. But like you I find it hard to sort one from the other.

Fundamentally for me I can't get over the idea that extraordinary claims require extraordinary evidence, and so far I haven't really seen any evidence. Certainly not any that I would consider extraordinary.

It's like saying that if a magician worked _really really hard_ on improving, evolving and revolutionising their "cut the assistant in half and put them back together" trick, they'll eventually be able to actually cut them in half and put them back together.

I have not seen a convincing reason to think that the path that is being walked down ends up at actual intelligence, in the same way that there is no convincing reason to think the path magicians walk down ends up in actual magic.


> It's like saying that if a magician worked _really really hard_ on improving, evolving and revolutionising their "cut the assistant in half and put them back together" trick, they'll eventually be able to actually cut them in half and put them back together.

So, surgery?

As the stage magicians Penn and Teller (well, just Penn) said, stage magic is about making something very hard look easy, so much so that your audience simply doesn't even imagine the real effort you put into it.

Better analogy here would be asking if we're trying to summon cargo gods with hand-carved wooden "radios".


No, not surgery. Surgery wasn't gotten to by way of working really hard on a magic trick. I'm also reasonably sure surgery is not at a point where you can cut someone entirely in half and put them back together.

Since you brought up Penn and Teller, take the bullet catch. They are not actually catching a bullet. No matter how hard they work on perfecting their bullet catch trick, this will not allow them to catch a real bullet shot from a real gun in their real teeth. Working on the trick is not the journey where the end point is doing the actual thing you're representing in the trick.


Surgery is at a point where you can take a living human's heart out without killing them, and replace it with one you got out of a corpse. And if the waiting list for donated corpse hearts is too long, you can have yours switched off for months while a machine pumps for you.

Lungs, kidneys, liver, lots can be successfully transplanted in similar ways.

> Surgery wasn't gotten to by way of working really hard on a magic trick

My point is the other way around: if you wanted a magic trick where you could literally cut someone in half for real and then put them back together, that is literally "surgery".

> They are not actually catching a bullet. No matter how hard they work on perfecting their bullet catch trick, this will not allow them to catch a real bullet shot from a real gun in their real teeth. Working on the trick is not the journey where the end point is doing the actual thing you're representing in the trick.

For some tricks, sure.

For others… is the nailgun memorising trick faked for safety reasons? I mean, I sure would fake it, and I assume their insurance requires at least some safety interlock that's not visible, but you can genuinely just memorise a sequence, as Penn says while doing the trick.

And for a few of the Fool Us tricks he says, roughly, "we think you just practiced really hard for a very long time": https://youtu.be/Lx1P1YA2rlA?feature=shared&t=325 and, same video, different timestamp: https://youtu.be/Lx1P1YA2rlA?feature=shared&t=482


I'm not sure.

We never expected that there even could be a magic trick that came so close to mimicking human intelligence without actually being it. I think there are only so many ways that matter can be arranged to perform such tricks; we're putting lots of work into exploring that design space more thoroughly than ever before, and sooner or later we'll stumble on the same magic tricks that we humans are running under the hood.


> and sooner or later we'll stumble on the same magic tricks that we humans are running under the hood.

Right, so this is the extraordinary claims bit. I'm not an expert in any of the required fields, to be clear, but I'm just not seeing the path, and no one as yet has written a clear and concise explainer on it.

So my presumption, given past experience, is that it is hype designed to drive investment plus hopium, not something that is based on any actual reasoned thought process.


Sure, but evolution isn't an actual reasoned thought process, and it still managed us without even having humans as an explicit goal, we just popped out of the process by accident as a way to be effective at surviving in the wild.


That's the usual way of things. Most of the time research progress is made at the boundary without anybody seeing the path. It happens more like slips in a fault line; little sudden steps forward in one area that nobody anticipated, and which create new strains elsewhere along the fault line, as those discoveries open up new attacks on old problems. Gradually the problem yields but not according to anybody's big plan.

In that sort of situation, the rate of progress is affected by two things: how many people are working at the frontier figuring out the problems that prevent the field from progressing, and how much economic pressure there is to exploit each new solution that gets identified. When the economic pressure is low, new breakthroughs mostly stay in the lab and circulate slowly. Researchers will come up with ideas that could solve the problem but don't have resources to test every one. But when the pressure is great, each new breakthrough quickly scales up, and more ideas get tested in parallel.

Sometimes a bunch of progress does happen on a schedule, as part of a master-planned research effort, like the Manhattan or Apollo projects, or semiconductor lithography R&D schedules. In those cases the main pathway is known at the outset, but there are a bunch of novel engineering sub-problems to solve along the way. Most research doesn't happen that way though. And even when it does, to anybody on the outside who doesn't themselves see the route laid out from a high altitude, it looks the same. And even in these cases, there may be a few big-picture questions that they aren't sure about until late into the project, resulting in multiple paths being tried at once to improve the chance of success.

I think there are several hard, fundamental problems currently being grappled with that stand between today's AI and AGI, and unlike Sam Altman I don't think scaling will be enough to overcome them. But I do think there are now tremendous forces being deployed to grapple with them, and tremendous pressures being built up behind that, so any slips along the fault line could yield rapid movement forward.

Is this hype to drive investment? Depends who you're listening to. If they're an executive at NVIDIA or OpenAI then sure, it probably is. But not all of it. One of the main advocates for the view that I share is Eliezer Yudkowsky, who has been talking about this since before AI was on any CEO's radar. His latest book is called "If Anyone Builds It, Everyone Dies". I'm not sure how he could phrase his concerns in any less-appealing way to an investor.


> Unlike crypto there is obviously some value, as opposed to none

Even some cryptocurrencies like Monero have value if you consider "making digital transactions anonymously" to have value. I definitely do.


> Unlike crypto there is obviously some value

To be fair, there is obviously some economic value in the fungibility of crypto-currency. The political and technical aspects are dubious.

> extraordinary claims require extraordinary evidence

Agreed, the only extraordinary achievement for this magic act so far is market capitalisation.


The Horizon (UK Post Office accounting software) scandal killed multiple subpostmasters through suicide, and bankrupted and destroyed the lives of dozens or hundreds more.

The core takeaway developers should have from Therac-25 is not that this only happens with "really important" software, but that all software is important, all software can kill, and you need to always care.


From what I've read about that incident, I don't know what the devs could have done. The company was certainly a problem, but so were the laws basically saying a computer can't be wrong. No dev can solve that problem.


> Engineers are legally obligated to report unsafe conduct, activities or behaviours of others that could pose a risk to the public or the environment. [1]

If software "engineers" want to be taken seriously, then they should also have the obligation to report unsafe/broken software and to refuse to ship it. The developers are just as much to blame as the Post Office:

> Fujitsu was aware that Horizon contained software bugs as early as 1999 [2]

[1] https://engineerscanada.ca/news-and-events/news/the-duty-to-...

[2] https://en.wikipedia.org/wiki/British_Post_Office_scandal


I don't think it's fair to blame individual developers for a systemic failure. It's not their fault there is no governing body to award or remove the title of "software engineer" and promote the concept of a software engineer refusing to do something without harming their career. Other engineering disciplines have laws, lobbied for by their governing body, that protect the ability of individual engineers to prevent higher-ups from making grave mistakes.


> It's not their fault there is no governing body to award or remove the title of "software engineer" and promote the concept of a software engineer refusing to do something without harming their career.

Those governing bodies didn't form by magic. If you look at how hostile people on this site are to the idea of unionization or any kind of collective organisation, I'd say a large part of the problem with software is individual developers' attitudes.


I have worked in this industry for 20 years and never met a piece of software I would deem "safe". It's all duct tape and spit. All of it.

I have had software professionally audited by third parties more than a few times, and they basically only ever catch surface-level bugs. Recently, the same week the audit finished, we independently found a pretty obvious SQL injection flaw.
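
For the curious, the classic shape of that flaw, as a minimal TypeScript sketch assuming a node-postgres-style client (table and function names are made up):

    import { Pool } from "pg";

    const db = new Pool();

    // Vulnerable: user input spliced straight into the SQL string,
    // so input like "' OR '1'='1" rewrites the query's meaning.
    async function findUserUnsafe(name: string) {
      return db.query(`SELECT * FROM users WHERE name = '${name}'`);
    }

    // Safer: a parameterized query; the value travels separately from
    // the SQL text, so it cannot change the query's structure.
    async function findUserSafe(name: string) {
      return db.query("SELECT * FROM users WHERE name = $1", [name]);
    }

It's depressingly easy for the first form to survive both code review and a third-party audit.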

I think the danger is not in producing unsafe software. The real danger is in thinking it can ever be safe. It cannot be, and anyone who tells you otherwise is a snake oil salesman.

If your life depends on software, you are one bit flip from death.


Then you haven't read deeply enough into the Horizon UK case. The lead devs have to take a major share of the blame for what happened, as they lied to the investigators and could have helped prevent some of the early suicides if they had had courage. These devs are the worst kind, namely Gareth Jenkins and Anne Chambers.


As you point out, this was a mess-up on a lot of levels. It's an interesting effect though, not to be dismissed: how your software works, and how it's perceived and trusted, can impact people psychologically.


It was a distributed system lashed together by 'consultants' (read: recent graduates with little real-world software engineering experience) in an era where best practices around distributed systems were non-existent. They weren't even thinking about what kind of data inconsistencies they might end up with.


The code being absolute dog shit was true regardless of that law's existence. There are plenty of things the developers could have done.

That law is irrelevant to this situation, except in that the lawyers for Fujitsu / the Post Office used it to imply their code was infallible.


Given whole-truth testimony?


But there is still a difference here. Provenance and proper traceability would have allowed the subpostmasters to show their innocence and prove the system fallible.

In the Therac-25 case, the killing was quite immediate and it would have happened even if the correct radiation dose was recorded.


I’m not sure it would. Remember that the prosecutors in this case were outright lying to the courts about the system! When you hit that point, it’s really hard to even get a clean audit trail out in the open any more!


I don't understand the distinction here.

> Provenance and proper traceability would have allowed

But there weren't those things, so they couldn't, and so they were driven to suicide.

Bad software killed people. It being slow or fast doesn't seem to matter.


Slow-killing software can be made safer by adding the possibility of human review.

Fast-killing software is too fast for that.


I'm really trying to understand your point, but I am failing.

It sounds like you're saying that you shouldn't care as much about the quality of "slow-killing software" because in theory it can be made better in the future?

But... it wasn't though? Horizon is a real software system that real developers like you and me built that really killed people. The absolutely terrible quality of it was known about. It was downplayed and covered up, including by the developers who were involved, not just the suits.

I don't understand how a possible solution absolves the reality of what was built.


I teach the Horizon Post Office scandal in my database courses. And my takeaway is that software fails. And if people's lives are involved, an audit trail is paramount.

In slow-killing software, the audit trail might be faster than the killing. In fast-killing software, it isn't.


Yes, the audit trail that should exist is part of the package. Or more generically: Horizon should have had enough instrumentation, combined with adequate robustness, such that they could detect the issues the lack of robustness caused, and resolve those issues without people dying.

My core point is that if you're designing a system, *any system*, you should be thinking about what is required to produce safe software. It isn't just "well, I don't work on medical devices that shoot radiation at people, so I don't need to worry"[1]. You still need to worry; you just solve those problems in different ways. It's not just deaths either: it's PII leakage, it's stalking and harassment enablement, it's privilege escalation, etc.

[1] I have heard this, or a variation of it, from dozens of people over my career. This is my core bugbear about Therac-25: it allows people to think this way, and to divest themselves of responsibility. I am very happy to hear you are teaching a course about Horizon, because it's a much more grounded example that devs will hopefully see themselves in. If your course is publicly available, btw, I'd love to read it.
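
On the audit-trail point, a minimal sketch of one common approach: write the business change and its audit record in the same database transaction, so neither can exist without the other. This again assumes a node-postgres-style client, and the table and column names are made up:

    import { Pool } from "pg";

    const db = new Pool();

    async function adjustBalance(accountId: string, delta: number, actor: string) {
      const client = await db.connect();
      try {
        await client.query("BEGIN");
        await client.query(
          "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
          [delta, accountId]
        );
        // Append-only audit record, committed atomically with the change.
        await client.query(
          "INSERT INTO audit_log (account_id, delta, actor, at) VALUES ($1, $2, $3, now())",
          [accountId, delta, actor]
        );
        await client.query("COMMIT");
      } catch (e) {
        await client.query("ROLLBACK");
        throw e;
      } finally {
        client.release();
      }
    }

Had Horizon recorded every balance-affecting event this way, with the trail treated as evidence-grade, the subpostmasters might have had something to point at.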


It's just a course about database design, and in the first seminar we look at different news stories that have something to do with databases, like Trump putting some random Italian chef on an international sanctions list, which should make us think about primary keys and identifying people.

And the Horizon Post Office scandal is the last and most poignant example that real people are affected by the systems we build and the design decisions we make. That's sometimes easy to forget.


https://www.coderabbit.ai/blog/our-response-to-the-january-2...

> No customer data was accessed

As far as I can tell this is a lie.

The real answer is that they have absolutely no clue whether customer data was accessed, and no way to tell. I'm not even sure GitHub could tell, because it's not clear that the exploit's way of generating private keys to access private repositories looks any different from what CodeRabbit does in normal operation.


This wiki page was created in 2011, in case you're wondering how long they've held this position


You can run tsc on the code and then ship that git hash if it passes. You don't need to run it every single time the code executes; nothing of value is gained, because nothing has changed.
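
A minimal sketch of that workflow as a CI step (the output file name is made up):

    # Type-check once; if it passes, record the exact commit that was checked,
    # so deploys can verify they are shipping an already-verified hash.
    npx tsc --noEmit && git rev-parse HEAD > typechecked.sha

Anything downstream can then compare its deployed hash against typechecked.sha instead of re-running the compiler.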

