Hacker News | past | comments | ask | show | jobs | submit | ohadron's comments

The maximum theoretical size for a zip archive is 16 exabytes (2^64 bytes). It's free if you have somewhere to store it.


Should be doable on consumer hardware nowadays, if you cheat by using a file system that either supports sparse files (https://en.wikipedia.org/wiki/Sparse_file) or block-level deduplication (https://en.wikipedia.org/wiki/Data_deduplication). You may need to use raw block I/O to create such a file, and there will be lots of duplicated content in the archive.

Also: how hard is that limit? ZIP archives have their TOC at the end of the file and allow for inserting ‘junk’ that is never referenced in the ZIP’s table of contents. Isn’t it possible to add such junk to make an archive go over that limit (assuming that your file system allows files larger than 2⁶⁴ bytes)?
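That junk tolerance is easy to check: Python's zipfile module reads an archive with bytes prepended, because the end-of-central-directory record is located by scanning backward from the end of the file (this is also why self-extracting archives, an executable with a zip appended, work at all):

```python
# Sketch: ZIP readers find the end-of-central-directory record from the
# end of the file, so unreferenced bytes before the archive are ignored.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("hello.txt", "hello")

# Prepend 1 KiB of junk that no central-directory entry references.
padded = io.BytesIO(b"\x00" * 1024 + buf.getvalue())

with zipfile.ZipFile(padded) as zf:
    print(zf.read("hello.txt"))  # b'hello'
```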


The problem is once you zip them to full compression, you really can't use them ever again. That is unless you get the good ones that let you technically unzip without requiring destruction.


But why


> Each step moves further from "how do we build better models?" toward "how do we monetize the models we have?"

I don't think OpenAI launching ChatGPT Apps and Atlas signals they're pivoting.

It's just that when you raise that much money you must deploy it in any possible direction.


LLMs would be amazing for this


I wouldn't put an LLM in the loop for anything that has security implications.


Enigma 2.0 getting cracked due to the prevalence of the em dash.


This is a terrific idea and could also have a lot of value with regards to accessibility.


The problem, as always, is that LLMs are not deterministic. Accessibility needs to be reliable and predictable above all else.


Took a while but Wix / Webflow / SquareSpace / Wordpress did end up automating a bunch of work.


They did, but do you think there are more or fewer web development jobs now compared with the 90s?


There is a whole lot of brochure-type web work that has disappeared, either to these site builders or Facebook. I don't know what happened to the people doing that sort of work, but I would assume most weren't ready to write large React apps.


Why are you assuming that? How do you think all the new React jobs were filled? React developers don’t magically spring into existence with a full understanding of React out of nowhere, they grow into the job.


The web developer / web page ratio in 2025 is for sure way lower than it was in 1998.


Why should anybody care about that metric? People care about jobs.


More, but that doesn't say anything about the future.


Sure it does. It’s not a guarantee, but presuming that a pattern is likely to continue is not nothing. When a pattern is observed, the onus is on the “This time is different!” side to make their case.


The aim of AI is to automate almost everything. That sounds like a future quite different from any past.


If you don’t accept that an observed trend says anything about the future, you shouldn’t make unsupported assertions in the opposite direction. They say less.


AI aiming to automate everything is something new. That's the point. There was no AI in the past similar to what is slowly unfolding now. Not even close. If you disagree with the word "anything" I used, then yes, I understand I shouldn't have used that word.


For one thing, it's way faster than the OpenAI equivalent in a way that might unlock additional use cases.


Speed has been the consistent thing I've noticed with Gemini too, even going back to the earlier days when Gemini was a bit of a laughing stock. Gemini is fast.


I don't know exactly the speed/quality tradeoff but I'll tell you this: Google may be erring too much on the speed side. It's fast but junk. I suspect a lot of people try it then bounce off back to Midjourney, like I did.


That’s actually a good prompt


"Now that you've been promoted, you don't build CRUD tools anymore. Those are below your level. Instead, you build AI agents that build the CRUD tools."


Took a bit to run but now my iPhone feels much faster. Thanks!


PSA: Don't shorten URLs if you don't have to because if the shortener ceases working your link is dead.


Or maybe this should be more specific: don't shorten URLs using the provider's domain name; bring your own domain instead. Then if the provider goes away, you can still migrate the links.


I agree!

That's why I set up my own thing. I don't care about analytics at all, so I just wrote a simple build system to generate some very basic HTML redirects.

It isn't perfect but it's very cheap to run!

https://github.com/lucienbill/lucien.run/
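A minimal version of that kind of generator, as a sketch (not the linked repo's actual code; the slugs, target URLs, and `dist/` layout are all made-up assumptions):

```python
# Sketch: turn a slug -> URL table into static HTML redirect pages,
# one dist/<slug>/index.html per short link. All names are illustrative.
from pathlib import Path

LINKS = {  # hypothetical short-link table
    "cv": "https://example.net/cv.pdf",
    "git": "https://github.com/octocat",
}

TEMPLATE = """<!doctype html>
<meta charset="utf-8">
<meta http-equiv="refresh" content="0; url={url}">
<link rel="canonical" href="{url}">
<a href="{url}">Redirecting...</a>
"""

for slug, url in LINKS.items():
    out = Path("dist") / slug / "index.html"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(TEMPLATE.format(url=url))
```

Any static host can then serve `dist/`, and the whole "shortener" is just files you control.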


And if you use Netlify, you can just create a _redirects file like this:

  / https://example.com
  /cv https://example.net/cv.pdf
  /git https://github.com/octocat
Works well for me on ale.sh so far!


Yes, having control of your data is safer.

Even if the supplier disappears, you can quickly switch to another platform.


As a hypertext purist, I used to think this. But having worked for a large organization, I've found shortlinks can be an invaluable way to actually maintain the integrity of links that have been deployed to unrevisable media (emails, print, PDFs, etc.).

If a resource has been relocated off of a host/url, often part of that situation is that we don't have immediate access to implement a redirect from that host to the resource's new location.

Now I see a shortlink manager as a centralized redirect manager, which is so much more rational and stable than creating a tangle of redirect config across dozens of hosts or hundreds of content applications.

The caveat is that you shouldn't use a third-party domain or service; you should definitely at least use your own domain. You also don't need to make the links unreadable hashes; they can actually be more human-friendly.
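A centralized redirect manager of the sort described can be sketched in a few lines of stdlib Python; all slugs and target URLs below are made up, and a real deployment would load the table from a database or config file:

```python
# Sketch: one editable table of human-readable slugs, one handler
# issuing 301s. Everything here is illustrative, not a real service.
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECTS = {  # the single place links get updated when resources move
    "/annual-report": "https://example.org/reports/2025.pdf",
    "/careers": "https://jobs.example.org/",
}

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target:
            self.send_response(301)  # permanent: clients may cache it
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Redirector)  # port 0 = any free port
# server.serve_forever()  # uncomment to actually serve
```

The point is that every deployed link goes through this one table, so moving a resource means editing one entry instead of chasing redirect config across dozens of hosts.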


Interesting product that does this for internal links: https://www.golinks.io

