Hacker News | itake's comments

Dick's Drive in Seattle (IMHO an expensive city) charges $5.75 for their deluxe burger on Doordash.

https://www.doordash.com/store/dick's-drive-in-seattle-77050...


> helps in managing sexual needs

There are plenty of "sexual needs" that society says "no, you can't satisfy them." (for example, Nguyễn Xuân Đạt).

I don't think sexual needs are needs that can't be managed without media.

> educational: whether it is about workings of sex

I find when a partner imitates porn, the sex is worse... Maybe other people enjoy the sounds or behaviors seen in the videos, but not me.


>I don't think sexual needs are needs that can't be managed without media.

Of course they can, but it still helps - that's why I used that wording.

Also, replacing one sexual need with another feels more viable than it does with other kinds of needs, given how the body's chemical machinery seems to work.

> I find when a partner imitates porn, the sex is worse... Maybe other people enjoy the sounds or behaviors seen in the videos, but not me.

I can't say that much of the content isn't bad, or that the field isn't rife with abuse. That's a real problem, but I think it's unrelated to the original question of "does it address a real need".

In this case I think the main takeaways are the ideas, techniques, and what you can learn about the body from some of the more realistic videos. Somewhat unfortunately, many people choose poorly, but I do believe right choices exist.


This is the same thinking governments use to justify age verification and ID tracking: the system creates an opportunity for old people to get scammed, so everyone needs to give up their privacy.

Well… I think you’re conflating the stated reason for solving a problem versus what these “solutions” are actually trying to do.

I don't know who, but there are a lot of news articles about high-volume oil trading activity shortly before military action becomes public.

My ex-employer (non-FAANG, but still over $10B market cap) started using similar software.

Feels good to read the "ex-" part in your sentence. It'd be analogous to my supervisor sitting right behind me and keeping a super-dense protocol - no fucking way, ever.

While not the main reason, I definitely cited it as a reason for departure in my exit interview.

Not really true. Remember the prompt engineering craze a few years ago with crazy complex prompt composers (langchain) that don’t need to exist any more because the underlying model got so much better at understanding what the humans are actually asking for?

A model cannot read your mind. It can guess, and those guesses are more likely to be wrong if you don't give it the right input, and model performance gets worse if not steered/curated properly. The output depends on the input.

https://medium.com/@adambaitch/the-model-vs-the-harness-whic... | https://aakashgupta.medium.com/2025-was-agents-2026-is-agent... | https://x.com/Hxlfed14/status/2028116431876116660 | https://www.langchain.com/blog/the-anatomy-of-an-agent-harne...

(I don't think anecdotes are useful in these comparisons, but I'll throw mine in anyway: I use GPT-5.4, GPT-5.3-Codex, Gemini-3-Pro, Opus, Sonnet, at work every week. I then switch to GLM-5.1, K2-Thinking. Other than how chatty they get, and how they handle planning, I get the same results. Sometimes they're great, sometimes I spent an hour trying to coax them towards the solution I want. The more time I spend describing the problem and solution and feeding them data, the better the results, regardless of model. The biggest problem I run into lately is every website in the world is blocking WebFetch so I have to manually download docs, which sucks. And for 90% of my coding and system work, I see no difference between M2.5 and SOTA models, because there's only so much better you can get at writing a simple script or function or navigating a shell. This is why Anthropic themselves have always told people to use Sonnet to orchestrate complex work, and Haiku for subagents. But of course they want you to pay for Opus, because they want your money.)


> The biggest problem I run into lately is every website in the world is blocking WebFetch so I have to manually download docs, which sucks

Try a scraping service! Perplexity can show cached pages (or at least parts of them), and I've seen others.


"artistically interesting" is IMHO both a subjective and 'solved' problem. These models are trained with an "artistically interesting" reward model that tries to guide the model towards higher quality photos.

I think getting the models to generate realistic and proportional objects is a much harder and more important challenge (remember when the models would generate 6 fingers?).


I think this is the same as using a cloudflared tunnel?

To access my home desktop machine, I run:

```
$ ssh itake@ssh.domain.me -o ProxyCommand="cloudflared access ssh --hostname %h"
```

and I set up all the Cloudflare Access tunnels to connect to the services.


If I understand you correctly, you SSH in via cloudflared and then use that tunnel to reach other services through that session. That would work, yes.

Tela takes a slightly different approach. The agent exposes services directly through the WireGuard tunnel without SSH as an intermediary, so you don't need sshd running on the target. Each machine gets its own loopback address on the client, so there is no port remapping.

The big difference is the relay, though. With cloudflared, Cloudflare terminates TLS at their edge. With Tela, you run the hub yourself and encryption is end-to-end. The hub only ever sees encrypted data (apart from a small header).


People new to cities look for community.


I'm wondering though:

1. Why does AI need that folder structure? Why not a flat list of files and let the AI agent explore with BM25 / grep, etc.

2. Pre-computing compression vs. computing it at query time.

Karpathy (and you) are recommending pre-compressing and sorting the data into human-friendly buckets and language, based on hard-coded human opinions about abstraction that may or may not match how the data will actually be queried.

Why not just let the AI calculate this at run time? Many of these use cases have very few files, and for a low-traffic knowledge store, it probably costs fewer tokens if you only tokenize the files you need.
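A minimal sketch of what that query-time approach could look like (the `.md` extension and the store layout are assumptions for illustration): instead of pre-organizing files into a hierarchy, the agent filters the flat store by keyword and only the matching files ever reach the context window.

```python
from pathlib import Path

def files_matching(store: Path, keyword: str) -> list[str]:
    """Return the text of only the files that mention the keyword,
    so the model never has to tokenize the rest of the store."""
    hits = []
    for f in sorted(store.glob("*.md")):
        text = f.read_text(encoding="utf-8")
        if keyword.lower() in text.lower():
            hits.append(text)
    return hits
```

For a store with a few hundred files and a handful of queries a day, this is probably cheaper than maintaining a pre-computed hierarchy.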


> Why does AI need that folder structure? Why not a flat list of files and let the AI agent explore with BM25 / grep, etc.

Progressive disclosure: the same reason you aren't assaulted with all the information a website has to offer at once, or handed a SQL console and told to figure it out. Instead you see a portion of the information, presented in a way that naturally leads you to the next bit you're looking for.

> use cases

This is essentially just a question of where you move the hierarchy/compression, but at least for me these use cases are not very disjoint or separable. I think what I actually want are adaptable LoRAs that loosely correspond to these use cases, with a dense discriminator or other system that can adapt and stay in sync with them. Also, tool-calling plus SQL/vector embeddings, so that you get good filesystem search without it feeling like work, and the model can filter out the junk.

> let the AI calculate this at run time?

You still do want to let it do agentic RAG but I think more tools are better. We're using sqlite-vec, generating multimodal and single-mode embeddings, and trying to make everything typed into a walkable graph of entity types, because that makes it much easier to efficiently walk/retrieve the "semantic space" in a way that generalizes. A small local model needs at least enough structure to know these are the X ways available to look for something and they are organized in Y ways, oriented towards Z and A things.

Especially on-device, telling them to "just figure it out" is like dropping a toddler or autonomous vehicle into a dark room and telling them to build you a search engine lol. They need some help and also quite literally to be taught what a search engine means for these purposes. Also, if you just let them explore or write things without any kind of grounding in what you need/any kind of positive signals, they're just going to be making a mess on your computer.


Maybe it depends on the use case, but in my opinion, if you do need to apply compression, it should be done via a real-time tool call instead of in a pipeline.

For example, if you’re trying to summarize the status of a project, it’s better to write a script that summarizes the status of all of the Jira tickets than to feed every ticket to an agent (in real time or via a summarization pipeline) and ask it to create the summary.
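A sketch of that idea, assuming tickets have already been fetched into plain dicts (the `status` field and its values are made up for illustration): the deterministic part is a script the agent calls as a tool, so no ticket text ever enters the context window.

```python
from collections import Counter

def project_status_summary(tickets: list[dict]) -> str:
    """Deterministically summarize ticket statuses. An agent calls this
    as a tool instead of reading every ticket into its context."""
    counts = Counter(t["status"] for t in tickets)
    total = len(tickets)
    done = counts.get("done", 0)
    parts = ", ".join(f"{n} {status}" for status, n in sorted(counts.items()))
    return f"{total} tickets ({parts}); {done}/{total} done"
```

The agent then only spends tokens on the one-line summary, not on the tickets themselves.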

Another small data point: I think people would prefer to ask questions of an AI model instead of reading generated summaries.


> Why does AI need that folder structure? Why not a flat list of files and let the AI agent explore with BM25 / grep, etc.

It doesn't. The human creating the files needs it, to make it easier to traverse in future as the file count grows. At 52k files, that's a horrendous list to scroll through to find the thing you're looking for. Meanwhile, an AI can just `find . -type f -exec whatever {} \;` and be able to process it however it needs. Human doesn't need to change the way they work to appease the magic rock in the box under the desk.


> The human creating the files needs it

Why? The human would just talk to the AI agent. Why would they need to scroll through that many files?

I made a similar system with 232k files (one file might be a Slack message, GitLab comment, etc.). It does a decent job at answering questions with only keyword search, but I think I can get better results with RAG+BM25.
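For reference, BM25 itself is small enough to sketch in a few lines. Here the "files" are plain strings and tokenization is a naive `split()` (a real system would stem and strip punctuation); this is the standard Okapi BM25 scoring, not any particular library's implementation.

```python
import math
from collections import Counter

def bm25_rank(docs: list[str], query: str, k1: float = 1.5, b: float = 0.75) -> list[int]:
    """Return document indices sorted by BM25 relevance to the query."""
    toks = [d.lower().split() for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in toks) / n
    df = Counter()  # document frequency of each term
    for t in toks:
        df.update(set(t))
    scores = []
    for i, t in enumerate(toks):
        tf = Counter(t)
        s = 0.0
        for q in query.lower().split():
            if tf[q] == 0:
                continue
            idf = math.log((n - df[q] + 0.5) / (df[q] + 0.5) + 1)
            s += idf * tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(t) / avgdl))
        scores.append((s, i))
    return [i for s, i in sorted(scores, key=lambda x: (-x[0], x[1]))]
```

Unlike plain keyword match, this rewards term frequency and penalizes long documents, which matters when one "file" is a one-line Slack message and another is a long GitLab thread.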


And when the system fails for whatever reason?

Just because AI exists doesn't mean we can neglect basic design principles.

If we throw everything out the window, why don't we just name every file as a hash of its content? Why bother with ASCII names at all?

Fundamentally, it's the human that needs to maintain the system and fix it when it breaks, and that becomes significantly easier if it's designed in a way a human would interact with it. Take the AI away, and you still have a perfectly reasonable data store that a human can continue using.


> 1. Why does AI need that folder structure? Why not a flat list of files and let the AI agent explore with BM25 / grep, etc.

Two reasons I think:

Coding agents simulate things similar to what they have been trained on. Familiarity matters.

And they tend to do much better the more obvious and clear a task is. The more they have to use tools or "thinking", the less reliable they get.

