Hacker News | phamilton's comments

I've definitely done some vibe-coding with the explicit intent to reduce memory usage.

How efficient is AI at reducing RAM consumption?

This feels like an oxymoron

Given the premise that zero day exploits are going to be frequent going forward, I feel like there is a new standard for secure deployment.

Namely, all remote access (including serving http) must be managed by a major player big enough to be part of private disclosure (e.g. Project Glasswing).

That doesn't mean we have to use AWS et al for everything, but some sort of zero trust solution actively maintained by one of them seems like the right path. For example, I've started running on Hetzner with Cloudflare Tunnels.

Anyone else doing something similar?


> For example, I've started running on Hetzner with Cloudflare Tunnels.

How much latency does this add?


We've been testing something similar, though not in prod yet. Network overhead was 20ms RTT. The real variable was processing time: median was sub-5ms most days, but some regions would sit at 30ms for 8-10 hour blocks, seemingly at random.

Isn't this just a SPAC?

The shoe business was sold, a shell of a public company was left, and it essentially acquired a brand new company focused on AI.


That is probably still bad enough too. The SPAC era of 2020 and 2021 was not great [1] and SPACs are normally not the best vehicles [2]

[1] https://certuity.com/insights/what-happened-to-spacs/
[2] https://mergersandinquisitions.com/spac-vs-ipo/


That isn't how I read the article?

But then a company whose only asset is that it has a listing should be able to go up by 580% while doing not very much?


The announcement was that it secured $50M in financing and sold the shoe business for $39M, leaving $20M or so in cash.

An empty public company with $70M in financing to enter a hyped market was valued at $115M. The stated intent is to spend their money on a CapEx item with a fairly high demand and resale value (GPUs) in a sector that has a pretty simple playbook.

The 580% bump is a fun headline, but "startup secures $50M in funding at a 5.8x valuation bump" isn't unheard of.

Have I invested? No. Is this a ridiculously funny narrative and story? Absolutely. Is it the most ridiculous valuation I've seen? No.


Yup, that's exactly what it is.

I just got it last week. Still a few quirks, but positive so far.

It seems to forget about what I do and don't have access to (in terms of apps). I've had to remind it that I have Spotify and YT music more than once.

Other than that, agreed, it's going okay so far.


Do you have an option to not use it? Google forcibly updated my TV, and the normal assistant is gone now.

First thing I do after I purchase any smart TV (mine is a Sony): turn off network access and disable auto-updates. This way 1) it can collect whatever it wants but can't phone home, and 2) I don't wake up one day to find myself on a learning curve I didn't sign up for (happened to me once; they completely redid the UI, for the worse!)

How do you watch anything without network access?

I use Apple TV and give it network access instead, this way the TV doesn't have the chance to update. My Apple TV is set to update manually too. Of course, the assumption here is Apple TV doesn't phone home - and I'm no Apple fanboy, but I think this is as close as we get to online streaming with privacy.

Roku

I dropped my MBA on concrete and the edges got dinged up and sharp.

A bit of 220 grit sandpaper and all the sharp edges are smooth and it actually looks pretty cool. I was grimacing at first but now I like the feel.


Too many MBAs, not enough concrete.


More like Fire Emblem


15 years ago I was an intern at Micron and learned they passed on a contract with Apple because Apple insisted on discounts and there wasn't a compelling reason to reduce profit at Micron.

So yeah, Apple probably does pay less. But the market has enough demand that suppliers do say no.


This is actually relevant, because DRAM costs just as much now per Gb as it did 15 years ago (that's controlling for inflation; it's as much as it cost 20 years ago on a pure price basis).


> For those building with a mix of bash and custom tools, Gemini 3.1 Pro Preview comes with a separate endpoint available via the API called gemini-3.1-pro-preview-customtools. This endpoint is better at prioritizing your custom tools (for example view_file or search_code).

It sounds like there was at least a deliberate attempt to improve it.


As an experiment, I set it up with a z.ai $3/month subscription and told it to do a tedious technical task. I said to stay busy and that I expect no more than 30 minutes of inactivity, ever.

The task is to decompile Wave Race 64 and integrate with libultraship and eventually produce a runnable native port of the game. (Same approach as the Zelda OoT port Ship of Harkinian).

It set up a timer every 30 minutes to check in on itself and see if it had given up. It reviews progress every 4 hours and revisits prioritization. I hadn't checked on it in days, and when I looked today it was still going, a few functions at a time.

It set up those timers itself and creates new ones as needed.
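The check-in pattern described above can be sketched with a simple mtime-based heartbeat. This is only an illustration of the idea, not the agent's actual setup; the file name, interval, and function names are all hypothetical:

```python
import os

CHECK_IN_SECONDS = 30 * 60  # hypothetical 30-minute inactivity budget


def check_in(progress_path: str) -> None:
    """Record a heartbeat by touching the progress file's mtime."""
    os.utime(progress_path, None)


def is_stale(progress_path: str, now: float, limit: int = CHECK_IN_SECONDS) -> bool:
    """True if the progress file hasn't been touched within the inactivity budget."""
    return (now - os.path.getmtime(progress_path)) > limit
```

A periodic timer that calls `is_stale` and restarts work when it returns True is enough to keep an agent from silently stalling.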

It's not any one particular thing that is novel, but it's just more independent because of all the little bits.


So, you don't know if it has produced anything valuable yet?


It's the same story with these people running 12 parallel agents that automatically implement issues managed in Linear by an AI product team that has conducted automated market and user research.

Instead of making things, people are making things that appear busy making things. And as you point out, "but to what end?" is a really important question, often unanswered.

"It's the future, you're going to be left behind", is a common cry. The trouble is, I'm not sure I've seen anything compelling come back from that direction yet, so I'm not sure I've really been left behind at all. I'm quite happy standing where I am.

And the moment I do see something compelling come from that direction, I'll be sure to catch up, using the energy I haven't spent beating down the brush. In the meantime, I'll keep an eye on the other directions too.


> Instead of making things, people are making things that appear busy making things.

Sounds like a regular office job.


Yeah, I'm not sure I understand what the goal here is. Ship of Harkinian is a rewrite, not just a decompilation. As a human reverse engineer I've gotten a lot of false positives. This seems like one of those areas where hallucinations could be really insidious and hard to identify, especially for a non-expert. I've found MCP to be helpful with a lot of drudgery, but I think you would have to review the LLM output, do extensive debugging/dynamic analysis, and triage all potential false positives before attempting a rewrite based on decompiled assembly... I think OoT took a team of experts collectively thousands of person-hours to fully document; it seems a bit too hopeful to expect that plus a rewrite just from being pushy with an agent...


Step 1: Decompile into C that can be recompiled into a working ROM. In theory, it could be compiled into the same ROM we started with; a matching ROM hash is the main success criterion for the OoT decompilation project. Have it grind until it succeeds.

Step 2: Integrate libultraship. Launching the game natively is the next criterion. Then ideally we could do differential testing on a frame-by-frame basis, comparing emulated vs native.

Step 3: Semantic documentation of source. If it gets this far, I will be very impressed.

This is absolutely an experiment. It's a hard problem with low stakes. There's a lot to learn from it.
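The matching-ROM criterion from Step 1 is mechanically simple to check, which is what makes it a good unattended-grind target. A minimal sketch (paths and the choice of SHA-1 are illustrative, not what any particular decomp project pins):

```python
import hashlib


def sha1_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-1 so multi-megabyte ROMs aren't read into memory at once."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()


def matches_original(built_rom: str, original_rom: str) -> bool:
    """Step 1 passes only when the rebuilt ROM is byte-identical to the original."""
    return sha1_of(built_rom) == sha1_of(original_rom)
```

An agent can loop against this check indefinitely because it is fully objective: any deviation in the recompiled output changes the hash.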


Not yet. But what's the actual goal here? It's not to have a native Wave Race 64. It's to improve my intuition around what sort of tasks can be worked on 24/7 without supervision.

I have a hypothesis that I can verify the result against the original ROM. With that as the goal, I believe the agent can continue to grind on the problem until it passes that verification. I've seen it work in other areas, but this is larger and more tedious, and I wanted to see how far it could go.


That sounds like being a manager IRL.


$3 z.ai subscription? Sounds like it already burned $3k

I find these toys in perfect alignment with what LLM providers strive for: a widespread explosion in token consumption to demonstrate to investors, "see, we told you we were right to invest, let's open more gigafactories."


It's using about 100M input tokens a day on glm 4.7 (glm 5 isn't available on my plan). It's sticking pretty close to the throttling limits that reset every 5 hours.

100M input tokens is $40 and anywhere from 2-6 kWh.

Certainly excessive for my $3/month.
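The back-of-envelope math above is easy to reproduce. The $0.40/M effective rate below is inferred from the thread's own figures ($40 per 100M input tokens); it is an assumption, not a published price:

```python
def api_cost_usd(input_tokens: int, usd_per_million: float) -> float:
    """List-price equivalent of an input-token bill."""
    return input_tokens / 1_000_000 * usd_per_million


# 100M input tokens/day at an assumed effective $0.40/M:
daily = api_cost_usd(100_000_000, 0.40)  # → 40.0 USD
monthly = daily * 30                     # → 1200.0 USD, against a $3/month plan
```

The gap between list-price cost and the subscription price is the whole point of the parent comment.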


How's it burned $3k on a $3/month subscription running for a few days?


I simply don't get how it could have run for quite a while and only cost $3. Z.ai offers some of the best models out there. At several dollars per million tokens, this sort of code-generating bot would burn millions of tokens in less than 30 minutes.


> Several dollars per million tokens

The flagship, glm-5, is $1/M input tokens. glm-4.7 is $0.60/M input tokens.


They have a coding plan


And the $3 plan also has significant latency compared with their higher tier plans.


What a great use of humanity's and the earth's resources.


Keep us posted, this sounds great!


Intelligence per token doesn't seem quite right to me.

Intelligence per <consumable> feels closer. Per dollar, or per second, or per watt.


It is possible to think of tokens as some proxy for thinking space. At least reasoning tokens work like this.

Dollar and watt figures aren't public, and time has confounders like hardware.

