
google vp here: we appreciate the feedback! i generally agree that if you have a strong understanding of your static capacity needs, pre-provisioning VMs is likely to be more cost efficient with today's pricing. cloud run GPUs are ideal for more bursty workloads -- maybe a new AI app that doesn't yet have PMF, where you really need that scale-to-zero + fast start for more sparse traffic patterns.


Appreciate the thoughtful response! I’m actually right in the ICP you described — I’ve run my own VMs in the past and recently switched to Cloud Run to simplify ops and take advantage of scale-to-zero. In my case, I was running a few inference jobs and expected a ~$100 bill. But due to the instance-based behavior, it stayed up the whole time, and I ended up with a $1,000 charge for relatively little usage.

I’m fairly experienced with GCP, but even then, the billing model here caught me off guard. When you’re dealing with machines that can run up to $64K/month, small missteps get expensive quickly. Predictability is key, and I’d love to see more safeguards or clearer cost modeling tooling around these types of workloads.
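
In the meantime, here's the kind of rough guardrail I mean: a minimal sketch in plain Python (no GCP APIs; the hourly rate and budget below are placeholders, not real prices) that extrapolates a monthly bill from the billable hours observed so far:

    # Rough monthly-bill projection for a pay-per-hour GPU service.
    # Sketch only: GPU_RATE_USD_PER_HOUR is a placeholder -- look up the
    # real Cloud Run GPU rate for your region on the GCP pricing page.

    GPU_RATE_USD_PER_HOUR = 2.0   # hypothetical rate, not a real price
    MONTHLY_BUDGET_USD = 100.0    # the bill I expected

    def projected_monthly_cost(billable_hours_so_far: float,
                               days_elapsed: float,
                               days_in_month: int = 30) -> float:
        """Linear extrapolation of spend to the end of the month."""
        daily_hours = billable_hours_so_far / days_elapsed
        return daily_hours * days_in_month * GPU_RATE_USD_PER_HOUR

    # Example: instance effectively billed 24h/day for the first 3 days.
    spend = projected_monthly_cost(billable_hours_so_far=72, days_elapsed=3)
    print(f"projected: ${spend:,.0f}",
          "OVER BUDGET" if spend > MONTHLY_BUDGET_USD else "ok")

Something like this, run against actual billable hours a day or two in, would have flagged the problem well before it hit $1,000.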


Apologies for the surprise charge there. It sounds like your workload pattern might be sitting in the middle of the VM vs. Serverless spectrum. Feel free to email me at (first)(last)@google.com and I can get you some better answers.


> But due to the instance-based behavior, it stayed up the whole time, and I ended up with a $1,000 charge for relatively little usage.

Indeed. IIRC, if you get a single request every 15 mins (~100 requests a day), you will pay for Cloud Run GPU for the full day.
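
Back-of-the-envelope in Python (a sketch: the 15-minute keep-alive window is the behavior described here, not a published spec):

    # Billed time under scale-to-zero with an idle keep-alive window.
    # Sketch: assumes the instance stays billable for IDLE_WINDOW_MIN
    # minutes after each request, and requests themselves are instant.

    IDLE_WINDOW_MIN = 15

    def billed_minutes(request_times_min, idle_window=IDLE_WINDOW_MIN):
        """Merge per-request keep-alive intervals and sum the billed time."""
        total = 0
        window_end = None
        for t in sorted(request_times_min):
            if window_end is None or t > window_end:
                total += idle_window                     # fresh billing window
            else:
                total += (t + idle_window) - window_end  # extend current window
            window_end = t + idle_window
        return total

    # One request every 15 minutes, all day: 96 requests.
    steady = [i * 15 for i in range(96)]
    print(billed_minutes(steady) / 60)  # 24.0 hours billed

    # The same 96 requests packed into a 2-hour burst.
    burst = [i * 1.25 for i in range(96)]
    print(billed_minutes(burst) / 60)   # ~2.2 hours billed

Same request count, roughly an 11x difference in billed time, purely from spacing.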


How does that compare to spinning up some EC2 instances with Amazon Trainium accelerators?


Depending on your model, you may spend a lot of time trying to get it to work on Trainium.


Why is that? Can you explain?


The Trainium toolchain (AWS Neuron) is not as mature as the GPU ecosystem. Your model may fail to compile out of the box, and even if it does, it may be slow and require you to dig into the details to get reasonable training/inference performance.
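
Concretely, the entry point is the Neuron SDK's ahead-of-time compile step via torch-neuronx, and that trace is where things usually fall over. A minimal sketch (assumes a Trn1 instance with torch-neuronx installed; the toy model and input shape are hypothetical):

    # Sketch of the Neuron ahead-of-time compile step for Trainium.
    # Assumes an AWS Trn1 instance with the torch-neuronx package
    # installed; the toy model and input shape are hypothetical.
    import torch
    import torch_neuronx

    class TinyModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(128, 10)

        def forward(self, x):
            return torch.relu(self.linear(x))

    model = TinyModel().eval()
    example = torch.rand(1, 128)

    # trace() compiles the model for NeuronCores; unsupported ops or
    # dynamic shapes are the usual point of failure.
    neuron_model = torch_neuronx.trace(model, example)

    print(neuron_model(example).shape)  # torch.Size([1, 10])

A trivial model like this compiles fine; real models with custom ops or dynamic shapes are where you end up digging.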


Has this changed? When I looked pre-GA, the requirement was that you had to pay for the CPU 24x7 to attach a GPU, so it wasn't really scaling to zero, unless that requirement has since changed...


Speaking from my experience, it does scale to zero, except you pay for 15 minutes after the last request.

So if you get all your requests in a 2-hour window, that's great: it will scale to zero for the other 22 hours.

However, if you get at least one request every 15 minutes, you will pay for the full 24 hours, and that is ~3x more expensive than an equivalent VM on Google Cloud.
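
Plugging that into a quick break-even check (a sketch: the rates are placeholders, and the ~3x multiple is my rough estimate above, not a published ratio):

    # Always-on VM vs. scale-to-zero Cloud Run GPU, assuming Cloud Run's
    # effective hourly rate is ~3x the VM's (rough estimate above;
    # actual prices vary by region and SKU).

    VM_RATE_PER_HOUR = 1.0         # hypothetical $/hour for an equivalent VM
    CLOUD_RUN_RATE_PER_HOUR = 3.0  # ~3x the VM rate

    def daily_costs(billed_hours_per_day: float) -> tuple[float, float]:
        vm = 24 * VM_RATE_PER_HOUR  # the VM bills around the clock
        run = billed_hours_per_day * CLOUD_RUN_RATE_PER_HOUR
        return vm, run

    for hours in (2.25, 8.0, 24.0):
        vm, run = daily_costs(hours)
        winner = "Cloud Run" if run < vm else "VM"
        print(f"{hours:>5}h billed/day -> VM ${vm:.2f}, Cloud Run ${run:.2f} ({winner})")

Under these assumptions the break-even is 8 billed hours a day: below that Cloud Run wins, and a steady trickle that keeps it billing 24h/day costs 3x the VM.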


OK, thanks, I will check out the options again. If it does scale to zero (including the CPU), that will make it more reasonably priced.



