BoorishBears's comments | Hacker News

Well, this is the platform that got kicked off Discord for refusing to delete user accounts in a timely manner, and then for training an AI model on user inputs...

Then they tried to weaponize their userbase to mass-email Discord over being kicked off.


And then they struggled to manage Reddit... then scaled to as many platforms as possible (text, WhatsApp, etc.), then shut it all down, then built out an entire API just to shut that down too (apparently; I can't confirm). They can't even get dark mode to work by default on their own website. They were suspended from Twitter/X. The founder, Anush, has been making comments about "replacing engineers with AI" (source: https://x.com/anushkmittal/status/1979372588850884724), although that could be a joke, on their absolute dogshit "talk" platform.

And speaking of their chat platform... it's literally horrible. Slow to load, terrible UX, disjointed UI, no accessibility options, AI chatbots everywhere, and you can't even tell they're AI without clicking their profiles. It's like if Slack was made by a 12-year-old.

Seriously, to put it in perspective: I'm on an M3 MacBook Pro (so not that old) and a gigabit fiber connection. I click one chat and it can take up to 5 seconds to load the channel. JUST TO LOAD THE CHANNEL. It legit fires off like 30 fetch requests each time you load a channel. It's insane. I can't even blame Next.js for that; it's straight up them probably "vibe coding" everything.


As the other comments pointed out, that's not covering billing...

But also the (theoretical) production platform for Gemini is Vertex AI, not AI Studio.

And until pretty recently, using it meant figuring out service accounts, and none of Google's docs demonstrated production usage.

Instead, they'd authenticate with the gcloud CLI, and you'd have to figure out how each SDK consumed a credentials file.
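
For flavor, this is roughly what the "production" path looked like with the Vertex SDK. A sketch only: the project ID and model name are placeholders, and how the credentials file gets wired up depends on your environment.

    # Auth comes from Application Default Credentials: either a service-account
    # JSON pointed to by GOOGLE_APPLICATION_CREDENTIALS, or
    # `gcloud auth application-default login` on a dev machine.
    import vertexai
    from vertexai.generative_models import GenerativeModel

    vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project
    model = GenerativeModel("gemini-2.5-flash")  # placeholder model name
    print(model.generate_content("Hello").text)

None of that is hard once you know it, but compare it to "paste an API key" and you can see why people bounced.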

-

Now there's "express mode" for Vertex, which uses an API key, so things are better, but the complaints were well earned.

At one point there were even features (like using a model you finetuned) that didn't work without gcloud depending on if you used Vertex or AI Studio: https://discuss.ai.google.dev/t/how-can-i-use-fine-tuned-mod...


I could've made my comment clearer. It's definitely missing a statement along the lines of "and then after creating it, you click 'set up billing' and link the accounts in 15 seconds."

I did edit my message to mention I had GCP billing set up already. I'm guessing that's one of the differences between those having trouble and those not.


You can sum it up as: Gemini from AI Studio and Gemini from Vertex AI Studio have independent rate limits.
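
To make that concrete: with the newer google-genai SDK it's the same client class pointed at two different backends, each drawing from its own quota pool. A rough sketch from memory; the project/location values are placeholders.

    from google import genai

    # AI Studio surface: keyed by an AI Studio API key, AI Studio rate limits
    studio = genai.Client(api_key="AI_STUDIO_KEY")

    # Vertex AI surface: same model names, separate quotas, billed to the GCP project
    vertex = genai.Client(vertexai=True, project="my-gcp-project", location="us-central1")

    for client in (studio, vertex):
        r = client.models.generate_content(model="gemini-2.5-flash", contents="ping")
        print(r.text)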

-

And I guess to add some context: it's because Google seemingly realized that Google Cloud moves so glacially slowly, and has so much baggage, that they could no longer compete with scrappier startups like OpenAI and Anthropic for developer mindshare.

So there's a separate product org that owns AI Studio, which tries to be more nimble, and probably 50x'd Gemini adoption by using API keys instead of service accounts and JSON certs that take mapping out the 9th circle of hell to deploy in some environments (although IIRC Vertex now supports API keys too).

They definitely do ship faster than Google Cloud, but their offerings end up feeling like the work of a product team with fewer resources than OpenAI or Anthropic (like shipping purple Tailwind-slop UIs as real features), which is just nuts.


Why? gpt-realtime is finalized gpt-4o. Gemini Live is still 2.5.

Not their fault the frontier labs are letting their speech-to-speech offerings languish.


Not shade, and it's a small thing, but why do you list your investors as social proof here?

Isn't the target persona someone who'd be at best indifferent to, and at worst distrustful of, a tech product that leads with how many people invested in it? Especially compared to the explanation and actual testimonials you're pushing below the fold?


Totally fair callout, and we appreciate the feedback. We're already testing alternative hero layouts focused purely on real customer results and examples of issues caught. Our goal is to win trust by demonstrating usefulness and results, not who invested in us.

Where would my firm's documents end up (on whose servers) to do this checking? I don't know how any firm would just hand out their CDs just like that.

Or is being that lax normal these days?

Aside: this field is insanely frustrating. The chasm between clash detection and resolution is a right ball ache... between ACC, Revizto, and Aconex clash detection (and the like), the de facto standard is pretty much telling me x is touching y... great... can you group this crap intelligently to get my high-rise clashes per discipline from 2,000 down to 10? Can you navigate me there in Revit? (Yes, switchback in Revizto is great, but Revizto itself could improve.)


Yes, one of the biggest values of our system is reducing "noise." Instead of surfacing 2,000 micro-clashes, we cluster findings into higher-order issues (e.g., "all conflicts caused by this duct run" or "all lighting mismatches tied to this dimming spec"). We're not a BIM viewer yet, but we do map issues back to sheet locations, callouts, and detail references so teams can navigate directly to the real source of the problem.

Sounds good. What's the typical workflow for aggregating the sheet sets in question for a given phase? I assume the user collates them and drops them in for analysis?

Today the workflow is simple: users just drag and drop the full drawing/spec set (ZIP or PDFs) for whatever phase they want reviewed. The system automatically splits sheets by discipline, reconstructs callout relationships, and runs the checks. We'll be adding integrations with ACC/Procore/Revit exports so this becomes even more automated.

Yes, today users simply gather the sheets for whatever phase they want reviewed (DD, 80% CDs, 100% CDs, etc.), ZIP them or upload PDFs directly, and the system handles the rest. It auto-detects disciplines, reconstructs callout graphs, and runs checks across the full set. We're also adding integrations with ACC/Procore/Revit so sheet aggregation becomes automatic.

We store files securely on AWS with strict access controls, encryption in transit and at rest, and zero sharing outside the file owner's account. Only our engineers can access a project for debugging, and only if the customer explicitly allows it. We can also offer an enterprise option with private cloud/VPC deployment for firms that require even tighter controls. Users can delete all files permanently at any time.

Documents are stored on AWS with strict access controls, meaning they are only accessible to the file owner and, if necessary, our engineers for debugging purposes. After the check, users can delete the project and optionally permanently delete the files from our S3 buckets on AWS.

People have been doing this for literally every anticipated model release, and I presume skimming off some amount of legitimate interest, since their sites end up top-indexed until the actual model is released.

Google should be punishing these sites, but presumably it's too narrow a problem for them to care.


Black-hat SEO in the age of LLMs

It would need outbound links to be SEO.

Or at least a profit model. I don't see either on that page, but maybe I'm missing something.


Every link in the "Legal" tree is a dead end redirecting back to the home page... a strange thing to put together without any acknowledgement, unless they're spamming it on LLM-adjacent subreddits for clout/karma?

Flux 2[dev] is awful.

Z-Image is getting traction because it fits on their tiny GPUs and does porn, sure, but even with more compute Flux 2[dev] has no place.

Weak world knowledge, worse licensing, and its post-training for JSON prompts ruins the #1 benefit of having a larger LLM backbone.

LLMs already understand JSON, so additional training for JSON feels like a cheaper way to juice prompt adherence than more robust post-training.

And honestly, even "full fat" Flux 2 has no great spot: Nano Banana Pro is better if you need strong editing, and Seedream 4.5 is better if you need strong generation.


I didn't even know Seedream 4.5 had been released; things move fast. I've used Seedream 4 a lot through their API.

They said echoes of the tricks.

I've read thousands of pages of Enron history; this has nothing to do with it.

This is how I found out about HN Classic! https://news.ycombinator.com/classic

"It's the same algorithm as the regular front page, with the difference that the votes are those by users before Feb 13, 2008."

Clever!

- https://news.ycombinator.com/item?id=24401292


Consumers will eat it all. AI is very good at engaging content, and getting better by the day: it won't be the AGI we wanted, but maybe it's the AGI we've earned.
