
It kind of does matter, actually. Lots of English words (and words in other languages!) sound the same. A cat lover who starts getting ads for baseball bats and fat loss pills isn't going to convert. Context matters, too, not just matching words. If I start talking about "my dear father" and get ads for tractors and hunting gear, I'm not going to convert.

Advertisers aren't going to pay for random spoken keywords anyway. They're going to pay to target people by demographic and interest. Things _about_ you, not things you're talking about. Just because I mentioned tampons doesn't mean I'll ever buy a box of tampons (I simply lack the anatomy). And if you start building a profile of somebody based on poorly overheard bits of speech, you're building a castle on bad foundations. The data is bunk.

Just having a TV or radio on near the device instantly poisons the data.

> If the device picks up 3 hours of speech per day, it only needs to process at 1/8x speed to catch up.

The Echo currently has a 32-bit processor that is designed to be pretty minimal. OpenAI's Whisper tiny runs at about 2/3 of real-time speed (transcribing an hour of audio takes roughly an hour and a half), and that's with a 6-core ~2.3 GHz laptop processor. Getting from 2/3x down to 1/8x leaves room for hardware only about 5x slower than that laptop, but the CPU in the Echo runs at 0.6-1 GHz and the system is not designed for general-purpose computing. I don't have the ability to benchmark it, but you're not going to get close to 1/8x with the Echo hardware.
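
For what it's worth, here's a minimal sketch of how that real-time factor could be measured, assuming the openai-whisper Python package; speech_sample.wav is a placeholder path, and the exact numbers will obviously vary by hardware:

  import time
  import whisper

  model = whisper.load_model("tiny")  # smallest Whisper checkpoint

  # load_audio resamples to 16 kHz mono, so duration = samples / 16000
  audio = whisper.load_audio("speech_sample.wav")
  audio_seconds = len(audio) / whisper.audio.SAMPLE_RATE

  start = time.monotonic()
  model.transcribe(audio)
  elapsed = time.monotonic() - start

  # A ratio above 1.0 means transcription is slower than playback.
  # Catching up on 3 hours of speech per day means this ratio has
  # to stay below 8.0 (i.e. at least 1/8x speed).
  print(f"slowdown factor: {elapsed / audio_seconds:.2f}")

Running the same script on a laptop and then on a Pi-class ARM board would give a rough feel for how far Echo-grade silicon sits from that 1/8x target.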


