preston4tw's comments | Hacker News

It's obviously sad that an animal was killed in an accident, but the outrage towards Waymo and the media coverage definitely seem disproportionate given the statistical context, and I was pleasantly surprised that the article made an effort to point that out rather than dogpiling on Waymo.


past the paywall: https://archive.ph/Axj2B


Valve / Steam presumably has good data on what controllers and peripherals people are using, so I'd imagine their port choices are based around that. Here's a June 2024 post talking about Steam Input and controller market share: https://steamcommunity.com/games/593110/announcements/detail... . At the time of the post they say "59% of sessions are using Xbox controllers, 26% are using PlayStation controllers, 10% are on Steam Decks"


Steam Input controller data says nothing about the physical interface being used (USB-A vs USB-C). A single USB-C port (with DP support, I hope) in 2026 sounds like a bad design.


If I had to hazard a guess, almost everyone is using these controllers wirelessly.

The USB interface is used for initial pairing and charging, in which case the port location doesn't matter nearly as much.


Yes, wirelessly via a USB dongle.


Steam Machine has a built-in antenna for Valve controllers.


I'm pretty sure most PS controllers use Bluetooth natively if they're not connected via USB.


People know that USB hubs exist and are inexpensive, right?


https://archive.ph/BBbtH past the paywall


past the paywall: https://archive.ph/pIEgu


This is one thing I've been wondering about AI: will its broad training enable it to uncover previously undiscovered connections between areas the way multi-disciplinary people tend to, or will it still miss them because it's limited to its training corpus and can't really infer beyond it?

If it ends up being more the case that AI can help us discover new stuff, that's very optimistic.


In some sense, AI should be the most capable of doing this within math. The entire domain can literally be tokenized, and no experiments are required to verify anything: just theorem-lemma-proof ad nauseam.
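To make that concrete: a formal proof is just a term that a proof assistant's kernel checks mechanically, so "verification" needs no lab and no experiment. A trivial, purely illustrative Lean 4 example (nothing to do with the test in the article):

    -- A machine-checkable statement: addition on naturals is commutative.
    -- Lean's kernel verifies the proof term; no experiment is involved.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b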

In a test like this, though, it's very tricky to rule out the hypothesis that the AI is just combining statements from the Discussion / Future Outlook sections of some previous work in the field.


Math seems to me like the hardest thing for LLMs to do: it requires going deep with high-IQ symbol manipulation. The stronger case for LLMs right now is where new discoveries can be made by interpolation, or perhaps extrapolation, between existing data points in a corpus too broad for humans to absorb.


Alternatively, human brains are just terrible at "high IQ symbol manipulation" and that's a much easier cognitive task to automate than, say, "surviving as a stray cat".


If they solve tokenization, you'll be SHOCKED at how much it was holding back model capabilities. There are tons of papers at NeurIPS about various tokenizer hacks or alternatives to BPE which massively improve the types of math that models are bad at (e.g. arithmetic performance).
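As a rough illustration of the issue (my own toy example using the open-source tiktoken package, not something from those papers): a BPE tokenizer chops numbers into uneven multi-digit chunks, so the model never sees digits laid out in consistent positions:

    # Toy illustration of how a BPE tokenizer chunks numbers.
    # Requires `pip install tiktoken`.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    for s in ["7", "77", "7777", "123456789"]:
        # Decode each token id back to its text piece to see the digit grouping.
        pieces = [enc.decode([t]) for t in enc.encode(s)]
        print(f"{s!r} -> {pieces}")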


This line of reasoning implies "the stochastic parrot people are right, there is no intelligence in AI", which is the opposite of what AI thought leaders are saying.


I reject the Stochastic Parrot theory. The claim is more about comparative advantage; AI systems already exist that are superhuman on breadth of knowledge at undergrad understanding depth. So new science should be discoverable in fields where human knowledge breadth is the limiting factor.


> AI systems already exist that are superhuman on breadth of knowledge at undergrad understanding depth

Two problems with this:

1. AI systems hallucinate stuff. If it comes up with some statement, how will you know that it did not just hallucinate it?

2. Human researchers don't work just on their own knowledge, they can use a wide range of search engines. Do we have any examples of AI systems like these that produce results that a third-year grad student couldn't do with Google Scholar and similar instructions? Tests like in TFA should always be compared to that as a baseline.

> new science should be discoverable in fields where human knowledge breadth is the limiting factor

What are these fields? Can you give one example? And what do you mean by "new science"?

The way I see it, at best the AI could come up with a hypothesis that human researchers could subsequently test. Again, you risk that the hypothesis is a hallucination and that you waste a lot of time and money. And again, researchers can google shit and put facts together from fields other than their own. Why would the AI be able to find stuff the researchers can't?


I think this may be the first time I've seen "thought leaders" used unironically. Is there any reason to believe they're right?


What makes you think it was used unironically? :)


This is kinda getting at a core question of epistemology. I've been working on an epistemological engine in which LLMs interact with a large knowledge graph and try to identify "gaps" or infer new discoveries. Crucial to this workflow is a method for feeding real-world data back in: the engine could produce endless hypotheses, but they're just noise without some real-world validation metric.
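A toy version of the gap-finding step might look something like this (made-up entities, plain networkx; the real engine and the validation loop are obviously far more involved):

    # Sketch: treat unlinked node pairs that share many neighbors as candidate
    # "gaps" -- hypotheses to hand to an LLM and then to real-world validation.
    # The entities here are made up.
    import networkx as nx

    g = nx.Graph()
    g.add_edges_from([
        ("protein_A", "pathway_X"), ("protein_B", "pathway_X"),
        ("protein_A", "disease_Y"), ("protein_B", "disease_Y"),
        ("protein_A", "drug_Z"),
    ])

    def candidate_gaps(graph, min_shared=2):
        """Yield unlinked node pairs with at least `min_shared` common neighbors."""
        nodes = list(graph.nodes)
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                if graph.has_edge(u, v):
                    continue
                shared = len(set(graph[u]) & set(graph[v]))
                if shared >= min_shared:
                    yield u, v, shared

    for u, v, shared in candidate_gaps(g):
        print(f"hypothesis: is {u} related to {v}? (shares {shared} neighbors)")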


Room presence detection has been a long-standing challenge in the smart home / Home Assistant community. It's cool to see Home Assistant used and adapted for this use case.
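For the curious, the core inference can be stripped down to "assign the device to whichever room's receiver hears it loudest" (hypothetical rooms and readings, no smoothing, no actual Home Assistant integration):

    # Minimal BLE-RSSI room presence sketch: each room has a receiver reporting
    # signal strength (dBm) for one tracked device; higher (less negative) = closer.
    # Room names and readings are made up.
    from statistics import mean

    readings = {
        "living_room": [-71, -69, -74],
        "bedroom":     [-55, -58, -52],
        "kitchen":     [-82, -80, -85],
    }

    def locate(rssi_by_room):
        """Return the room with the strongest average RSSI."""
        return max(rssi_by_room, key=lambda room: mean(rssi_by_room[room]))

    print(locate(readings))  # -> "bedroom" for the sample data above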



It's been somewhat surprising Twitter has remained as highly available as it has been despite all the firings. It makes me wonder what the cause of this outage was, for being a relatively short one. Is it a sign of things to come, or just unusually high activity related to the ongoing Israel/Palestine crisis?


> as highly available as it has been

So, er, not very, then?

> It makes me wonder what the cause of this outage was, for being a relatively short one.

Business as usual; intermittent Twitter downtime has been increasingly common throughout 2023. Almost nostalgic; a return to the failwhale era of the late noughties, though without the amusing graphics.


Not sure why the downvote - I tend to agree, granted they don't seem to have an official status page. I hope the reason is not political, but who knows.

EDIT: I stand corrected - status.twitter.com redirects to status.twitterstat.us (with an expired certificate), but api.twitterstat.us works and shows no outages today.


I thought it was an interesting read, and I'd be curious to hear a summary of why you left the industry.

It also made me consider the possibilities for a sort of vertically integrated debt collection company. Someone who's had a debt sent to collections is possibly a good candidate for financial literacy education, credit rehabilitation, debt consolidation, refinancing, etc. Could debt collection serve as a loss-leading subsidiary for a bigger business that provides financial literacy and services to people who need them?


I left because I fell in love with building software and I hated debt collections.

The financial literacy angle is interesting, but there's an arguably even more predatory industry around credit repair. People tend not to care about their credit until there's a life event like wanting to buy a house, and then they only care until the score hits 640.

Arguably Credit Karma uses credit scores and credit literacy as the hook, which is probably a loss leader since they have to pay for the scores.

