Hacker News

> I'm increasingly concerned that the impact of ML is going to be limited.

You say that while likely typing on a keyboard with ML predictive algorithms, or dictating with NLP speech-to-text, on a phone capable of recognizing your face, uploading photos to services that will recognize everything in them.



And that seems to be the limit of ML. We might eke out self-driving cars, but I don't think we will get much more than that. It is pretty significant, but still limited compared to general-purpose AI.


One step at a time. Over the previous decade, we have:

- Learned how to play all atari games [1]

- Mastered Go [2]

- Mastered Chess without (as much) search [3]

- Learned to play MOBAs [4]

- Made progress in Protein Folding [5]

- Mastered Starcraft [6]

Notice that all these methods require an enormous amount of computation; in some cases we are talking eons of experience. So there is a lot of progress to be made before we can learn to do [1,2,3,4,6] with as little effort as a human needs.
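To make "eons of experience" concrete, here is a back-of-the-envelope conversion of an emulator frame budget into human playing time. The frame count is an illustrative assumption of mine, not a figure from the linked posts:

```python
# Rough estimate: experience consumed by large-scale RL vs. a human.
# FRAMES_CONSUMED is an assumed order of magnitude, not a published number.
FRAMES_CONSUMED = 30e9     # assumption: ~10^10 emulator frames per game
ATARI_FPS = 60             # the Atari 2600 renders 60 frames per second

seconds_of_play = FRAMES_CONSUMED / ATARI_FPS
years_of_play = seconds_of_play / (3600 * 24 * 365)
print(f"~{years_of_play:.0f} years of nonstop play")
```

Even under these hedged assumptions, the agent burns through more than a decade of continuous play on a single game, while a human reaches decent Atari play in hours.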

[1] https://deepmind.com/blog/article/Agent57-Outperforming-the-...

[2] https://deepmind.com/research/case-studies/alphago-the-story...

[3] https://deepmind.com/blog/article/alphazero-shedding-new-lig...

[4] https://openai.com/projects/five

[5] https://deepmind.com/blog/article/AlphaFold-Using-AI-for-sci...

[6] https://deepmind.com/blog/article/alphastar-mastering-real-t...


AI playing games is cool, but applying those techniques to real-world scenarios would require a huge breakthrough. If a huge breakthrough happens then sure, but my point assumed we keep using techniques similar to what we currently use.

Huge breakthroughs happen very rarely, so I wouldn't count on one.


We can learn a lot by observing what we learned from games. With StarCraft in particular, we learned that RL agents can achieve godlike micro play but are weaker at macro. Dota Five showed that it is possible to coordinate multiple agents at the same time with little information shared between them.

This suggests that human theory crafting and ML accuracy should be able to achieve great things. One step at a time.


No, the bots playing complicated games like StarCraft weren't ML but human-coded behaviour that used ML-based position evaluation to handle movement. Position evaluation is just image recognition, so I don't see those bots as doing anything novel ML-wise; the same goes for chess and Go.

Why isn't this interesting for real-world applications? Because games can be simulated perfectly; the real world can't. The bots relied on simulating the entire game from start to finish every frame, and that method can only work in a game where human coders can write down exactly what happens in every single scenario. Training was also dependent on being able to simulate the world perfectly.

And, even worse, it actually took them way more resources than you'd expect: they needed far more human-written code to get decent results. So I am disappointed. Those games showed how weak ML really is, in that even a team of world-class experts spending billions of dollars could do no more than this. Most of it could already be done by amateurs; the only new thing they solved was troop placement, and troop placement is image recognition, as I said. Training that troop-placement evaluator requires being able to run the game perfectly and simulate billions of games.

You could say that I am just raising the bar, but really the things they did in those games didn't change anything. They showed that you can apply image recognition to troop placement and then use that to build a game AI. But they also showed how expensive it is to train and run an ML model capable of evaluating troop placement, even in extremely simple settings like games. So to me, all those games proved is that current ML methods will never achieve anything interesting outside image-recognition tasks or similar, like speech recognition.

Protein folding, sure, but that hasn't happened yet. Also, if ML ultimately lets us become godlike genetic engineers, then it is the genetic engineering that is cool, not the ML.

Edit: To make it clearer: Deep Blue marked the end of that style of AI. I am pretty sure the achievements of the past few years mark the end of the current ML era of AI in the same way. The next era might be interesting, but the current era has already ended. People have already done most of what is possible with current methods; the rest is just coding up the different programs capable of using the image metadata produced by current ML.


You don't know what you are talking about. Your whole premise is that everything is simulated, and therefore that all these systems do is search, so I will refute that and not bother with the rest of the comment. It is not worth it, and you are not arguing in good faith: instead you assume, then support your own assumption.

> No, the bots playing complicated games like StarCraft weren't ML but human-coded behaviour that used ML-based position evaluation to handle movement. Position evaluation is just image recognition, so I don't see those bots as doing anything novel ML-wise; the same goes for chess and Go.

From the AlphaStar article:

> Although there have been significant successes in video games such as Atari, Mario, Quake III Arena Capture the Flag, and Dota 2, until now, AI techniques have struggled to cope with the complexity of StarCraft. The best results were made possible by hand-crafting major elements of the system, imposing significant restrictions on the game rules, giving systems superhuman capabilities, or by playing on simplified maps. Even with these modifications, no system has come anywhere close to rivalling the skill of professional players. In contrast, AlphaStar plays the full game of StarCraft II, using a deep neural network that is trained directly from raw game data by supervised learning and reinforcement learning.

> Why isn't this interesting for real-world applications? Because games can be simulated perfectly; the real world can't. The bots relied on simulating the entire game from start to finish every frame, and that method can only work in a game where human coders can write down exactly what happens in every single scenario. Training was also dependent on being able to simulate the world perfectly.

Dota Five is literally a reinforcement learning algorithm, PPO, on steroids: it does not simulate what will happen, it just plays the game. The same goes for AlphaStar and Atari/Agent57.
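For context, PPO's core idea is a clipped surrogate objective that keeps each policy update close to the previous policy; it is model-free, learning only from played experience rather than forward simulation. A minimal numpy sketch of that objective (function name and scalar inputs are my choices, not anything from the OpenAI Five codebase):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate objective (to be maximized).

    ratio     : pi_new(a|s) / pi_old(a|s), the probability ratio
    advantage : estimated advantage of the action taken
    eps       : clip range (0.2 in the original PPO paper)
    """
    unclipped = ratio * advantage
    # Clip the ratio so the update gets no extra credit once the new
    # policy drifts outside [1 - eps, 1 + eps] of the old one.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Take the pessimistic (minimum) of the two bounds.
    return np.minimum(unclipped, clipped)
```

Nothing here requires a resettable or perfectly simulable environment; the agent only needs trajectories of (state, action, advantage) gathered by actually playing.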


The protein folding thing could be big.


For the average person that is their greatest exposure, but we're already seeing huge movements in medicine, defense, and plenty of places where ML is not in average consumer use (the biggest applications are not for consumers). Add in your note on transportation and it is at the front of a huge section of the world economy. That's all on track today; what will tomorrow's innovations bring (the question asked by sci-fi)?


Image recognition is just image recognition no matter where it is used.


Wrong subthread?


No, all applications of ML currently are image recognition. The rest are just statistics labelled as ML.


That’s hilariously uninformed. NLP is the biggest counterexample, but you also have neural nets working in domains from RF signal processing and drug discovery all the way to video games; none of these applications are using just traditional stats.


Predictive text and image classification are not the limits of ML, even going by what is currently in production. Recommendation engines, ETA prediction, translation, drug discovery, medicinal chemistry, fraud detection: these are all areas where ML is already very important and present.

Sure, it's not artificial general intelligence, but what technological invention in history would compare to the impact of AGI? That's sort of a weird bar.



