Hacker News | kwikiel's comments

Wondering why the obvious solution isn't applied here: instead of assigning already well-known problems that have been solved a thousand times, give students open research opportunities - problems on the edge of what's possible, with no way to cheat with AI. And if AI is able to solve those, give harder tasks.

The same reason we give beginner math students addition and subtraction problems, not Fermat’s last theorem?

There has to be a base of knowledge available before the student can even comprehend many/most open research questions, let alone begin to solve them. And if they were understandable to a beginner, then I’d posit the LLM models available today would also be capable of doing meaningful work.


Made this mistake years ago: figured I'd just throw it in a Docker container with Python 2.7, problem solved. Eight years later nothing builds anymore: base images gone, dependencies don't resolve. Turns out containers don't actually freeze time; they just delay the pain.


That's just an illustration, and it's misleading - I'll fix it ASAP and show real examples. I've run Mistral OCR on other benchmarks.


I wish Zed would implement support for Jupyter notebooks first. Maybe that's something I could contribute.


I migrated to using the # %% syntax in plain .py files.

For me, it's a superior experience anyway. I also prefer it in editors that support both (like VS code).

You can run the REPL with a Jupyter kernel as well.

https://zed.dev/docs/repl#cell-mode
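For anyone who hasn't seen the convention: here's a minimal sketch of what a `# %%` file looks like. It's an ordinary `.py` file that runs top to bottom like any script, but editors with cell support (VS Code, Zed's REPL cell mode) treat each `# %%` line as a notebook-style cell boundary you can execute independently.

```python
# %% Set up some data
# This whole file is plain Python; the "# %%" comments are only
# meaningful to the editor, which runs each cell in a Jupyter kernel.
import statistics

values = [2, 4, 6, 8]

# %% Compute a summary statistic
# Re-run just this cell after editing, without re-importing anything.
mean = statistics.mean(values)
print(mean)
```

Since the markers are just comments, the file still works fine under plain `python file.py`, in version control diffs, and in code review - which is most of the appeal over `.ipynb`.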


It’s coming, there is already basic support for Jupyter kernels https://zed.dev/docs/repl


If you assume that the cost of inference keeps decreasing while they manage to get a billion people hooked on a $42-per-month plan...

That's a $0.5 trillion annual revenue rate.
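Spelling out the back-of-envelope arithmetic (assuming a $42-per-month plan and one billion subscribers):

```python
# Hypothetical back-of-envelope: 1 billion subscribers at $42/month.
subscribers = 1_000_000_000
monthly_price = 42  # dollars
annual_revenue = subscribers * monthly_price * 12
print(f"${annual_revenue / 1e12:.2f} trillion / year")  # $0.50 trillion / year
```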


With 3.6B people in the workforce, I'd argue there aren't a billion people in need of a computer, let alone an AI subscription plan. I'm of course assuming most AI subscriptions are work-related.


I use free AI tools and will never pay for it. There are many people like this.


Indeed, there is a massive gap between free and $1/month. Personally I outright refuse to buy anything digital involving monthly payments (except where there is no alternative like domain names, etc.)


Additionally, I pay for Anthropic's products and will not consider OpenAI offerings. There are many people like this.


I pay more for a faster PC and a bigger screen.

I also pay for better AI. My time - and probably yours - saved by using superior tooling is worth far more than the meagre few dollars spent each month on a subscription.

You're stepping over dollars to pick up dimes.


AI tooling that costs money does not provide anything better than the freely available tools. If you built a product on top of it, I'm sorry, but your product is not worth a dime.


Oh, just a billion? Why not 100 billion while we're at it?


Because there are only about 8 billion people on the planet?

Come on, man.


You are not thinking intergalactically.


We kind of actually do trade with ants: in some forests we build wooden structures to protect their nests, and in exchange they keep the ecosystem safe from rotting animal carcasses, cleaning and rotating biomass.


That's not trade; we use them like a tool, without them even knowing or understanding it.


Will share a Python implementation soon, as a kind of executable pseudocode that can then be ported to any platform.

This project is kind of the ultimate nerdsnipe: the math is quite simple - you don't need a PhD to understand it - and actually implementing it would teach you linear algebra faster than mindlessly doing exercise sets.


Haha yes :) Publish it, Kacper!

The project is a nerdsnipe for math geeks, because there are multiple small things that beg to be proven / described mathematically. For example: what's the tradeoff between the number of bits we lose by embedding position vs the bits of information we gain by knowing which bucket a weight belongs to?

In other words: is it possible that storing weights in the bucketed form actually gives us higher precision than the regular form? For Q8 we get just 4 bits to store the weight (plus 1 bit for the sign and 3 bits for the location), but those 4 bits only need to express numbers from a smaller range than before.
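A toy sketch of that intuition (not the project's actual scheme - the bucket layout, `quantize`, and `bucketed` here are illustrative): compare spending 4 magnitude bits over the full range against the bucketed form, where 3 bits select one of 8 sub-ranges and the 4 bits only have to cover that narrower sub-range.

```python
import random

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(1000)]
lo = min(abs(w) for w in weights)
hi = max(abs(w) for w in weights)

def quantize(x, lo, hi, bits):
    """Uniformly quantize |x| to 2**bits levels over [lo, hi], keeping the sign."""
    step = (hi - lo) / (2 ** bits - 1)
    level = round((abs(x) - lo) / step)
    return (lo + level * step) * (1 if x >= 0 else -1)

# Flat scheme: 4 bits of magnitude over the whole range.
flat_err = sum(abs(w - quantize(w, lo, hi, 4)) for w in weights)

# Bucketed scheme: 3 bits pick one of 8 equal sub-ranges,
# then the same 4 bits quantize within that sub-range.
def bucketed(x):
    width = (hi - lo) / 8
    b = min(int((abs(x) - lo) / width), 7)  # 3-bit bucket index
    return quantize(x, lo + b * width, lo + (b + 1) * width, 4)

bucketed_err = sum(abs(w - bucketed(w)) for w in weights)
print(bucketed_err < flat_err)  # True: each 4-bit code spans 1/8 of the range
```

This only shows the "smaller range" half of the question; the interesting part is whether the positional information you give up by sorting weights into buckets costs more or fewer bits than this gains.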


This rule is very wrong for things like an iPhone, which, while expensive, won't use a lot of electricity over its lifetime.


I think they are including all the input resources (e.g. power for the machines that manufacture the iPhone's parts, power for the computers of the designers and engineers who designed it). It seems to be some sort of extrapolation of the Second Law of Thermodynamics, which states that the entropy of the universe is always increasing.


Especially considering how power-hungry it is to mill/finish the phone chassis, plus the cost of cutting-edge lithography and low-yield wastage.


Seems like you can still buy sheets like that: https://catalog.usmint.gov/2-four-note-sheet-B9491.html?cgid...


Software compensation for non-uniform screen brightness issues (for example, a software fix for Apple's Flexgate).

