Hacker News | ph4rsikal's comments

Markdown is the secret winner of the AI early years.

I'm not so sure. It's definitely the de facto standard, but I suspect minimal HTML is better. Just enough tags to add structure and meaning (H1-H6, p, a, em, section for structure including nesting, maybe more). LLMs were trained on a lot of HTML, they're good at processing it. HTML requires more tokens than markdown but I believe it's worth it. I'll find out in a few weeks as I experiment with both.
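A rough way to see the verbosity tradeoff is to render the same note in both formats and compare sizes. This is only a sketch: character count is a crude proxy for token count (a real comparison would run both strings through an actual tokenizer), and the sample strings are invented for illustration.

```python
# The same structured note in Markdown and in a minimal HTML subset
# (h1/h2, p, em, a, section). Character count is a rough stand-in
# for LLM token count; real token counts depend on the tokenizer.
markdown = """# Results
## Setup
We ran *three* trials.
See [the data](https://example.com/data).
"""

html = """<section><h1>Results</h1>
<section><h2>Setup</h2>
<p>We ran <em>three</em> trials.
See <a href="https://example.com/data">the data</a>.</p>
</section></section>
"""

# The HTML version costs more characters, but the nesting of
# <section> elements is explicit rather than implied by heading levels.
print(len(markdown), len(html))
```

The extra cost buys explicit structure: in Markdown, section nesting has to be inferred from heading levels, while the HTML subset encodes it directly.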

cries in org-mode

org-mode is amazing for humans. I, as a real human and not a robot, use it every day.

I feel you on this too.

My observation is that research, especially in AI, has left universities, which now focus less of their research on STEM. Research now appears to be done by companies like Meta, OpenAI, Anthropic, Tencent, and Alibaba, among many others.

Universities (outside a few) just have much weaker PR machines, so you never hear what they do. Also, their work is not user-facing products, so regular people, even tech power users, won't see it.

I came across a good example of that a few years ago. Caltech had a page on their site listing Caltech startups.

There were quite a few of them--by number of startups per year per person, Caltech was actually generating startups at a higher rate than Stanford. But almost none of those Caltech startups were doing anything that would bring them to the public's attention, or even to the average HN reader's attention.

For example one I remember was a company developing improved ion thrusters for spacecraft. Another was doing something to automate processing samples in medical labs.

Also, almost none of them were the "undergraduates drop out to form a company" startup we often hear about, where the founders aren't actually using much of what they learned at the school, and the school functions more as the place that brought the founders together.

The Caltech startups were most often formed by professors and grad students, and sometimes undergraduates that were on their research team, and were formed to commercialize their research.

My guess is that this is how it is at a lot of universities.


Every university I've worked in has been dominated by this paradigm, has an office set up to support it, and a bunch of policies around what it means for your doctoral supervisor to also be your employer, etc.

Not sure about that. How would a university test scaling hypotheses in AI, for example? The level of funding required is just not there, as far as I know.

Universities are also not suited to test which race car is the fastest, but that does not obviate the need for academic research in mechanical engineering.

Perhaps, but the fastest race car is not possibly marshalling in the end of human involvement in science, so you might consider these two to merit considerably different levels of funding.

>marshalling in the end of human involvement in science

Good riddance! But not relevant in the least.


Impact size is not relevant to funding allocation?

Your attempts to smuggle your conclusions into the conversation are becoming tiresome. Profiling a private company's computer program is not impactful research. The best-fit parameters AI people call scaling exponents are not properties like the proton lifetime or electron electric dipole moment. Rest assured, there remain scientists at universities producing important work on machine learning.

There are a million other research things to do besides running huge pretraining runs and hyperparameter grid searches on giant clusters. To see what, you can start by checking out the best-paper and similar awards at NeurIPS, CVPR, ICCV, ICLR, ICML, etc.

This issue of accessibility is widely acknowledged in the academic literature, but it doesn’t mean that only large companies are doing good research.

Personally I think this resource mismatch can help drive creative choices of research problems that don't require massive resources. To misquote Feynman, there's plenty of room at the bottom.


That's a specific field at a very specific time. In general there is a difference between research and development: you'd expect the early work to be done in academia, while the work to turn it into a product is done by commercial organizations.

You get ahead as an academic computer scientist, for instance, by writing papers, not by writing software. Now, there really are brilliant software developers in academic CS, but most researchers write something that kinda works and give a conference talk about it -- and that's OK, because the work to make something you can give a talk about is probably 20% of the work it would take to make something you can put in front of customers.

Because of that there are certain things academic researchers really can't do.

As I see it, my experience getting a PhD and my experience in startups are essentially the same: "how do you make doing things nobody has ever done before routine?" Talk to people in either culture and you see the PhD students are thinking about working either in academia or at a very short list of big prestigious companies, and people at startups are sure the PhDs are too pedantic about everything.

It took me a long time of looking at other people's side projects -- usually "I want to learn programming language X" or "I want to rewrite something from Software Tools in Rust" -- to realize just how foreign that kind of creative thinking is to people. I've long held that a side project is not worth doing unless (1) I really need the product or (2) I can show people something they've never seen before, or better yet both. These sound different, but if something doesn't satisfy (2) you can usually satisfy (1) off the shelf. It just amazes me how many type (2) things stay novel even after 20 years of waiting.


Gemini? Not anywhere near.

I bought shares after the IPO but sold them all after trying their patty and then forgetting the rest in the freezer for 6 months.

I love investing based on feels, rather than DD

I think the age of SaaS and software companies is over. Judging by all the overhyped TikTok videos, there are lots of roles that are not needed.

LangChain is not over-engineered; it's not engineered at all. Pure Chaos.

Much like how "literally" doesn't literally mean "literally" anymore, "over-engineered" in most cases doesn't mean "too much engineering happened" but "wrong design/abstractions", which of course translates to "designs/abstractions I don't like".

Under-engineered is a much better term.

I wish job openings for anything LLM related would stop asking for experience with langchain

"dehumanizing"?

What is human about a career website where you can upload your document and answer questions about your sex life, race, religion, and gender?


Why do you work on this when Claude Cowork for Finance exists?


Tech jobs are up, just not in the US. I personally know two people who were hired without an interview to fill two open roles.

https://muneebdev.com/software-development-job-market-india-...


Because these are our societies. We build them. If this door were to swing both ways, I would not have an issue. But it never does. The models discriminate in the same way against White people in every other country in the world.

