Hacker News | KLK2019's comments

How timely: a great video from one of my favourite urban bloggers, About Here: Did these designs crack the code to wood towers?

https://www.youtube.com/watch?v=gQ42KhybIUk

They go through different design themes from a competition and look at real-world examples.


Working for a US multinational outside the US, they always had us take mandatory training sessions on this (bribery and the FCPA). I always found it very admirable that they were enforcing this, and that it probably had a net benefit for my country (i.e. criminalizing corruption). Sad to see this change.


> that it probably had a net benefit for my country (i.e. criminalizing corruption)

Corruption is criminalized everywhere. The main issue is that it usually affects the poor.


Given the meta context that this article reinforces the view that AI can replace researchers' jobs, I found this part of the article very true to how I use AI tools at work.

"Stokes stresses that while the prediction was intriguing, it was just that — a prediction. He would still have to conduct traditional MOA studies in the lab.

“Currently, we can’t just assume that these AI models are totally right, but the notion that it could be right took the guesswork out of our next steps,”...so his team, led in large part by McMaster graduate student Denise Catacutan, began investigating enterololin’s MOA, using MIT’s prediction as a starting point.

Within just a few months, it became clear that the AI was in fact right.

“We did all of our standard MOA workup to validate the prediction — to see if the experiments would back-up the AI, and they did,” says Catacutan, a PhD candidate in the Stokes Lab. “Doing it this way shaved a year-and-a-half off of our normal timeline.”


AI is becoming a difficult term to grapple with, especially because the public just assumes AI = ChatGPT = "ChatGPT discovered a new medicine"

In reality, a lot of research uses a variety of different general ML tools that have almost nothing to do with transformers, much less LLMs.


You know this water-muddying technique is being used on purpose, don't you? Most of the time it's to attract money. At least in this case the aim is noble.


Same. I note in advance that I'm not sure whether you're referring to the use of LLM tools in your research or rather to the results of your own domain-specific application of deep learning etc. -- here, I assume the former.

I feel like the common refrain of most LLM success stories over the past year is that these tools are of significantly greater help to specialists with "skin in the game", so to speak, than they are to complete amateurs. I think a lot of complaints about hallucinations reflect the experience of people who aren't working at the edge of a field where they've read all the existing literature and there simply aren't other places to turn for further leads. At the frontier, moreover, the probability that there exists a paper or book that covers the exact combination of topics that interests you is actually rather low; peer discussions are terrific, but everyone is time-starved.

Thus I find the synthetic ability of LLMs to tie together one's own field of focus with those you've never thought about or are less familiar with to be of incomparable utility. On top of that, the ability to help formulate potential hypotheses and leads -- where of course you the researcher are ultimately going to carry out the investigation or, in the best case, attempt to replicate results. Conversely, when I'm uncertain of my own conclusions, I often find myself feeding the best LLM I have access to the data I reasoned from to see whether it independently gets to the same place. I'm not concerned about hallucinations because I know there's nobody but me ultimately responsible for error -- and, at the fringe of knowledge, even a total fabrication can inspire a new (correct) approach to the matter at hand.

I think if I had to succinctly describe my own experience it would be that I never get stuck any more for days, weeks, months without even a hint of where to turn next.

Related, there's an ancient Palantir blog post (2010!) that always stuck in my memory, about a chess tournament that allowed computers, grandmasters, amateurs and any combination of the above to enter [0]. At that time, the winning combination turned out to be amateurs with the best workflow for interfacing with the machine. The moral of the story is probably still true (workflow is everything), but I think these new tools for the first time are really biased towards experts, i.e. the best workflow now is no longer "content neutral" but always emerges from a particular domain.

[0] https://web.archive.org/web/20120916051031/http://www.palant...


While I agree, one must be careful anyway. I'm ignorant in most fields, reasonably good at two, and quite good (but far from excellent) in one. So while there is a lot to learn in the former, when it comes to the latter, all LLMs, including SOTA models, give me a very high percentage of answers that are misleading, wrong, dangerously incomplete, only superficially correct, an amalgam of correct and incorrect bits, etc. Knowing this first-hand, repeatedly, on hundreds and hundreds of issues, I have basically built a deep, methodological distrust towards LLM answers. In the end, I assume the answer to be wrong, but I look for verifiable hints that could lead me in the right direction. This is my default working mode in my niche.


Here is the original study published in Nature Microbiology.

https://www.nature.com/articles/s41564-025-02142-0

Wanted to share what I thought were the interesting parts, from the university press release.

"To date, AI has been leveraged as a tool for predicting which molecules might have therapeutic potential, but this study used it to describe what researchers call “mechanism of action” (MOA) — or how drugs attack disease.

MOA studies, he says, are essential for drug development. They help scientists confirm safety, optimize dosage, make modifications to improve efficacy, and sometimes even uncover entirely new drug targets. They also help regulators determine whether or not a given drug candidate is suitable for use in humans... A thorough MOA study can take up to two years and cost around $2 million; however, using AI, his group did enterololin’s in just six months and for just $60,000.

Indeed, after his lab’s discovery of the new antibiotic, Stokes connected with colleagues at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) to see if any of their emerging machine learning platforms could help fast-track his upcoming MOA studies.

In just 100 seconds, he was given a prediction: his new drug attacked a microscopic protein complex called LolCDE, which is essential to the survival of certain bacteria.

“A lot of AI use in drug discovery has been about searching chemical space, identifying new molecules that might be active,” says Regina Barzilay, a professor in MIT’s School of Engineering and the developer of DiffDock, the AI model that made the prediction. “What we’re showing here is that AI can also provide mechanistic explanations, which are critical for moving a molecule through the development pipeline.”


> Indeed, after his lab’s discovery of the new antibiotic, Stokes connected with colleagues at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) to see if any of their emerging machine learning platforms could help fast-track his upcoming MOA studies.

It must be so cool to work at a university. You can just walk across campus to meet with experts and learn about or apply the cutting edge of any given field to solve whatever problem you're interested in.


Can confirm it is often very cool to be able to do this.


Working in a university that has significant research facilities is awesome.

Even when you work in administration, learning opportunities abound and are easy to seize.

I’m too shy to just walk into a random lab and ask questions, but three times a year, my boss likes to organize a tour of a different research facility and I really appreciate that.


That's only if there are experts there. The average college is not really what you're thinking it is.


You can email them and they are usually quite happy to talk.

Of course not when they get a media storm like these people. But I regularly correspond with experts in adjacent fields who have interesting papers put out.


Can confirm that most of the time, when reproducing/implementing a paper or trying to extend it to another field, researchers are pretty OK (some very enthusiastic) to chat over email about it. As long as you've actually read the paper(s) or the code (if any), and there's no expected free work...

I sometimes get unpublished artefacts (matlab/python/fortran code, data samples) just by... asking nicely, showing interest. And I'm not even in academia or a lab.


In my sample of 1, a cold email to a researcher showed that they are enthusiastic when someone has read their paper and asks relevant questions.

I don't remember the paper's subject or the researcher's name (more than 20 years ago), but I remember that she was an ornithologist, the subject was quite niche, and the response I received to my questions was longer than the article that prompted me to ask them.


The Reverse Gell-Mann Amnesia Effect: one vastly underestimates the work it took to reach a conclusion in a book they haven't finished reading. Then, without reading another page, they assume everything in the entire bookshelf is at the same shallow level as their own misapprehension of the book they didn't finish.

I rankly speculate: for the set of low-effort comments on HN, there are more Reverse Gell-Manns than there are Gell-Manns.


There are 1000s of state and community colleges. Not everyone is doing groundbreaking research. Some places, they just teach, which is fine. Additionally, I have no idea what you're trying to say.


There are experts at every single state and community college, with PhDs being the typical floor.

> Not everyone is doing groundbreaking research.

They don't need to be, they only need to be able to understand recent, groundbreaking papers in their fields, and be able to bounce ideas with colleagues in different disciplines who walk up to them.

Insinuating that faculty staff are either on the bleeding edge or useless is a false dichotomy.


Is DiffDock a large language model?

Because that is what the general public believes AI means, and OpenAI says they are building thinking machines with it, and this headline says "predicted".


It's a 3D equivariant graph neural network; a class of models that was hot before LLMs stole the limelight. https://en.wikipedia.org/wiki/Graph_neural_network
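For anyone unfamiliar, the core idea of a graph neural network is neighbourhood aggregation: each node repeatedly mixes its own features with those of its bonded neighbours. A minimal, purely illustrative NumPy sketch (not DiffDock's actual architecture, which additionally enforces 3D roto-translation equivariance):

```python
import numpy as np

def message_passing_step(features, adjacency, w_self, w_neigh):
    """One round of neighbourhood aggregation (mean pooling over neighbours)."""
    deg = adjacency.sum(axis=1, keepdims=True)  # node degrees
    deg = np.maximum(deg, 1)                    # guard against isolated nodes
    neigh_mean = adjacency @ features / deg     # average neighbour features
    return np.tanh(features @ w_self + neigh_mean @ w_neigh)

# Tiny 3-atom "molecule": atoms 0-1 and 1-2 are bonded
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.eye(3)                 # one-hot stand-in for atom types
rng = np.random.default_rng(0)
w_s = rng.normal(size=(3, 3))     # random (untrained) weights
w_n = rng.normal(size=(3, 3))

out = message_passing_step(feats, adj, w_s, w_n)
print(out.shape)  # (3, 3): one updated feature vector per atom
```

Stacking a few such layers lets information flow across the whole molecular graph, which is what makes these models natural for chemistry.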


No, it’s a diffusion model trained on proteins.


What????

We have known that LolCDE was a vulnerability in E. coli since well before 2016, and have known inhibitors of the complex, globomycin being one of them, since 1978

https://journals.asm.org/doi/full/10.1128/jb.00502-16

https://pubmed.ncbi.nlm.nih.gov/353012/

Is enterololin just another form of globomycin?

Is AI smart or are scientists just getting dumber?


AI just picked up these references and gave an answer. Scientists in this field should have read these papers instead of relying on AI.


From what I understand, they used a diffusion model (DiffDock) to predict the mechanism. These types of models are not LLMs and don't need to be trained on text.


There are probably 37,392 papers on this. So your "should" is probably just impossible for humans.


This is such a ridiculous argument. They could have read five papers, couldn’t they?


Picking up five needles is trivial. Picking up five needles in 50000 haystacks is difficult.


How do they know which five papers to read?


Yes.


Does anyone have the pre-print? I'm not affiliated with a university any more and the usual suspects don't upload papers overnight any more.


Their inboxes might be overflowing, but researchers are usually happy to email a copy if you don't have access elsewhere.


> A thorough MOA study can take up to two years and cost around $2 million; however, using AI, his group did enterololin’s in just six months and for just $60,000.

Beautiful. Finally something for AI/machine learning that is not coding autocomplete or image generation.

It would be very interesting to keep track of this area for the next 10 years, between AlphaFold for protein folding and this to predict how a molecule will behave: how cost is reduced and trials get fast-tracked.


Based on the paper for DiffDock (https://arxiv.org/abs/2210.01776) it looks like it was a great use case for a diffusion model.

> We thus frame molecular docking as a generative modeling problem—given a ligand and target protein structure, we learn a distribution over ligand poses.
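To make "learn a distribution over ligand poses" concrete: a diffusion model learns the score (gradient of the log-density) of the target distribution and draws samples by iteratively denoising. A toy, purely illustrative NumPy sketch using Langevin dynamics on a 1-D Gaussian, where we hand the sampler the exact score instead of a learned network; DiffDock does the analogous thing over rotations, translations and torsions of a ligand:

```python
import numpy as np

mu, sigma = 2.0, 0.5  # the "unknown" target distribution N(mu, sigma^2)

def score(x):
    # gradient of log-density of N(mu, sigma^2); a diffusion model
    # approximates this with a neural network instead
    return -(x - mu) / sigma**2

rng = np.random.default_rng(1)
x = rng.normal(size=5000)              # start from pure noise
step = 0.01
for _ in range(500):                   # Langevin dynamics: drift + noise
    x = x + step * score(x) + np.sqrt(2 * step) * rng.normal(size=x.size)

print(x.mean(), x.std())  # approaches 2.0 and 0.5
```

The samples converge to the target distribution without ever evaluating its density directly, which is why the same recipe scales to spaces (like pose space) where the density is intractable.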

I just hope work on these very valid use cases doesn’t get negatively impacted when the AI bubble inevitably bursts.


Most people consume CBC very differently. I mostly listen to the radio, and the content is pretty good, with great local news coverage. I don't think I know anyone else who follows CBC closely, but I do think it's overall seen as a trusted source of news.

Personally, when I think of recent content from CBC, I remember reading more about climate change and reconciliation with our Indigenous history, both of which I think are fairly important to be informed of. But I do wish they did more investigative work.


Very fascinating article; I appreciate the policy and business context behind some of the new faces I see in my community. For context, I live close to a Canadian university campus which has recently gone through an explosion in foreign enrollment. My partner works as a physician on campus, so I hear anecdotally how it has changed her practice, shifting from sexual health issues (and of course exam deferments) to more mental health issues arising from the challenges and pressures these foreign university students face (33% of the student body now, and probably 50%+ of her cases). The responsibilities they face are immense at such a young age, where their families put so much on the line and expect these students to succeed.

I have always been fascinated by the immigrant story, and in Canada I feel an evolution is happening. Like the story mentioned, it seems these young, desperate and motivated individuals represent the ideal worker for Canadian employers/government. I have encountered Uber drivers, retail workers and tech professionals, all from this demographic, and they have so much grit. This feels different from the previous generation (my parents'), who mostly filled low-skilled labour but had children who mostly went on to more skilled professions.

I just hope some form of the Canadian dream is still viable. My parents were able to afford a house and have kids who did well in school; to be honest, that was their saving grace, both in terms of happiness, retirement, and giving them a sense of pride and accomplishment for their many years of struggle. I know some of my peers fear whether they can accomplish even that (even with their six-figure incomes); I can't imagine what these newcomers are facing.


Citation below:

https://www.canada.ca/en/employment-social-development/news/...

With further breakdown:

https://www150.statcan.gc.ca/n1/daily-quotidien/190226/dq190...

It seems to me the key drivers are increases in the guaranteed income supplement and the child tax benefits. I also believe the economies in urban centres have been doing quite well, combined with a highly educated workforce. As others have mentioned, nobody feels better off; the dramatic increase in housing prices makes everyone feel as if we are worse off than before. However, I believe the housing crisis isn't due to immigration alone, but to a combination of factors including low interest rates, increases in personal income, and limited desirable housing supply due to poor public transit investment and bad planning policy in urban centres.

