
That is because there has been an absolute massacre in biotech in the Bay Area. Between tariffs (higher COGS), chaos at the FDA, cuts to NIH funding for basic and translational research, and competition from China, biotech ventures are getting squeezed from all sides.


My pet theory is that we are experiencing stagflation, but only people >70 years old have ever really experienced it before, so most people are just scratching their heads wondering how it’s possible that stocks keep going up (inflation) while jobs are disappearing (stagnation). I am most definitely not an economist, nor am I qualified to play one on tv.


> My pet theory is that we are experiencing stagflation, but only people >70 years old have ever really experienced it before, so most people are just scratching their heads wondering how it’s possible that stocks keep going up (inflation) while jobs are disappearing (stagnation).

We do not seem to be technically experiencing stagflation, or really either half of it, on a national scale: we appear to still be in a weak aggregate economic expansion, and inflation, while higher than the 2% target, is fairly mild at around a 3% annualized rate [0]. And in any case, stocks going up is not inflation (unqualified inflation, the inflation part of stagflation, is consumer price inflation, not asset price inflation).

OTOH, we are in a very weak economy, especially outside of the leading AI firms, and there are quite likely both wide regions and wide sectors of the economy which, considered alone, would be in recession. And while inflation is fairly mild, it is high by the standards of the last couple of decades, especially against near-recession conditions. So for a lot of people the experience is something like stagflation (and there are lots of signs that the economic slowdown will continue alongside rising inflation).

[0] Though as economic statistics are only available after the fact, either of these could have changed. The real defining period for "stagflation" in the US is the 1973-1975 recession, years which saw a minimum of 6.2% inflation. (The term was actually coined in the UK for conditions which saw a massive drop in the GDP growth rate, from 5.7% annually to 2.1% in successive years, but not an actual recession, alongside 4.8% inflation.)
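To put one line of arithmetic under the asset-price vs. consumer-price distinction, here is a quick sketch. The 3% figure echoes the rate mentioned above; the 11% is roughly 1974's US CPI rate; the stock returns are invented for illustration:

```python
# Real vs. nominal returns: why "stocks going up" is not CPI inflation.
# Stock return figures are invented for illustration.

nominal_stock_return = 0.10  # stocks up 10% on the year (hypothetical)
cpi_inflation = 0.03         # consumer prices up ~3%, roughly today's rate

# Fisher relation: deflate the nominal gain by consumer price inflation.
real_return = (1 + nominal_stock_return) / (1 + cpi_inflation) - 1
print(f"real return: {real_return:.1%}")  # ~6.8%

# A stagflation-style contrast: flat nominal assets, ~11% CPI (1974-like).
real_return_stagflation = (1 + 0.00) / (1 + 0.11) - 1
print(f"stagflation-style real return: {real_return_stagflation:.1%}")  # ~ -9.9%
```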


>we are in a very weak economy especially outside of the leading AI firms

Isn't that part of the cause? It sucks up so much investment that there's nothing left for anything else. Or at least nothing else with comparable perceived upside.

Either they pull it off and you're replaced by AGI, or they fail to pull it off and you lose your job to the resulting economic implosion.


> Isn't that part of the cause?

Probably not significantly, IMO.

> It sucks up so much investment, there's nothing left for anything else.

Tariff-inflated input costs combined with weak consumer demand are the reason the rest of the economy is slow, and the reason there aren't places with strong, near-term upside for investment dollars to go. AI being the only thing attracting investment is the effect, not the cause.


My sense is that AI is the one area where boards cannot justify cutting back on investment. If there were no AI boom the rest of the economy would still be getting hammered.

There is still a lot of tech investment, deal making, and hiring going on. It has just left the USA.


The definition of inflation has changed significantly since the 1970s, e.g. owner-equivalent rent. What many people describe as inflation is the change in the base cost of capital purchases, which has demonstrably risen at a substantially faster pace than baseline inflation; see housing, car, education, and healthcare prices.

Economic thought today is that rising asset prices relative to wages are not a sign of inflation. They can be attributed to a lower cost of capital, increased dollar production of assets, lower risk profiles, and other factors. However, when observing housing and education prices, which have seen declining utility over the years, it may be that we simply lack an appropriate word for the divergence of capital prices from wages, productivity, and risk.

There is an undeniable societal impact of this divergence: individuals become less economically and socially mobile. They maintain a net-debt rather than net-asset position for longer, and they may be, practically or perceptually, locked out of societal progress.


My theory is that all money which could have been invested elsewhere went to AI. It can end up one of two ways: either the investments pay off, in which case AI investors become even richer (the poor don't invest in the first place) and the rest of society poorer; or the investments produce no returns, and it is wealth destruction on a grand scale, and everyone comes out poorer afterwards.


> nor am I qualified to play one on tv.

No worries, the ones that are playing economists on TV are not qualified either.


>only people >70 years old have ever really experienced it before,

Part of the observations from an HN reader on the current situation in Finland (I believe everything he says; the "canary" of Europe?):

>things really suck especially for fresh grads. There's fierce competition for jobs like cashier at supermarket, hundreds of applications for one position is normal. Lots of fresh grads with bachelor's or master's degree compete for those jobs too, since they can't find anything better.

In the USA this was one of the exact "unexpected" developments in the Nixon Recession, the like of which no one had seen since the Great Depression. Except that in that depression there were not yet enough college grads in existence to contribute as a major statistic.

I'll never forget the crowds by the mid-'70s vying for a single job opening at a gas station, pumping gas and cleaning windshields back when most stations had only converted half their pumps to self-service. Some of the applicants had advanced degrees; it was not pretty. These were always minimum-wage jobs too, like supermarkets and fast food.

When I started working there was a chemical plant within 25 miles where I could have gotten a job easily if I had graduated a few years earlier. Founded in the 1950s by one of my professors, it was actually pretty advanced. The placement office said they hadn't seen an opening there in over a year. I was lucky to get a job at an appliance dealer because he liked my ability to program, though he never got a computer the whole time; otherwise I wouldn't have gotten noticed. The job was to prep merchandise for delivery, so I was in the warehouse installing a lot of icemakers and doing minor repairs, plus riding along with a service operator one or two days a week to help when the call was commercial refrigeration. Which I was learning, but I also learned that I was kind of replacing an experienced repairman, because they had let too many people go when things first got bad.

About a year later things got worse and he had to kick out the new people and we were gone.

I then began to collect unemployment, and the need had gotten so great that they had what must have been the first computerized institutional job boards for that reason. Slim pickings doesn't begin to describe it. But you had to check in every week and apply for whatever you might be qualified for. I had gotten a cheap car (I had been riding a bike to the appliance co) and was selling fruits and vegetables when a job came up at the plant. Not in a lab, but out in the large reactor areas, working with chemicals and taking readings. The posting had been badly mangled by the typist, and it was not obvious that it was a chemical job. You could still tell it was technical, though. There were over a hundred applicants anyway, and not a realistic chance at all.

Months later I got a lead from my uncle that a lumber company near him needed somebody full time. This was about 35 miles away. All they would do was take my application, without my getting to talk to anybody, so the drive there took longer than the visit. Still, there were about a dozen people applying while I was there, so it must have been hundreds of applications overall too.

Since I was already wearing a tie, I went back to the chemical plant, and what really made the difference was that there were no new job openings, so that time I was the only one who had shown up in a while. There was only one office building outside the gate; the manager was in and came up to see me, but said right away they had no openings. He invited me for a quick tour of the labs and plant anyway. It was good getting inside the gate, but like everybody else it was just optimism in the face of declining prosperity.

Surprisingly, he called me back a few days later and offered a part-time job, 4 days a week. He had talked to my professor, and I had been a good student. I started out doing a lot of different things for different people, mainly for the analytical lab. In less than a year they had me come in 5 days a few times when the workload got heavy, and months later I was full-time.

I still wasn't getting twice the minimum wage, but I was so lucky.

After that I only sold produce on the weekends, and only seasonal things I picked myself like avocados and blueberries.


This is pretty ridiculous, just stupid enough for a bit of silly Friday watercooler conversation.

I have questions. How do facial expression, clothes, and hairstyle impact the model’s predictions? How about Facetune and insta filters? Would putting a clickbaity YouTube thumbnail at the top of my resume make me more employable?

This lines up with what I once heard "second hand" from faculty at a business school about publishing in academic business journals. It was something along the lines of their being a bunch of dancing monkeys pumping out content that entertains readers of HBR and the like.


The word “smart” is doing a lot of work here. And I think both the author and a lot of commenters in this thread are equating it with IQ.

What about EQ? I would also consider people with high EQ to be smart. It is a different kind of smart, and probably one more correlated with happiness.


This article kind of grinds my gears. I feel like there is an unstated assumption that people in pharma R&D are idiots and haven’t thought of this stuff.

Pharma companies care very much about off target effects. Molecules get screened against tox targets, and a bad tox readout can be a death sentence for an entire program. And you need to look at the toxicity of major metabolites too.
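To make "screened against tox targets" concrete for non-pharma readers, here's a toy sketch of the kind of selectivity-window check a panel readout feeds. Every compound, target, IC50 value, and the 100x cutoff below is invented for illustration; real programs use panel- and target-specific criteria:

```python
# Toy selectivity screen against a safety (tox) panel.
# All potencies and the cutoff are invented, not real assay data.

primary_target_ic50_nm = 12.0   # potency at the intended target (IC50, nM)
tox_panel_ic50_nm = {
    "hERG": 8500.0,     # cardiac ion channel, classic tox liability
    "CYP3A4": 900.0,    # major drug-metabolizing enzyme
    "CYP2D6": 15000.0,
}

MIN_SELECTIVITY = 100.0  # made-up program cutoff: want >= 100x window

for target, ic50 in tox_panel_ic50_nm.items():
    # Selectivity index: how much weaker the compound is at the off-target
    # than at the intended target (higher IC50 = weaker binding).
    selectivity = ic50 / primary_target_ic50_nm
    flag = "OK" if selectivity >= MIN_SELECTIVITY else "LIABILITY"
    print(f"{target}: {selectivity:.0f}x window -> {flag}")
```

In this made-up readout the CYP3A4 window comes out at 75x, below the cutoff, which is the kind of single number that can kill a whole program.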

One of the major value propositions of non-small-molecule modalities like biologics is specificity, along with alternative metabolism pathways; no need to worry about the CYPs.

Another thing they fail to account for is volume of distribution. Does it matter if it hits some receptor only expressed in microglia if it can’t cross the blood brain barrier?

Also, the reason off-targets for a lot of FDA-approved drugs are unknown is that they were approved in the steampunk industrial era.

To me this whole article reads like an advertisement for a screening assay.


I work in drug discovery (like for real, I have a DC under my belt, not hypothetical AI protein generation blah blah) and had the opposite experience reading it. We understand so little about most drugs. Dialing out selectivity for a closely related protein was one of the most fun and eye opening experiences of my career.

Of course we've thought of all these things. But it's typically fragmented, and oftentimes out of scope. One of the hardest parts of any R&D project is honestly just doing a literature search to the point of exhaustion.


I side with you. The more you know, the more you discover what you don’t know.

Every attempt to treat the extremely complex dynamics of human biology as a pure state machine, like with Pascal, deterministic if you know all the factors, is a simplification and can safely be rejected as a hypothesis.

Hormones, age, sex, weight, food, aging, sun, environment, epigenetic changes, body composition, activity level, infections, and medication all play a role, even the galenic formulation.


Put it this way: even in Pascal (especially in Pascal) you generally work in source code. You don't try to read the object code, and if you do, you at least try to decompile or disassemble it first. What you don't do (unless you're desperate) is try to understand what the program is doing by directly reading the hexdump (let alone actually printing it out in binary!).

Now imagine someone has written a compiler that compiles something much more sophisticated into Pascal (some 'fourth-generation language', a 4GL). Now you'd be working in that 4GL, not in Pascal. Looking at the Pascal source code here would be less useful. Best to look at the 4GL code.

Biology is a bit like that. It's technically deterministic all the way down (until we reach quantum effects, at least). But trying to explain why Aunt Betty sneezed by looking at the orbital hybridization state of carbon atoms might be a wee bit unuseful at times. Better to just hand her a handkerchief.

(And even this rule has exceptions: Abstractions can be leaky!)


You might be interested in this if you've never seen it: https://berthub.eu/articles/posts/reverse-engineering-source...


>molecules get screened against tox targets

sure! i cover this in the essay; the purpose of this dataset is not just toxicity, but repurposing also

>toxicity of major metabolites

this is planned (and also explicitly mentioned in the article)

>no need to worry about CYP’s

again, this is about more than just toxicity

>volume of distribution

i suppose, but this feels like a strange point to raise. this dataset doesn't account for a lot of things; no biological dataset does

>advertisement

to some degree: it is! but it is also one that is free for academic usage and the only one of its kind accessible to smaller biopharmas


My main point of skepticism about repurposing is whether this gives any new and actionable information. It seems to rely on pre-existing target annotations, and qualified targets already have molecules designed for them. Is the off-target effect strong enough to give you a superior molecule? Why not just start by picking a qualified target and committing to designing a better molecule, without doing all the off-target assay stuff first?


The list price is mostly a starting point for negotiations with PBMs and payers. Drugs are also often aggregated and bundled. So in a lot of cases it is unclear what a drug actually costs.


> One of the realities is more unyielding than the other.

Also ten or one hundred (people) is more than one.

That math can’t be reasoned out of existence either.


> They will follow instructions from a workplace superior with zero push-back

Zero push-back? Or zero push-back in front of the rest of the group?

Humans are pack animals, highly evolved for social connection, and ostracism can be life threatening. The benefits of group membership and cohesion are enough that it is worth tolerating some mistakes and suboptimal outcomes because over time the expected utility for individuals and in the aggregate is much higher when people are working together harmoniously as a group.


I totally agree that if it’s “just” politics or some purely social situation, then sure, the optimal behaviour is the one that prioritises the group dynamics and social pecking order. Even in practical matters like hunting or war, obeying more senior leaders can have a net positive outcome because of their greater experience, etc… This is likely true in many “low information; high variability” situations… which is a lot of them… but not all.

The problem is that we have one set of wiring, one set of instincts, and one set of common social behaviours. These just don’t work in “unnatural” scenarios for which we aren’t evolved, such as pure mathematics or computer science.

The maths just doesn’t care about your seniority and a proof is a proof irrespective of the age of the author.

To truly excel in those “hard sciences” the default wiring isn’t optimal.

The article states that non-default wiring has the downside of also causing autism.


IME, there are two causes of heated scientific debates. (1) Conflicting or insufficient data. (2) Communication issues.

Cause (1) cannot usually be resolved without some sort of technological innovation.

Cause (2) is quite interesting because it is a social problem.

For example, someone comes to you with a Markov decision problem and insists that no form of reinforcement learning could be a viable solution. Why would they do this? Probably because their understanding of RL differs from yours. Or your understanding of the problem differs from theirs. This can be solved by communication.

Stated differently, the topology of your “semantic map” of the domain differs from theirs. To resolve it you must be able to obtain an accurate mapping of their local topology around the point of disagreement onto yours.
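To ground the RL example above: here's a minimal tabular Q-learning sketch on a two-state toy MDP, with states, rewards, and hyperparameters all invented for illustration. Pinning down what "the problem" and "RL" mean this concretely is exactly the kind of shared map that dissolves that sort of disagreement:

```python
import random

# Toy MDP: 2 states, 2 actions; all numbers invented for illustration.
# Action 0 stays in the current state, action 1 flips state; reward 1
# only for taking action 1 in state 0, otherwise 0.
def step(state, action):
    next_state = state if action == 0 else 1 - state
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    return next_state, reward

# Tabular Q-learning.
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0, 0.0], [0.0, 0.0]]  # Q[state][action]

state = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: bootstrap from the best next-state action
    Q[state][action] += alpha * (
        reward + gamma * max(Q[next_state]) - Q[state][action]
    )
    state = next_state

print(Q)  # the greedy policy should prefer action 1 in both states
```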


(FYI— I’m not trying to be sharp, I’m trying to be direct because many of the autists I know hate people beating around the bush. I apologize if that’s presumptuous.)

> The problem is that we have one set of wiring, one set of instincts, and one set of common social behaviours. These just don’t work in “unnatural” scenarios for which we aren’t evolved, such as pure mathematics or computer science.

Social behavior is so complex that this is not a useful way to frame it. Most people see nonsense when they examine something they don’t understand.

> The maths just doesn’t care about your seniority and a proof is a proof irrespective of the age of the author.

You’re conflating sycophancy with tact. They are extremely different.

> To truly excel in those “hard sciences” the default wiring isn’t optimal. […] The article states that non-default wiring has the downside of also causing autism.

Statements like this are like bubble wrap people subconsciously wrap around their egos to protect them from things they're insecure about. Most disagreements in the hard sciences don't stem from people's feelings obfuscating math. And when you're trying to organize a team, solicit people's best efforts to find a creative path forward with a nebulous problem, inspire people about your research to secure funding, inspire people to work on your problem rather than some other problem, or mediate conflicts… all of those dreaded "soft skills" are every bit as important to science as the math as soon as your team is larger than one.

If your mental makeup affords you the ability to step back and say “hold on, I think we’ve got the numbers wrong, here,” then that’s fantastic. If you feel compelled to tell people they’re wrong, you’re probably getting something out of that, emotionally, and you just don’t realize how incredibly counterproductive doing so is. Not being able to effectively leverage a team to collaboratively solve a problem is very very bad for hard sciences, no matter how precise the numbers are, because you’re going to generate a lot fewer of them if nobody’s willing to work with you. Beyond that, in my experience, autists can often communicate really effectively together, but it can break down really quickly as soon as a less cut-and-dried conflict arises, especially if one of them has difficulty regulating their emotional responses, or easily feels alienated. Mediating that requires someone that’s able to recognize how and why someone might be hurting someone else’s feelings, and say “ok, let’s hold on for a second.”

And there are so many kinds of non-default wiring that trying to associate one with hard sciences doesn’t make sense. I went to art school with a ton of autists doing tech art: as a non-autist (with a mean case of ADHD,) I was the most technical one there by a mile. My friend’s wife is an autist artist that is absolutely allergic to math.

You should really challenge your assumptions here. Consider your susceptibility to selection bias, your overconfidence in your ability to gauge the causes and effects of social motivations, and consider that many of your strengths may be far less coupled to autism than you imagine they are.


> ... Most disagreements in the hard sciences don’t stem from people’s feelings obfuscating math.

I didn't clarify my point sufficiently, we ended up "talking past each other" a bit because of this.

I'm not referring to people within the hard sciences having arguments! That happens, but like you said, typically for good and valid reasons.

I was referring to the general population of office workers and the like, outside of the highly-selective Silicon Valley startup bubble that many HN readers might find themselves in.

> many of the autists I know hate people beating around the bush

I'm not on the spectrum, but I do appreciate "direct" communication!

More to the point, you seem to be in the bubble I mentioned, so you may not even be aware of what a typical large corporate or government office worker's experience is like.

In my $dayjob I regularly see objectively bad projects moving forwards effortlessly with zero resistance. I see dozens of supposedly important people just "going with the flow" and nodding in agreement with their superiors because they're terrified of taking an objective stance against the "tribe leader". There are zero pointed questions asked. No technical analysis of any kind. No objective metrics or numbers, ever. No graphs. No charts. Nothing you might recognise as "science".

Just a few weeks ago I was in a meeting where they were presenting a new network security design that had already been signed off and approved for implementation by dozens of senior leaders including the CIO, CTO, CISO, etc...

This multi-million dollar project had already been in motion for six months, and I was the only one to ask pointed questions: "Won't routing all outbound traffic via another cloud provider tank network performance? Won't that result in hairpin networking, where we go out and back in to talk to ourselves? Won't this break our server-to-server firewall rules? What about egress bandwidth costs, have they been estimated? Has anyone tested any of this?"

"No, we didn't test it, the vendor selling it to us assured us it was good, its in the top right Gartner magic quadrant, and it has been signed off, so there's no concerns."

Translated: "Authority, authority, authority."

This is what the "rest of the world" is like, the vast majority of the general population out there working in typical jobs.

You yourself said you know "many autists". You're in the 5% highly selected weird corner of the world, probably a startup or something akin to it.


A big caveat here is how people are using the LLMs. Here they were using them for things like information recall and ideation: LLM as producer and human as editor/curator. They did not test another (my preferred) mode of LLM use: human as producer and LLM as editor/curator.

In this mode of use, you write out all your core ideas as stream of consciousness, bullet points, or whatever, without the constraints of structure or style. Like more content than will make it into the essay. And then you have the LLM summarize and clean it up.

Would be curious to see how that would play out in a study like this. I suspect that the subjects would not be able to quote verbatim, but would be able to recall all the main ideas and feel a greater sense of ownership.
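For concreteness, a minimal sketch of that producer/editor split, assuming the OpenAI Python SDK; the model name, notes, and prompts are placeholders, and any chat-style API would work the same way:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The human does the producing: raw, unstructured ideas (placeholder text).
raw_notes = """
- main claim: LLM-as-editor preserves ownership of the ideas
- contrast with the LLM-as-producer mode tested in the study
- prediction: subjects could recall main ideas, not verbatim text
"""

# The LLM does the editing: structure and polish only, no new ideas.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are an editor. Rewrite the user's notes as clear "
                "prose. Do not add new ideas; only organize and polish."
            ),
        },
        {"role": "user", "content": raw_notes},
    ],
)
print(response.choices[0].message.content)
```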


> Personally, I’ve been seeing the number of changes for a PR starting to reach into the mid-hundreds now. And fundamentally the developers who make them don’t understand how they work.

Could this be fixed by adjusting how tickets are scoped?

