I don't think "most appeals" is accurate. It's the aesthetic that results in the most profit, which just means that it's optimal in terms of making tradeoffs between level of appeal to various consumer segments, price points, cost to manufacture, etc.
But maximizing profit and the tradeoffs that result from that are definitely not equivalent to most appealing to the general population.
The statement about the lack of inflation from QE and other stimulus programs from 2008 is pretty questionable. There's been little inflation as measured using usual consumer price indices, but the construction of those indices is typically fairly focused on consumer goods and underweights the assets that rich people tend to invest in (stocks, real estate, bonds, etc).
The QE and stimulus programs from 2008 were significantly more targeted toward the upper and upper-middle classes (arguably without that much trickle-down), and so there wouldn't be much significant inflation as measured by consumer price indices.
But if we look at the assets that rich people invest in (since it's mostly wealthier people who benefited from the 2008 stimulus programs), then I'd say there's been a significant amount of inflation - P/E ratios for stocks have been historically high in the last few years, real estate in desirable cities has gotten significantly more expensive, and bond yields have been low.
We seem to be already seeing some of the same, with the stock market being pushed up by the Fed's commitment to $4T+ in stimulus this time around and ever-lower interest rates.
QE never really worked, bailed out a bunch of corrupt and broken companies that should have gone bankrupt, and kicked the can down the road. They were supposed to unwind QE1 but they never did. And $4T in toxic QE1 assets sat on the Fed's balance sheet going into this mess. The Fed is propping up the bond market and toying with the idea of buying equities. We just had 17 million people file for unemployment in 3 weeks and the Dow went up a few hundred points. Our markets have been completely decoupled from economic reality because the Fed is faking demand and not letting the markets crash like they should. Our fiscal deficit is already $3 trillion this year and it's only April. This is a recipe for disaster. How much of corporate America will the Fed own when this all comes crashing down? Will we have a nationalized economy by default?
The markets dropped because other countries already had similar issues.
The market has it mostly priced in, but the rise of the market is attributed to the fact that European countries are flattening the curve, to vaccines, and to ramped up testing in the US.
To paraphrase Luke Skywalker in The Last Jedi, everything in that sentence is wrong--or at least seriously questionable.
> The market has it mostly priced in
It does? I've seen about the same number of analyses indicating that companies are doing dubious things to bolster their earnings on paper (and haven't priced in the full cost of the crisis) as I've seen supporting your argument.
> but the rise of the market is attributed to the fact
Explanations for this abound. From basically every corner, qualified, popular, neither, and both. Post-Black-Swan shockwaves are usually accompanied by a lot of confusion and after-the-fact pseudorationalization, and this situation is no different.
> European countries are flattening
Sort of (recorded clusters/outbreaks keep getting reported), partially (some countries are doing really well, some are not), and only according to really, really new data.
> vaccines
Presumably you mean vaccine research getting underway? This is happening, to be sure. Still has the attribution (after-the-fact potential for false correlation) problem mentioned above.
> ramped up testing in the US
There's no consensus on the effect of more testing on the markets either. Testing builds confidence/information on the one hand, but in many US states it has revealed worse-than-expected (well, really worse-than-hoped) numbers of sick people on the other.
I guess the upshot of the above is: you make a very authoritative statement about why the markets have behaved in a certain way. That information isn't even beginning to be known, or, at this early phase, knowable, with any sort of confidence.
QE1 worked well and the banks are not corrupt. It's in the later years, while stocks and the economy were on a tear, that the Fed et al. refused to raise interest rates ... this perpetuated the housing bubble among other things, which is the #1 source of inequality (hint: it's not between the billionaires and the rest of us, it's between the propertied and the unpropertied).
QE1 did not work at all. And the Fed's 0 percent interest rates have distorted capital markets causing corporate debt to skyrocket. That alone is propping up zombie companies and this overleverage is what will make the coming recession / depression even worse than 2008. Note: the Fed is violating the Federal Reserve Act by using BlackRock as a proxy for bond purchases. So please do not tell me they are not corrupt. They're beyond corrupt.
This. The 'race to the bottom', committed globally by central banks, to get to 0% interest rates, and keep them there for the sake of keeping the markets 'up', exacerbated by top level politicians' desires/directives.
If there's no cost to borrow, why not keep borrowing? And keep borrowing? And lending? Until things really unwind in a recession/depression that will de-lever everything/everyone.
The moral hazard? I try to maintain my own family finances, keep a budget, save for expensive things. And in the span of 3 weeks the US has spent about $6,000 of every individual's future ($2T / 340M people).
> And in the span of 3 weeks the US has spent about $6,000 of every individual's future ($2T / 340M people).
You can’t really claim that. The reality in the progressively taxed US is the higher tax brackets pay the overwhelming majority of the total tax bill.
In reality, the payback of that $2T will fall on the top taxpayers. So it’s more like $10 for most Americans and $100,000+ for the top percentiles. Bezos types will pay tens of millions.
We're about to find out what global QE to infinity does.
I'd expect more events like the recent boom in stock prices in spite of massive global unemployment, and more waves of unrest like the Arab Spring, which came after 2008 and had its origins in economic disruption. Revolutions often come after the unbearable has passed.
Even if we quickly overcome the virus the global economic impact of the lockdown and QE will be severe and long lasting. It's quite possible that due to QE/stimulus none of that will show up in stock prices and they will shoot up, boosting inequality again.
Based on past experience post-dot-com-bust-bailout and post-2008-bailout:
Prepare for the $1.5M starter home financed by a 0.08%-interest, 90-year, no-money-down mortgage, $800k student loans at 0% interest, and business lines of credit at 2% to any business that has shown any revenue at all in the last month and can produce one mammal capable of fogging glass. Any mammal with provable respiration will be able to get a car loan and a credit card.
Go long on real estate, stocks, bonds, Bitcoin, Litecoin, Dogecoin, all the shitcoins that don't even work, gold, oil, cow farts, and pogo stick futures.
Expect more monstrously overfunded "unicorn" startups that lose massive amounts of money and produce very little. These are basically Ponzi schemes targeting the very rich and venture funds.
Wages however will continue to stagnate. Everything always goes up but wages.
As a result of wage stagnation in 2024 or 2028 another even more asinine Populist than Trump will be elected; whether they are "right-wing" or "left-wing" will depend on which side is able to produce a louder demagogue and better memes. Meanwhile the rich will build orbital bases and prepare to leave the planet.
You've put into words what I was thinking about the other day.
I always hear folks say "Weird there was no inflation after the 2008 stimulus" or even experts try to claim that there is none.
But it's not about inflation of everyday consumer goods, it's about the massive inflation in things that consumers don't purchase regularly (things like houses). And at this point, I think we've done a lot to essentially price regular consumers out of that market.
Think about that. We've priced regular folk out of things that build wealth. It's going to lead to something terrible down the road. Even more terrible than what we're already living in.
Every HN thread on economics has a bunch of comments like these that are earnestly misinformed about economics. When commenting on something outside of your wheelhouse, please recall Socrates from the Apology: "I observed that even the good artisans fell into the same error as the poets; because they were good workmen they thought that they also knew all sorts of high matters, and this defect in them overshadowed their wisdom."
I'll point out just two deficiencies in these threads and leave the rest to you. If P/E ratios in the US are "too high," then people would invest their money elsewhere for better return, right? Maybe international stocks or bonds or whatever. And why don't they, if they have every incentive to seek a better return? Because there are no better returns, even in countries with higher interest rates. So how could P/E ratios be too high? The more likely explanation is that this is the "new normal" - savings outpaces investment opportunities for many reasons (aging populations, growth in countries with stronger saving cultures, etc.), which pushes up the premium on assets.
Second, on the subject of interest rates and QE, a little international perspective would make you reconsider the effect on the overall economy. All other developed economies have lower interest rates, more QE, and slower growth than the US. Look at Europe, look at Japan. The issues of "why are asset prices rising" and "why is inflation low" are much larger than just US policy. We are talking global trade and demographic factors that influence these things. The current stance of fiscal and monetary policy is the symptom, not the cause. And in fact the US has been significantly more successful than our counterparts on that topic, as a fast and strong response in 2008 pre-empted the kind of drawn out economic malaise seen in Europe, where the ECB waited years before easing policy to the degree that we had. Now Europe has lower rates, more QE, lower inflation, a worse labor market, and less growth than the US. And that's before factoring in the coronavirus crisis. Policy may have increased inequality in some ways, but if you're going to make that claim you have to also answer the corresponding counter-factual: potentially the poor would have been even worse off (relative to the rich) if there were no policy interventions and the labor market collapsed. There have been some papers on the topic, and it is not at all obvious that inequality is worse now than it would have been if there was less policy intervention.
I never said that P/E ratios in the US are "too high". I said that they've been higher in recent years than in past history.
This is, as you said, because the amount of money chasing investment opportunities is increasing. I agree that the reasons for this are complex, but one significant reason that the amount of money chasing investment opportunities is increasing is central bank stimulus (not just in the US, but worldwide).
The rise of asset prices definitely is a much more complex issue than just US policy, but I'd still argue that stimulus by the US is one of the causes, not only a symptom.
I agree with you that the US has been more successful in managing the 2008 crisis than the ECB, and that part of that was because we recognized that we needed stimulus earlier on in the crisis. But this doesn't contradict anything else that I said.
I also never made the claim that the counterfactual of no stimulus would have been better - I personally believe it would have been worse, since the economy and labor market would likely have gone through a longer and more serious collapse. But again, this does not contradict what I said about central bank stimulus being one of the significant causes in the rise of prices in financial assets.
I appreciate your response and think you're mostly on the right track. I didn't intend to call you out, more to point out a few mistakes in thinking that are common in these kinds of threads on HN (and indeed are more egregious elsewhere in the responses to this OP).
> I also never made the claim that the counterfactual of no stimulus would have been better
To be fair, that wasn't the only option. There was the debate in 2007/08 about how much of the stimulus should be monetary vs handouts directly to taxpayers vs infrastructure. There's some who believe that the "infrastructure" portion was too small of the pie, hence the stomach for things like "Infrastructure Week" from Trump or "Green New Deal" from the left wing of the Democrats.
In the abstract, I liked the idea of spending it on infrastructure, but over the ensuing decade I’ve become deeply skeptical of the ability of the US government to effectively spend more money. California high speed rail and the NY subway are exhibits A and B.
I agree with you (to your point) about California high speed rail: a solution looking for a problem. But the NY subway is arguably something that could generate huge returns if the capex was committed to modernizing and automating the public transport of the biggest city in the USA and a major financial capital.
CAHSR is mostly proof of a few things that America gets wrong
- heavy reliance on consultants is not a financially prudent exercise compared to building up a competent civil service, particularly since consultants want to keep the gravy train running
- sustained funding for projects is the way to build up a civil service that can push out projects; conversely, "get a ballot measure passed first and ask questions later" is a very bad model.
Economics is a weird discipline where everyone becomes an expert on it at age 15. I've even seen people confidently expound on economics when it's clear they don't even understand the difference between revenue and profit.
In contrast, nobody is willing to argue with a physicist unless they are at least as educated in physics as their counterpart is.
It's really a shame that although we live in a market economy, there is no attempt whatsoever in K-12 to explain how markets, business, and accounting work. A high school graduate is unlikely to even grasp what compound interest is.
There are a number of subjects like this. I feel it has something to do with not making predictions about the future -- leaves no clear indication of good/bad decisions that are tied to skill.
The really interesting cases are things like fringe/doomsday religions, which do make predictions about the future. When these predictions inevitably fail to come to pass, the believers double down.
Because economics isn't a real science. As soon as someone starts talking about it like it is a science my bullshit alarm rings and I politely extract myself from the conversation.
This is basically my point. Didn't particularly mean to criticize davidxc, more the tendency for people on HN to assume that because they're good at programming they must also know a lot about economics, the stock market, etc.
To be fair, there might not be anyone who can opine on economics and be correct. It's all either survivor/hindsight bias, or making indistinct, unverifiable predictions.
Economics happens to have high stakes, so people flock to it. Nobody really knows what's going on.
We do know a few things, such as that attempts to repeal the law of Supply & Demand fail again and again. We also know there is no Free Lunch, as every effort to implement one has failed.
It's like I am no physicist, but I know that anyone who claims he's invented a Perpetual Motion machine is either a fraud or made a mistake.
"a bunch of comments like these that are earnestly misinformed about economics. "
" savings outpaces investment opportunities for many reasons (aging populations, growth in countries with stronger saving cultures, etc.), which pushes up the premium on assets."
The savings rate is not correlated with stock prices. [1]
"All other developed economies have lower interest rates, more QE, and slower growth than the US. "
No, they have similar per-capita growth rates. The US grows because it brings in more bodies [2]. Moving warm bodies from A to B, which implies a loss somewhere and a gain somewhere else, isn't exactly growth. (I mean - yes, they probably can be more productive in America). But this is not an economic marvel.
The OP's statements concerning inflation of financial assets are very, very reasonable economics.
The argument is not that the US savings rate pushes up the value of financial assets in the US, I'm talking about globally (i.e. the global savings glut hypothesis). Countries like China, Saudi Arabia, and Germany have significant capital account surpluses which continue to get invested in assets in the US, particularly the stock market.
This is a fair point. But the Fed has been playing funny money during all this time, backing dollars with garbage real estate, so I don't think it's fair to say this is just a regular 'new normal' as if asset prices were marked properly.
'New Funny Money Normal' - maybe, but the massive Fed balance sheet first benefits those with assets, i.e. 'the rich', not those without.
This is unlikely to be read given how late it is -- but one major flaw in your argument is the statement "people would invest their money elsewhere". It is very clear that the largest investors are not using "their money" but borrowed money. Ultra-cheap debt has enabled hedge funds and companies to leverage up and purchase far more stock than they could otherwise afford, massively driving up demand and pumping up equity prices to the high P/E ratios that you dismiss -- and low interest rates are the enabler.
Stock Buybacks By Corporations: The Largest Share of U.S. Equity Demand
you can't make such a grandiose condemnation of "earnest misinformation" and then not make perfectly defensible arguments, lest you make the exact same mistake you condemn.
p/e ratios at historical highs is a statement that they've disconnected from their fundamentals, i.e., the price of a share of a company is (often much) more than the expected present value of all future cash flow for that share.
that there are no better alternative investments just strengthens the case that those p/e ratios are irrationally high for those assets, not that the strategy of investing in the best available alternative is irrational.
I don't understand what you're trying to say. You can't easily say that asset prices are "disconnected from their fundamentals" - the price is what people are willing to pay for the future earnings of those companies. People are willing to pay a higher premium for those earnings now than they have in the past. Instead of the comparison to historical highs, try looking at developing countries, with lower P/E's and higher interest rates. Yet investors are still willing to pay a premium for US stocks. If you're going to say that the price is wrong, you need to account for that discrepancy. It seems that the argument you want to make is that people buy overvalued US companies because they think everyone else will continue to buy overvalued US companies?
i'm saying the premium you mention is the additional willingness-to-pay for those equities being the best available, not something intrinsic to the underlying business. it's on top of the value of the cash flows.
in a hypothetical market with 2 relatively correlated (similar beta) stocks, one that historically returns 10% and one that returns 2% and an expectation that those returns continue in the near future, you'd put all your money on the first stock, regardless of the price and regardless of systemic conditions.
in that scenario, you'd expect to be making your most rational choice even if you overpay severely. in the case that the market crashes, you'd lose less money than the opposite scenario. the price says nothing about the value of the underlying cash flows (the fundamentals).
Edit: is this not a fair question? Putting one's own money on the line is a reasonable test of what one's convictions are. For example, I'm optimistic about the market, and have put my money where my mouth is. Of course, that doesn't mean I'm right, but I have a level of confidence that I am.
that's mostly a gamble about timing, not the validity of the p/e ratio--that you can both get into and get out of a short position with sufficiently precise timing.
I think your question is riding the edge of what might be considered too confrontational on HN.
the overall sentiment is reasonable, of course. you shouldn't listen to people who say the sky is about to fall while they continue buying $SPY every other friday.
yes, basic consumer goods didn’t see much inflation. But what about housing, education, even stuff like cars and travel? Of course, other factors are at play too, but abundant “cheap” money certainly increases prices of those.
It's American consumerism fueled by available credit.
What's really fascinating is that what you said applies to McMansions, and ... private aircraft too.
Cessna cancelled their basic $300,000 new 172 because people were only ordering the $400,000 glass panel model. The price is so high they took it off the website for the first time. (And older pilots were expecting it come in at $80,000. lol.)
Cubcrafters makes composite Cubs(!) that are priced starting in the $190,000 range for LSA and $317,000 for Part 23, and again, mainly sell the maxed out versions.
The increasing volatility of future economic prosperity in most places, as well as concentration of burgeoning businesses into a few select cities is not factored into the CPI, and is probably not able to be factored.
The difference in probabilities of future life outcomes for living in certain prosperous neighborhoods and cities and the compound effect of children growing up in those is very material nowadays. This especially affects how much more housing and land costs in certain cities, and how much people are willing to gamble on it by leveraging more to "buy in" to those probabilities of future success.
Not only are they included, they account for almost a third of CPI. The thing is, it measures nationwide housing costs, which may not move in sync with Bay Area rents.
If we look at inflation for consumer goods (not oil, healthcare, housing etc.) there may be another, non-fiscal policy aspect coming into play shortly.
Formerly Chinese (and sometimes subsidized I suspect) dirt cheap items might become a lot more expensive. Non stick frying pans and pool swim noodles type things. I can't believe that returns exactly to how it has been.
Another way of looking at it: consumer goods (as measured by CPI) have radically dropped in price relative to non-CPI goods over the past few years (perhaps driven by automation and optimization of global supply chains) while, at the same time, the general price level has risen as we've exploded the money supply.
I believe my real analysis class made me a significantly better programmer and thinker in general, but I don't use anything specific from it (not yet, anyway).
Genuinely curious how real analysis made you a better programmer...? I've taken it and loved it, but my interest was more due to physics/engineering applications. I can't see a connection to programming.
I suspect it could be that real analysis truly makes you challenge your assumptions and forces you to build a habit of thinking through corner cases.
Other maths courses do this as well, but real analysis is particularly vivid for some people because of all the fun counter-examples you get to see in a well-taught course, such as a function that is everywhere continuous but nowhere differentiable.
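If you want to see it, here's a quick partial-sum sketch of that counter-example (the Weierstrass function; the parameters a = 0.5, b = 7 and 12 terms below are just one common illustrative choice, not something from the course):

  # Partial sum of the Weierstrass function, the classic everywhere-continuous,
  # nowhere-differentiable counter-example mentioned above.
  weierstrass <- function(x, a = 0.5, b = 7, terms = 12) {
    rowSums(sapply(0:(terms - 1), function(n) a^n * cos(b^n * pi * x)))
  }
  x <- seq(-1, 1, length.out = 4000)
  plot(x, weierstrass(x), type = "l",
       main = "Partial sum of the Weierstrass function")

Zooming in on any interval just reveals more wiggles, which is a nice concrete reminder that intuition built on smooth functions doesn't survive the corner cases.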
This habit of thinking through corner cases is something I miss from a lot of (junior) programmers.
Look up tax incidence and elasticity of supply / demand curves. Yes, it could definitely be reasonable to assume, from a modeling perspective, that companies currently charge the optimal price for maximizing profit. However, once you impose extra transaction costs on companies (like a tax), the equilibrium price will then change (previous equilibrium is no longer the equilibrium because external state has changed).
By your argument, taxing a transaction in a market would not ever raise the price charged to the demand side. It's pretty easy to see by reading a chapter about tax incidence (which is different from where the tax is legally placed) and elasticities of supply/demand curves that this is definitely false (not just in theory, but also in practice)
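A toy numeric sketch of that incidence point (all numbers below are made up for illustration: linear demand Qd = 100 - 2P, linear supply Qs = 3P, and a $5 per-unit tax legally placed on sellers):

  demand <- function(p) 100 - 2 * p                # quantity demanded at price p
  supply <- function(p, tax = 0) 3 * (p - tax)     # sellers respond to the price they keep

  # market-clearing price: solve demand(p) == supply(p, tax)
  eq_price <- function(tax = 0) {
    uniroot(function(p) demand(p) - supply(p, tax), c(0, 100))$root
  }

  eq_price(0)       # 20: equilibrium price before the tax
  eq_price(5)       # 23: buyers now pay $3 more per unit...
  eq_price(5) - 5   # 18: ...sellers keep $2 less, so the $5 tax is split between the two sides

Even though the tax is legally placed on sellers, the buyer-facing price rises; how the $5 splits depends on the relative slopes (elasticities), which is exactly the tax-incidence point above.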
As someone else said below, treating STEM as one category is absurd and lumps together way too many different majors and careers (that have drastically varying levels of attractiveness and compensation growth).
Majors like biology and chemistry have fairly terrible prospects with just a BS degree, but CS and the engineering majors are still quite good. Physics and math are more iffy, but if you know what you're doing and pick up some employable skills on the side, then those majors will at least get you into interviews for good jobs.
There's also the question of what "not enough jobs" means. There are definitely struggling CS majors, but I think that a statement about there not being enough jobs needs to be looked at in a relative way - that is, one needs to consider what the alternative options are and whether those alternatives have better prospects. Many careers have been on the decline, and it's difficult to really identify career paths that are significantly better than computer science / software engineering (at least, at the undergraduate level). Even if we compared careers that required graduate school, the only paths that one could plausibly argue are significantly better than tech are medicine, law, and business (in my opinion). Those three careers all come with their own serious tradeoffs and downsides.
If anyone has information on what career paths are significantly better than CS / engineering, I'd be interested to hear your opinion. Right now, I'm unfortunately not seeing significantly better alternatives.
I think the hype is really, really dangerous. I work as an external examiner for CS students at the academy and bachelor level. 10 years ago, maybe 20 students would finish from a single school; in 2019 that number is in the several hundreds, in some places thousands. If I look at my average grading over the years, there is a clear trend too. People are either really good or really bad, where 10 years ago it was far more spread out, and a lot more people were “average”. It’s anecdotal, but I think it’s because hype has pushed too many people into CS.
There will always be a need for excellent CS students, preferably with a candidate or master's degree, just like there will always be a need for excellent biologists or great Eskimologists. I don’t think there will always be a need for below-average CS students, especially not at the rate at which we’re producing them right now, again thanks to the hype.
One of the reasons I say this is because of automation. If you look at operations, the cloud has really killed a lot of jobs in enterprise IT departments, because it’s so much easier to operate your stuff in AWS or Azure than when you had to have your own infrastructure. Sure there are still operations people around, but notice how they are the best operations guys, not the average ones, because the people who were average 10 years ago are unemployed today.
It’ll be the same for development. We already see bits of it, at least if you’ve been around for a while. 19 years ago we built our first web-based enterprise tool to handle employee vacation, sick leave and tax refunds on corporate-related driving. It was a massive JSP undertaking that took 20 guys and 6 months. In 2017 it was replaced by a modern web tool built as a .NET Framework web app with Angular; it took an intern three weeks to do it.
If you look at what areas are becoming useful, it’s not really CS. Sure you’ll be able to use some CS students for ML, but you’d rather have a mathematician or a statistician who can code. Sure you can use some CS students for robotics, but you’d rather have an electrical engineer.
I’m Danish, and our job market is different, but we too hype STEM and especially CS. The truth is that what we’re really going to need is electricians, plumbers and other craftsmen, because every young person wants to learn to code.
One thing that had happened, especially in SE Asia, is the wage has been driven down outside of FAANG. Many non-tech people view development as plumbing, which it kind of is, but at least plumbers are certified which guarantees a minimum level of competency. Buyers don't view development work in terms of value delivered but only in terms of price. This destroys the middle market for good but not industry leading devs.
This is not limited to SE Asia. Outsourcing, rather over-simplified and limited definitions of cost, buyer's market for companies (and no, the bigger players don't care for real talent at large) drive this in other environments too.
I am from Germany and in this market for 20 years as a freelancer/contractor/consultant. Personally - having a major in mathematics and relevant project experience - I have no significant problems. But 'dev-only' people certainly face the mass market effects.
Also from Germany. CS degree = 75% mathematics + 5% coding + 20% other theory. Fresh out of university I could hardly write code at all and had zero experience with databases.
What I see happening is that even though productivity has increased by e.g. an order of magnitude, projects still remain big since there's a latent demand for more features.
There might be latent sales department demand for more features. I don't think there's latent market demand for more features, at least not to the extent consumers would choose to pay for new features.
How many Facebook users were willing to pay to have to download a separate Messenger app, or for Oculus integration? How many Apple users would pay extra for the Touch Bar if it were a standalone feature?
My guess is, not many. Probably not enough to justify adding the feature.
>2019 that number is in the several hundreds, in some places thousands.
wow - how are the unis getting the space / resources to do that? I have some knowledge of the investment required in labs and lecture theatres, and I know of several CS programs in the UK capped by the constraints that these impose. Basically if you can't get the class into a "standard" lecture theatre at your institution you are capped. Attrition is ~5 to 15% (sometimes higher - but then the teaching quality stuff kicks in) per year, so by the time you get to a graduation class it's rare that >100 graduate - more like 60.
Good question - I think that hands-on hardware should be at least one bit of a proper CS degree; for things like massive parallelism and FPGAs perhaps you could use cloud resources - does this factor into the cost?
I’m not sure that you’d rather have a mathematician or statistician who can code in all cases for ML. At least at the universities I’ve gone to, even the main research in these areas is going on in CS/ECE departments, with some in the stats department. For implementing non-novel ML stuff, it seems like most of the difficulty would be in data movement etc., since you can use the big frameworks for most easy things. Even testing new stuff at small scales might involve a lot of e.g. fiddling in MATLAB and testing on examples rather than only proving theorems.
Of course the “CS students” I work on these topics with are more math heavy CS graduates (some sort of have a complex about not being in the math department) rather than someone who is extremely good at coding and only took the calculus sequence necessary to graduate with his CS degree.
Careers that have higher status and entry requirements have the advantage that they don't cannibalize their middle-age job prospects like CS does.
CS is great for many years, until you discover, a year too late, that you have only a limited number of things to show for it, along with largely useless random domain knowledge, and that your peers who didn't major in CS are gaining more leverage relative to you, due to the compounding nature of advancement in those other professions compared to constant skill (re)acquisition in tech.
I largely agree with what you say for generic CS skills. A way to accumulate leverage might be focusing on hard and deep skills like distributed computing and machine learning for complex real-world problems. Although details change significantly over time, the fundamentals evolve much more slowly and take years to truly understand and be able to apply them effectively.
Alternatively, moving into technical management with deep domain knowledge might be suitable for some.
Machine learning effectively didn't exist a decade ago. A decade from now, it's likely to be where building HTML pages was in the nineties, or database-backed web applications in the 00's.
I understand it's a deeper skill set, but depth has little to do with supply-and-demand. Physics, biology, and similar have a lot of depth, and the fundamentals evolve much more slowly and take years to truly understand and be able to apply them effectively as well, but employment prospects are grim.
If blockchain had turned out to be the Next Big Thing, the important fundamentals would have been in cryptography. And so on. Someone young, without family, mortgage, etc. obligations, will be able to get into the current hot field much more quickly than a 40-year-old or 50-year-old with three kids in school and possibly starting medical problems.
It doesn't help that there is massive age discrimination.
If you want to be employed older, the trick seems to be to move into a cross-disciplinary role, such as management (people+tech). A lot of other cross-disciplinary roles will do, though (medicine+tech, bio+tech, chemistry+tech, EE+tech, etc.). Pure tech doesn't seem to cut it after thirty for career growth, after forty for job stability, or after fifty for having a job at all.
Machine learning has been applicable to businesses since the late 90's and early 00's (e.g. recommendation engines, basic speech recognition for people with special needs) and existed as a research field decades earlier.
The star researchers who command top compensation in the field today often have over a decade of experience.
I agree that a much larger pool of graduates might dampen average compensation in the future somewhat. A key distinguishing feature of these more complex skills (relative to HTML, etc) is that a much smaller percentage of people are capable of mastering them, and it takes a longer commitment as well.
Relative to pure physics and biology, the applications are much broader and thus better prospects for the experts.
Do senior petroleum engineers face the career issues you suggest?
I’m 40, all my significant career growth was after 30 with most of it after 35, I’m more employable now than I was 3 years ago and am paid well. I was really worried about a Logan’s Run-esque slaughter at 40 and I’m not seeing it, with the usual caveats on luck and anecdotes. I do believe there is intense ageism in our industry but I think it may be endemic to certain areas or companies. Don’t fear the reaper my friends, you can be 40 with a family and still work in high profile tech, probably.
> Careers that have higher status and entry requirements have the advantage that they don't cannibalize their middle-age job prospects like CS does.
Do you have any studies or evidence to back that assertion up? If the number of people working in a field doubles every five years, then at 20 years, even with perfect retention, only 7.5% of the population of workers will have 20 years of experience, and half will have less than five.
High status doesn’t really help that much if the economics and funding environment are awful. Professional actors are not looked down on by many, and even fewer look down on biology or chemistry professors, but trying to get into those careers is a poor choice unless your parents can support you if it doesn’t work out. Vicious competition for a small number of coveted spots leads to many people spending years of their lives only to drop out of a tournament they lost, with precious little to show for it.
If you're smart about what you learn and what jobs you take, you can build a skill set that will keep you in demand forever. If you just spend 20 years building boring crud apps using the hot stack du jour, of course you're going to have problems.
Always try to make your next project more ambitious than your last. Look for opportunities to incorporate challenging features into boring products, and ask your employer to let you work on progressively harder things.
It's from 2013, but Peter Turchin talks about law degrees in the U.S., where job prospects had become bimodal. Note that this would not show up in unemployment stats necessarily, but the salary prospects of the lower mode clearly did not justify the time and expense of the law degree: http://peterturchin.com/cliodynamica/bimodal-lawyers-how-ext...
As a law student, I definitely saw this bimodal outlook. I went to a second-tier law school, but in the New York City area. At the time, twenty years ago, the "average" starting salary for my school was posted at $65k. But what you later found out is that the top students were getting offered $140k (Top NY firms), while everyone else was getting $30k (working for judges and small NJ firms). So no one was actually getting an offer for the average salary. Some of the top NYC firms would interview at my school, but would only interview the top 2% of students. Also, the reason the "spike" at the high end is so sharp and pronounced for law is that top tier law firms mostly all offer the same salary. If one of the group raises the starting salary, all the other top tier firms instantly match that new salary.
Consider the defense industry / government contractors. I'm not sure what kind of salary you're looking for, but the defense industry / gov contractors pay fairly well (I'd guess ~150k ish for someone with your background) and checks most of your requirements.
Here's a simpler thought experiment that gets across why p(null | significant effect) /= p(significant effect | null), and why p-values are flawed as stated in the post.
Imagine a society where scientists are really, really bad at hypothesis generation. In fact, they're so bad that they only test null hypotheses that are true. So in this hypothetical society, the null hypothesis in any scientific experiment ever done is true. But statistically using a p value of 0.05, we'll still reject the null in 5% of experiments. And those experiments will then end up being published in scientific literature. But then this society's scientific literature now only contains false results - literally all published scientific results are false.
Of course, in real life, we hope that our scientists have better intuition for what is in fact true - that is, we hope that the "prior" probability in Bayes' theorem, p(null), is not 1.
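For concreteness, here's the Bayes'-theorem arithmetic as a tiny sketch in R; the 0.9 prior and 0.8 power below are made-up illustrative values, not numbers from the post:

  p_null <- 0.9     # assumed prior probability that the null is true
  alpha  <- 0.05    # p(significant effect | null)
  power  <- 0.80    # assumed p(significant effect | null is false)

  p_significant <- alpha * p_null + power * (1 - p_null)   # total probability of a "significant" result
  alpha * p_null / p_significant                           # p(null | significant effect) ~= 0.36

With p_null set to 1 (the hypothetical society above) that ratio is exactly 1: every published "significant" result is a false positive. As p(null) falls, so does the share of published results that are false.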
> But statistically using a p value of 0.05, we'll still reject the null in 5% of experiments. And those experiments will then end up being published in scientific literature. But then this society's scientific literature now only contains false results - literally all published scientific results are false.
The problem with this picture is that it's showing publication as the end of the scientific story, and the acceptance of the finding as fact.
Publication should be the start of the story of a scientific finding. Then additional published experiments replicating the initial publication should comprise the next several chapters. A result shouldn't be accepted as anything other than partial evidence until it has been replicated multiple times by multiple different (and often competing) groups.
We need to start assigning WAY more importance, and way more credit, to replication. Instead of "publish or perish" we need "(publish | reproduce | disprove) or perish".
Edit: Maybe journals could issue "credits" for publishing replications of existing experiments, and require a researcher to "spend" a certain number of credits to publish an original paper?
That's a good idea: encourage researchers to focus on a mix of replication and new research. When writing grants, a part of that grant might go toward replicating interesting/unexpected results and the rest toward new research. Moreover, given that the experiment has already been designed, replication could end up demanding much less effort from a PI and allow his students to gain some deliberate practice in experiment administration and publication. On the other hand, scholarly publication might have to be changed in order to allow for summary reporting of replication results to stave off a lot of repetition.
My field has less of a "You publish first or you're not interesting" culture than many others, and part of that is recognizing that estimating an effect in a different population, with different underlying variables, is, itself, an interesting result all its own.
Tim Lash, the editor of Epidemiology, has some particularly cogent thoughts about replication, including some criticisms of what is rapidly becoming a "one size fits all" approach.
Suppose all experiments use a significance threshold of 0.05 and have 95% power. Suppose scientists generate 400 true hypotheses and 400 false hypotheses. One experiment on each hypothesis validates 380 true hypotheses and 20 false ones, for a cost of 800 experiments. If we do one layer of replication on each validated hypothesis, then, among the validated hypotheses, the 380 true will become 361 doubly-validated true hypotheses and 19 once-validated-once-falsified (let's abbreviate "1:1") true hypotheses; the 20 false will become one 2:0 false hypothesis and 19 1:1 hypotheses; all this increases the cost by 50%. Then it seems clear that doing a third test on the 38 1:1 hypotheses would be decently justified, and those will become 18.05 2:1 true hypotheses, 0.95 1:2 true hypotheses, 0.95 2:1 false hypotheses, and 18.05 1:2 false hypotheses. If we then accept the 2:0 and 2:1 hypotheses, we get 379.05 true and 1.95 false hypotheses at the cost of 1238 experiments, vs the original of 380 true and 20 false at the cost of 800 experiments; the cost increase is about 55%.
On the other hand, suppose scientists generate 400 true and 4000 false hypotheses. The first experiments yield 380 1:0 true and 200 1:0 false hypotheses, at the cost of 4400 experiments. The validation round yields 361 2:0 true, 19 1:1 true, 10 2:0 false, and 190 1:1 false, costing 580 extra experiments; re-running the 1:1s, we get 18.05 2:1 true, 0.95 1:2 true, 9.5 2:1 false, and 180.5 1:2 false, costing 209 extra experiments. Taking the 2:0 and 2:1s, we get 379.05 true and 19.5 false hypotheses for 5189 experiments, instead of 380 true and 200 false hypotheses costing 4400 experiments; the cost increase is 18%.
So it's clear that, in a field where lots of false hypotheses are floating around, the cost of extra validation is proportionately not very much, and also you kill more false hypotheses (on average) with every experiment.
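That bookkeeping is easy to mechanize. A rough sketch, assuming (as the worked numbers above imply) a 5% false-positive rate and 95% power for every experiment:

  # Expected outcomes of a "test, replicate once, break 1:1 ties with a third test,
  # accept 2:0 and 2:1" policy.
  replication_policy <- function(n_true, n_false, alpha = 0.05, power = 0.95) {
    true_pass1  <- n_true  * power       # true hypotheses surviving round 1
    false_pass1 <- n_false * alpha       # false positives surviving round 1
    cost <- n_true + n_false

    true_20  <- true_pass1  * power;  true_11  <- true_pass1  * (1 - power)
    false_20 <- false_pass1 * alpha;  false_11 <- false_pass1 * (1 - alpha)
    cost <- cost + true_pass1 + false_pass1

    true_21  <- true_11  * power         # tie-breaking third experiments
    false_21 <- false_11 * alpha
    cost <- cost + true_11 + false_11

    c(accepted_true  = true_20 + true_21,
      accepted_false = false_20 + false_21,
      cost           = cost)
  }

  replication_policy(400, 400)    # ~379 true, ~2 false accepted, 1238 experiments
  replication_policy(400, 4000)   # ~379 true, ~19.5 false accepted, 5189 experiments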
What is the "cost" of believing false hypotheses? It depends on what one does with one's belief. Hmm.
It would be nice if someone made a stab at estimating the overall costs and benefits and making a knock-down argument for more validation.
"Maybe journals could issue "credits" for publishing replications of existing experiments, and require a researcher to "spend" a certain number of credits to publish an original paper?"
This would cripple small labs, unless people's startup packages come with potentially millions of dollars in funding to get their first few "credits".
It depends on the field and the policy would best be followed in an area like experimental psychology, where replication is not extremely costly (and where it might be an especially large program).
If a null hypothesis is invariably true, it's impossible to reject it. Which means the scientists will not be able to find any statistic or data to support any of their bad, original hypotheses. Not 5%, not 0.005%, nor whatever.
p-values are not flawed. They are a useful tool for a certain category of jobs: namely to check how likely your sample is, given a certain hypothesis.
The argument in the original post is a bit of a straw man fallacy.
"I want to know the probability that the null is true given that an observed effect is significant. We can call this probability "p(null | significant effect)"
OK, hypothesis testing can't answer this type of question.
Then "However, what NHST actually tells me is the probability that I will get a significant effect if the null is true. We can call this probability "p(significant effect | null)"."
Not quite correct. It's "p(still NOT a significant effect whatever it means | null)".
> If a null hypothesis is invariably true, it's impossible to reject it. Which means the scientists will not be able to find any statistic or data to support any of their bad, original hypotheses. Not 5%, not 0.005%, nor whatever.
Why argue when you can simulate:
> n <- 50                     # sample size per simulated experiment
> simulations <- 10000        # number of simulated experiments, all with a true null
> sd <- 1
> se <- sd/sqrt(n)            # standard error of the sample mean
> crit <- 1.96 * se           # two-sided 5% rejection threshold
> mean(abs(colMeans(sapply(rep(n, simulations), rnorm))) > crit)   # fraction of nulls rejected
[1] 0.0494
Lo and behold, we reject the null hypothesis that the mean of a normal distribution is equal to zero in 5% of all simulations, even though the null hypothesis is in fact true. (`rnorm` defaults to 0 mean and 1 sd)
It's always refreshing to meet a fellow R hacker on HN!
May I ask you why you chose to use the normal distribution in your example or any distribution at all, for that matter? What I was replying to was
">they only test null hypothesis that are true."
Which means that the null hypothesis is always true no matter what data you collect trying to reject it. It does not depend on the null distribution (normal in your example), the value of the test statistic (the mean of the sample in your example), or the threshold (crit in your example). In fact, the null distribution in this case is not a distribution at all since there's no randomness in the null hypothesis. We know for a fact that it is always true (in the hypothetical situation we are considering).
It's more like
> rep(FALSE, simulations) # is the null hypothesis false? nope
or, if you insist on using the normal distribution,
In fact, in your example, since you are essentially running 10,000 hypothesis tests on different samples, multiple hypothesis correction would solve the "problem" with p-values. This is how I would do it.
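A hypothetical sketch of what such a correction could look like, reusing the setup from the simulation above (two-sided z-test p-values with sd known to be 1, then a Bonferroni adjustment via p.adjust):

  n <- 50; simulations <- 10000
  sample_means <- colMeans(sapply(rep(n, simulations), rnorm))
  pvals <- 2 * pnorm(-abs(sample_means) * sqrt(n))      # two-sided z-test p-values

  mean(pvals < 0.05)                                    # ~0.05 rejected without correction
  mean(p.adjust(pvals, method = "bonferroni") < 0.05)   # ~0 rejected after correcting for 10,000 tests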
> May I ask you why you chose to use the normal distribution in your example or any distribution at all, for that matter?
The distribution is not important, any other data generator would do.
> Which means that the null hypothesis is always true no matter what data you collect trying to reject it.
The idea behind the thought experiment was that we live in a world in which researchers always investigate things that will turn out not to exist / be real, but the researchers themselves don't know this!, otherwise they wouldn't bother to run the investigations in the first place.
> In fact, in your example, since you are essentially running 10,000 hypothesis tests on different samples, multiple hypothesis correction would solve the "problem" with p-values.
They're not multiple tests. They're multiple simulations of the same test, to show how the test performs in the long run.
Perhaps you're a wonderful statistician, I wouldn't know, but nothing you have said thus far about null hypothesis significance testing makes any sense or is even remotely correct.
> If a null hypothesis is invariably true, it's impossible to reject it. Which means the scientists will not be able to find any statistic or data to support any of their bad, original hypotheses. Not 5%, not 0.005%, nor whatever.
You've never heard of random error? Just because a null hypothesis may accurately describe a data generating phenomenon doesn't mean you will never get samples that are skewed enough to show a significant effect.
Pretend we are comparing neighborhoods. Say the true average age of the people in my neighborhood and your neighborhood is actually equal, at 40, but my alternative hypothesis is that the average age of residents in my neighborhood is lower than yours (thus the null is that they are the same, which unbeknownst to me is the truth). You are claiming that no matter how many random samples of residents of our two neighborhoods we take, they will always be close enough in average age that we will always fail to reject the null. That's obviously not the case.
In fact, by definition, a significance threshold of 0.05 means we should expect 5% of the samples we draw to indicate my neighborhood is significantly younger than yours, even though that isn't true, solely due to the randomness of our samples. That's literally what the p-value threshold controls.
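A quick simulation sketch of that neighborhood example (the sample size of 30, sd of 10, and common mean of 40 are made up for illustration):

  set.seed(42)
  reject <- replicate(10000, {
    mine  <- rnorm(30, mean = 40, sd = 10)   # 30 sampled residents from my neighborhood
    yours <- rnorm(30, mean = 40, sd = 10)   # 30 from yours; both true means are 40
    t.test(mine, yours, alternative = "less")$p.value < 0.05
  })
  mean(reject)   # ~0.05: the null is true, yet ~5% of samples still "show" my neighborhood is younger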
It seems like your reasoning and the reasoning of the author could be applied to any statistic testing the reliability of a hypothesis, not simply p values. Further, you could mitigate this problem if you knew the prior probability, sure. But how do you expect a bad hypothesis generator to be good at knowing the prior probability? The usual standard is "extraordinary claims require extraordinary evidence." The less likely a hypothesis, the stronger the evidence required, measured as p-values or otherwise.
But the thing is, the public and the scientific community have to be the ones who judge the extraordinariness of a claim. If an experimenter were to wrap their results in their own belief in the likelihood of the hypothesis, the observer wouldn't be able to judge anything. So it seems like experimenters reporting p-values is as good a process as any. It's just that the readers of results need to be critical and not assume .05 is a "gold standard" in all cases.
> It seems like your reasoning and the reasoning of the author could be applied to any statistic testing the reliability of a hypothesis, not simply p values.
Precisely. That's the point. Hypothesis testing is inherently absurd.
What's impossible is thinking that just the output of a single experiment gives hypothesis certainty, or a fixed probability of a hypothesis or anything fully quantified.
You're always going to have the context of reality. Not only will you have the null hypothesis, you'll have competing hypotheses to explain the given data.
But the point of science isn't blindly constructing experiments, but instead forming something you think might be true and doing enough careful experiments to convince yourself and others, in the context of our overall understanding of the world, that the hypothesis is true. Common sense, Occam's Razor, the traditions of a given field and so forth go into this.
Besides, hypothesis testing was born in the context of industrial quality control, where the true data-generating process is very close to being well-known and deviation from the norm raises a red flag rather than suggesting new knowledge about how breweries work.
While intended as light humor, this actually seems like a really damning argument to me. It's conceptually similar to overfitting a machine learning model by aggressively tuning hyperparameters without proper cross-validation, etc. What serious defenses are there after this sort of attack?
Serious defense: p-values have been in use for a long time, and while they are error prone a larger number of true results has been found than false results (according to recent research the ratio is 2:1 reproducible to non-reproducible).
In the cartoon, the scientists are making multiple comparisons, which is something strictly forbidden in frequentist hypothesis testing. One way to get around it is to apply a correction by dividing the significance threshold ("alpha") by the number of comparisons being made, in this case 20. The cartoon does not state its actual p-value as most journals will require, but the hope would be that by dividing by the corrective factor the significance of that particular comparison goes away.
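In numbers, assuming the cartoon's 20 comparisons are independent:

  alpha <- 0.05
  m <- 20
  alpha / m           # 0.0025: Bonferroni-corrected per-comparison threshold
  1 - (1 - alpha)^m   # ~0.64: chance of at least one spurious "significant" result without correction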
So p-value methods still lead to a lot of Type I and Type II errors, but in the past they have been the best science has been able to come up with. Actually, probably the greatest issue with false results in the scientific literature is that null results are not publishable. This leads to a case where 20 scientists might independently perform the same experiment where the null is true, for only one to find a significant result. The demand for positive results only acts as a filter where only Type I errors get made! This is just one problem with the publishing culture, and doesn't take into account researchers' bias to manipulate the data or experiment until p < .05.
An alternate approach to the frequentist methodology of using p-values is the Bayesian method, which has its own problems. First there are practical concerns such as choosing initial parameters that can affect your results despite sometimes being arbitrarily chosen, and also the high computational demand to calculate results (less of an issue in the 21st century, which is why the method is seeing a revival in the scientific community). Probably their main problem right now is that practitioners simply aren't familiar with how to employ Bayesian methods, so there's some cultural inertia preventing their immediate adoption.
> while they are error prone a larger number of true results has been found than false results (according to recent research the ratio is 2:1 reproducible to non-reproducible)
It seems odd to talk about "results" as an average across all fields, rather than for a specific field. It's much more common for people to claim that psychology rather than physics has a reproducibility crisis, and thus I don't think it makes sense to talk about the combined reproducibility across both fields. What research are you referencing, and what fields did they look at? Given the differences across fields, if the average is 2:1 reproducible, I'd guess that some fields must be lower than 1:1.
You're right, it definitely depends on the field. The paper I am referencing looked at psychology, I believe. It is likely that a social science would have greater issues with reproducibility than a physical science.
Oh, it's definitely damning. The real joke in the XKCD comic is that, if we assume each panel is a different study, the only study that would be published in a journal is the one where p < 0.05.
Originally it was intended that peer review in published journals and study reproduction would verify findings. In a small community where all results are treated equally, this works fine. In a world without data systems to organize data and documents, this was really the only reasonable method, too.
However, we don't live in that world anymore. The community isn't small, and information science and data processing are much advanced. Unfortunately, since careers are built on novel research, reproduction is discouraged. Since studies where the null hypothesis is not rejected are typically not published at all, it can be difficult to even know what research has been done. There are also a large enough number of journals that researchers can venue shop to some extent, as well.
Many researchers are abandoning classic statistical models entirely in favor of Bayes factors [https://en.wikipedia.org/wiki/Bayes_factor]. Others are calling for publishing more studies where the null hypothesis is not rejected (some journals specializing in this like [http://www.jasnh.com/] have been started). Others are calling for all data to be made available for all studies to everyone (open science data movement). Others are trying to find ways to make reproduction of studies increasingly important.
As you point out, there is already a major issue when dealing with honest scientists who have to work in a publish or perish model where funding is based on getting results. But if we were to tweak the parameters so that there are at least some biased scientists and that the funding sources are biased for certain results (other than just any result where p < 0.05), and we take into account a subset of society looking for 'scientific support' of their personal convictions, the issue becomes much worse.
Look at how much damage was done by science misleading people about nutrition in regards to carbs and fats. How often, especially from the social sciences, does some scientific finding get reported by popular media as some major finding which should have drastic effects on social/legal policy, only for the root experiment to be a single study with a p < 0.05 where the authors caution against drawing any conclusions other than 'more research is needed'? Violence and media is a good example, and even more so when we consider the more prurient variants thereof.
I think this is the basis of why I am more willing to trust new research in physics than in sociology.
Effect estimation, rather than relying on p-values, is one approach that provides far more context than just "Is or is not significant".
It also helps to train your scientists - especially those outside the physical sciences - that effects likely aren't fixed in a meaningful sense (i.e. the effect of smoking on lung cancer isn't a universal constant in the way the speed of light is), at which point multiple estimates of the same effect from different groups and populations have value.
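As a hedged sketch of what "report the estimate, then combine estimates across populations" can look like in practice, here's a tiny inverse-variance pooling example (every effect size and standard error below is invented):

    # Fixed-effect (inverse-variance) pooling of effect estimates from several
    # hypothetical studies; every number is made up.
    import numpy as np

    effects = np.array([0.42, 0.25, 0.61, 0.33])   # per-study effect estimates
    ses     = np.array([0.15, 0.10, 0.25, 0.12])   # per-study standard errors

    weights   = 1.0 / ses ** 2
    pooled    = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))

    print(f"pooled effect = {pooled:.2f} "
          f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")

A random-effects model, which adds a between-study variance term, would suit the "effects aren't fixed" point even better; the fixed-effect version is just the shortest thing to show.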
This example implies only statistically significant results get published. But 'proving' a null may also have value depending on how non-trivial it is to the non-scientists in that society.
And if you extend the hypothetical such that everyone in that society always had true nulls, there wouldn't even be a need for science. We'd all be too used to never being wrong.
I don’t have to imagine that society since they already exist and are called social scientists.
More seriously, you do make a good point, which is that all scientists lie on a spectrum from always generating true hypotheses to always generating false ones. Scientists in different fields tend to cluster closer to one extreme or the other. My experience is that the observational sciences are shifted more toward the always-false end than the experimental sciences.
It was in fact a joke, but with some truth. You're making serious claims about a vast body of literature and methodologies without having actually understood their entirety. This is exactly what you're criticizing social scientists for doing: drawing conclusions based on observations from systems no one has fully isolated for experimentation. If you think this is methodologically unsound, that's fine, but you shouldn't then do it yourself.
I was making the pointed armchair observation that all hypotheses being tested in the social sciences are false. Of course none of the downvoters seemed to notice that my hypothesis is itself a social science hypothesis. Subtlety is lost on HN most of the time.
More seriously the social sciences do have a lot of problems, some driven by the methodologies used, some by ideology, and some by the inherent noisiness and unreliability of the data available. Not an easy area to do science in.
This is being down-voted for the shots fired, but the underlying point is almost certainly true to a degree. People aren't ideologically invested in (say) the weight of electrons in the same way that they are in IQ curves across demographic groups.
Ideology certainly plays an important role in generating false hypotheses, but all the observational sciences suffer from the problem that you can't run experiments to rigorously test the robustness of your hypotheses.
In the experimental sciences you can get far using the rule of thumb that if you need statistics you did the wrong experiment, while in the observational sciences the use of statistics is inherent.
It is not about generating true or false hypotheses. It is about likely and unlikely hypotheses.
When we start to treat the hypotheses as "true" instead of "likely", we fall into a trap of not being able to reconsider the past evidence in the light of new evidence. We hold onto the "truths" of previous hypotheses instead of taking a fresh look.
An example of this is the current model used for astrophysics, where the basic "truth" that is the consensus of the majority working in the area is that "gravity" is the only significant force operating at distances above macroscopic. I use "gravity" because there is much debate in various areas as to what this force actually is.
There is evidence that our explanations are either incomplete or wrong. Yet this fundamental "truth" is unquestioned in the majority and where it is questioned, those questioners become personae non gratae.
It happens in the climate change debate. Here the "truth" is that the causes are anthropogenic. So if you question that "truth", you are persona non grata. Yet the subject is so complex that we do not know to what extent, if much at all, human activity changes the climate over long periods of time. To question the "truth" of essentially anthropogenic causes of climate change means that detailed investigations into the actual causes do not get undertaken if they do not support the "truth" hypothesis.
In real life, scientists are people with the same range of foibles and fallibilities as everyone else. Just because one is "smart" doesn't mean one is clear-headed and logical. Just because the "scientific consensus" is for one model or another doesn't make that "scientific consensus" any more true than an alternative model that explains what we see.
We need to stop getting uptight about our favourite models and those who dispute them. We need to be able to take fresh looks at the data and see if there are alternatives that may provide a better working model. We also need to get away from considering successful models as "truth" and more as the "current successful working" models.
Having worked quite closely with cosmologists, I can tell you that you have the wrong impression. Cosmologists perform maximum likelihood parameter estimations of models. Often included in these models are parameters that control deviations from general relativity, or parameters that completely switch from GR to another form of gravity. The "fundamental truth" that there is dark matter rests on the fact that GR + visible matter alone is a terrible fit, GR + visible matter + invisible matter is an amazing fit, and all other models tried so far are also bad fits once multiple distinct experiments are compared. People continue to try to replace the invisible-matter term with terms from first principles all the time. Often, though, someone comes along, fits a model to a single dataset, and proclaims loudly that they have solved the dark matter or dark energy problem. But there are many distinct datasets which also need to be modeled, and invariably, when that is done, the new model turns out to be a worse fit than GR + visible matter + invisible matter. I've been involved in various alternate-model discussions with cosmologists, and I wasn't even a cosmologist, so it is definitely not true that testing alternatives to gravity is the third rail.
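The "great on one dataset, bad on the joint fit" failure mode is easy to illustrate with a toy sketch (entirely synthetic, nothing to do with real cosmological data or pipelines): tune a flexible model to a single dataset, then score both models on the combined chi-squared over all datasets.

    # Toy illustration, entirely synthetic: a model tuned to one dataset can
    # beat the "standard" model there and still lose on the joint fit.
    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(2)

    def chi2(predict, x, y, sigma=1.0):
        return np.sum(((y - predict(x)) / sigma) ** 2)

    # Stand-in "truth": y = 2x + 1, observed with unit Gaussian noise.
    datasets = []
    for _ in range(6):
        x = np.linspace(0, 10, 25)
        y = 2 * x + 1 + rng.normal(scale=1.0, size=x.size)
        datasets.append((x, y))

    standard = lambda x: 2 * x + 1                    # the "accepted" model

    x0, y0 = datasets[0]
    alternative = Polynomial.fit(x0, y0, deg=10)      # tuned to dataset 0 only

    for name, model in [("standard", standard), ("alternative", alternative)]:
        scores = [chi2(model, x, y) for x, y in datasets]
        print(f"{name:11s}: chi2 on dataset 0 = {scores[0]:6.1f}, "
              f"total over all datasets = {sum(scores):6.1f}")

With most random seeds the tuned model wins on the dataset it was fit to and loses on the total, which is exactly the due-diligence check described above.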
The same seems to happen in the climate change debate: there is a huge range of experiments, and anthropogenic warming is the maximum likelihood model across them. Many people select a single experiment, find a model with a better fit, and then loudly proclaim that anthropogenic warming is a conspiracy. However, their model is a terrible fit to the other experiments, which they did not do due diligence in checking.
Scientists grow tired of playing politics. If you have an alternate model, it needs to fit a vast set of observations, not a cherry picked one. If you only test against one observation and make a press release about it, you will definitely not be seen as a serious scientist.
My apologies that it has taken some time to respond to your points. The problem I have is that cosmologists incorporate entities that have not been experimentally verified or are impossible (at least at this time) to be experimentally verified. Just because the models appear to work actually means nothing when you cannot get any experimental verification of all the elements on which a theory or model depends. Proxy evidence is used to enhance the belief in some specific entities, but proxies are only proxies and the use of such can be very misleading.
To say that "the fundamental truth that there is dark matter..." is problematic from the get-go. No experiment has demonstrated that "dark matter" of any kind exists. You cannot say that there is a fundamental "truth" anywhere in science. We have observations, we develop hypotheses which suggest experiments to test those hypotheses, and with further evidence we develop theories. At no point is either hypothesis or theory "truth". Unless, of course, your intention is to make science into a religion.
When it boils down to it, science is a way of developing understanding of the physical world about us. It may lead to changes in one's philosophical or religious viewpoint, but it doesn't have to. It is not the be-all and end-all of anything. It is simply a means of hopefully increasing one's understanding. Sometimes it does and sometimes it doesn't. There are many examples of experiments whose results were considered so anathema to the consensus view that the scientists who performed them were made pariahs. This is very problematic, as politics and religion become the driving forces that maintain the orthodox view.
There has been and is a significant push for science to be the authoritative voice as to what one should believe. However, science gives no guidance on any matters relating to human interaction or action. If anything, it is a cause of significant problems for human interaction and action.
I think you make a good point overall, and I think anthropogenic global warming should be open to questioning. It should be able to prevail on its merits in the face of competing theories.
However if you are speaking in terms of what "we know", I think you have to acknowledge that the scientific consensus is that AGW is real. That doesn't prove it is true -- nothing in our world outside of math is ever truly proven. But it puts the burden of proof on doubters to not only provide a different/better theory, but also to explain why everyone else is wrong.
If your position is that everyone else is wrong, but the "actual causes" are not known yet, then you just end up looking like someone who has their thumb on the scale, and is invested in a particular outcome.
> nothing in our world outside of math is ever truly proven
Just to bite a bit: it depends on the logic you use. The logic might not be true in our Universe, just in mathematicians' heads / an idealized Universe. And even for those idealized Universes there is no real consensus about which of them are true, or merely useful.
Imagine you add Maybe as a third value between True and False. Later you might find one Maybe is not enough; you might need four different Maybes. Then it suddenly dawns on you that a countably infinite number of Maybes is the minimum. Then you throw such a logic away because it's practically useless, even if it models reality better, with the side effect of breaking established math as well. Then you wonder why simple binary logic is quite good at describing many things in the real Universe, yet you have no means to prove any relation between this logic, the math derived from it, and the reality you live in and observe.
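For concreteness, here's a tiny sketch (mine, purely illustrative) of the simplest version of this: Kleene's strong three-valued logic, where Maybe sits between True and False and classical tautologies stop being guaranteed.

    # Kleene's strong three-valued logic: True, False, and Maybe (unknown).
    # Maybe is encoded as None; truth values are ranked F < Maybe < T.
    T, M, F = True, None, False
    RANK = {F: 0, M: 1, T: 2}

    def k_and(a, b):   # conjunction = the lower-ranked truth value
        return min(a, b, key=RANK.get)

    def k_or(a, b):    # disjunction = the higher-ranked truth value
        return max(a, b, key=RANK.get)

    def k_not(a):      # negation swaps T and F, leaves Maybe alone
        return {T: F, F: T, M: M}[a]

    # A classical tautology ("a or not a") is no longer always True:
    for a in (T, M, F):
        print(a, "or not", a, "=", k_or(a, k_not(a)))   # Maybe when a is Maybe

The comment above is about what happens when three values don't suffice and you keep having to add more.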
None of the logics mentioned even touches the idea of quantifiers, which is the way most math proofs are written nowadays.
Logic with quantifiers is strictly more powerful than any multivalued logic.
The problem, as I see it, is that the "consensus" view is taken to be true. As has been pointed out elsewhere, the "97% of scientists" who believe that climate change is anthropogenic comes from a study of papers. From what I understand, the 97% is 97% of the one-third of papers on climate change that made any reference to climate change being anthropogenic. The other two-thirds made no reference to whether climate change is anthropogenic or not.
It should be irrelevant what the consensus view may be. If an alternative model or theory is proposed, then the model or theory should stand on its merits not on whether or not it agrees with the consensus view.
My view is that science is about gaining some understanding of the universe about us. If a model or theory is useful in that understanding, then good, it is useful. But if a theory or model develops big holes in it, then mayhaps we should be looking for alternatives that have lesser holes.
Take the example of the Standard Model of sub-atomic physics. Within it, there are some quite large holes that are papered over with theoretical mathematics. Yet, if one steps back and takes another look at what is being seen, there are some interesting observations to be made that raise questions about the validity of the Standard Model.
You’re confusing 97% of scientists with 97% of papers - which isn’t a very scientific thing to do.
As for the Standard Model - scientists would dearly love to find observations that challenge it, but so far there has been no consistent, high-quality evidence of physics beyond it.
In the beginning, the consensus view was that human activity was not a significant factor in climate change. The current consensus came about because of overwhelming evidence. In this matter, the causal relationship is the opposite of what you state, and you are making a false claim about how science works because you refuse to accept the evidence.
In the beginning the consensus view was that we were heading for an imminent "ice age", and then that view changed to "hockey stick global warming" and now, to cover all bets, climate change.
I have questions that I have posed to climate scientists, and if a reasonable answer comes back then anthropogenic climate change is on the cards. But in fifteen years, nary an answer to those questions has come back, so any prognostications by climate scientists based on their models are, as far as I am concerned, worthless.
As far as the evidence is concerned, it may or may not support an anthropogenic causal regime. But, on the basis of that evidence, I lean towards a non-anthropogenic majority cause for climate change.
As far as how science works, climate scientists make many assumptions about their proxies that have not been verified as being conclusively accurate. There is sufficient evidence, if you actually look around for it, to say that the interpretation of the proxy evidence is either incomplete or wrong or meaningless.
Make sure you apply to more schools than the three you listed (MIT, Penn State, Stanford). Based on the information you provided in your post, you're definitely not guaranteed to be accepted to MIT or Stanford (there's a very significant chance that you'll be rejected). MIT and Stanford both reject many applicants with profiles similar to yours (or better than yours) each year.
I'm partly speaking from personal experience. I had similar stats to you (4.8 GPA weighted, 3.9+ GPA unweighted) in high school and a 2330 SAT (780 math / 780 reading / 770 writing). I also had very high standardized test scores in other areas and numerous extracurriculars / state and national level awards.
I ended up getting rejected from both Stanford and Princeton (though I did get into Yale). Basically, the top colleges are a crapshoot even with stellar stats, and you should really apply to 7-10 colleges with some backups mixed in.
As far as where to go, I'd just give the general advice of making sure you don't take on a significant amount of debt. If you're middle class, MIT, Stanford, and the Ivies should give a lot of financial assistance. It's debatable whether prestigious colleges actually provide better education than good state colleges, but the prestige associated with a top school will help you a lot in the future (like it or not, most of the world is not anywhere close to an approximate meritocracy, which means that prestige will play a significant role in the opportunities open to you after college).