At what age did you start med school? Probably not feasible for those of us who didn't have stellar academic records (regardless of professional accomplishment).
Proud owner of a financed 2024 manual Nissan Versa here :) But yeah, the dealership made almost no money, and I put down a deposit when it was a couple of months from coming in, at a location far from where I live. It's a $20k car, though.
Hell no, any experienced engineer would rather do it themselves than attempt to corral an untrained army. Infinite monkeys can write a sequel to Shakespeare, but it's faster to write it myself than to sift through mountains of gibberish on a barely-domesticated goose chase.
> Many studies have shown the incidence of repair procedures and worse final vision outcomes were higher in groups with autoimmune conditions (SJS, OCP). The difference in outcomes appears to be related to the degree and cumulative past period of inflammation. Overall most favorable outcomes are achieved in non-cicatrizing conditions, followed by ocular burns and OCP with the worst outcomes in SJS patients.
The patient in the article was an SJS patient.
> The massage therapist says he could see just fine until he was 13 years old, when he took some ibuprofen after a school basketball game, triggering a rare auto-immune reaction known as Stevens-Johnson syndrome.
For those without means, understanding is a luxury; for many with means, it is merely aesthetic. Matching drive with opportunity could unlock humanistic discovery, but it is far more likely to be done artificially, given the way we organize our societies.
> For those without means understanding is a luxury
Saying that those without wealth are barred from genuine understanding reduces learning to a matter of money. Plenty of us with limited resources develop deep insights by way of libraries, conversations, online education, or other active seeking in the margins of what time and means permit.
That proposition also throws away personal agency by framing understanding as something that happens only if external conditions allow it, rather than admitting each person’s power to seek knowledge and push beyond circumstance.
Your second sentence feels more balanced, setting up a good question: how do we bootstrap Gene Roddenberry’s future fairly while still recognizing personal drive, differences, and merit?
I gave up a while ago and started on the path to becoming a doctor. I'm pretty happy about it, except for the "ten years until I make what I used to" part.
Active learning and problem selection are what I came to as well when going through the literature. I was thinking of teaching algorithm design by having students build their own algorithm laboratory, where they create visualizations and experiments that motivate interest in problems and the design of solutions.
I think “motivating the problem” (a phrase I hear a lot in American lectures) is often the weakest part of an algorithms and data structures course. The teaching approaches seem quite abstract, or make only passing mention of practical problems.
Personally I find algorithms a bit boring in the abstract. I’ve always wondered why DSA projects are so rarely things like “Here’s a simple database that doesn’t support indexes. We’re going to query it and it will be painful. Then you’ll extend the database with a b-tree index.”
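To make the idea concrete, here's a minimal sketch of what the first two steps of such a project could look like: a toy "table" queried by a painful full scan, then sped up with a sorted index. (All names here are made up for illustration, and the sorted-list-plus-binary-search index is a simplified stand-in for a real B-tree, which would add paged nodes and cheap inserts; the lookup idea is the same.)

```python
import bisect
import random

# Toy "table": rows with an id and a non-key column we'll query on.
random.seed(0)
rows = [{"id": i, "age": random.randint(0, 99)} for i in range(10_000)]

def query_scan(rows, age):
    """Full scan: O(n) per query -- the painful baseline students start with."""
    return [r for r in rows if r["age"] == age]

def build_index(rows, column):
    """Sorted keys plus row positions: the ordering a B-tree index maintains."""
    pairs = sorted((r[column], i) for i, r in enumerate(rows))
    keys = [k for k, _ in pairs]
    positions = [i for _, i in pairs]
    return keys, positions

def query_index(rows, index, value):
    """Binary-search the index: O(log n + matches) per query."""
    keys, positions = index
    lo = bisect.bisect_left(keys, value)
    hi = bisect.bisect_right(keys, value)
    return [rows[p] for p in positions[lo:hi]]
```

Students get to feel the scan get slow as rows grow, then watch the same query drop to microseconds once the index exists, which motivates asking how real databases keep that index cheap to update.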
> “Here’s a simple database that doesn’t support indexes. We’re going to query it and it will be painful. Then you’ll extend the database with a b-tree index.”
Because, in my opinion, this is too complicated a project for many computer science students (at a non-elite university). In other words: this looks like a great project to "weed out" students who would be better off not studying computer science. Lectures/projects that weed out unsuitable students don't seem to be accepted in the academic environment in the USA.
Hm, this seems potentially dangerous. Doing MCAT practice, I got:
Which of the following is NOT a neurotransmitter in the central nervous system?
serotonin
glutamate
acetylcholine
dopamine
Neurotransmitters in the central nervous system include serotonin, glutamate, and dopamine. Acetylcholine is a neurotransmitter in the peripheral nervous system.
However this is wrong. Acetylcholine is both a PNS and CNS neurotransmitter. The rest are also CNS neurotransmitters.
It’s really not unquantifiable. I read “How to Measure Anything in Cybersecurity Risk” and it was an eye opener. Using a table of risks and outcomes with associated probabilities and 90% confidence intervals of dollar impacts we can quantify categories of technical debt.
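A minimal sketch of that kind of risk table, as I understand the book's approach (the risks, probabilities, and dollar bounds below are entirely made up): treat each 90% CI as the 5th/95th percentiles of a lognormal impact distribution, and sum probability-weighted means to get an annualized expected loss.

```python
import math

# Hypothetical risk register: (name, annual probability of occurrence,
# 90% CI lower/upper bound of dollar impact if it occurs).
risks = [
    ("unpatched dependency exploited", 0.05, 50_000, 2_000_000),
    ("schema migration outage",        0.20, 10_000,   200_000),
    ("key-person refactor stalls",     0.10, 25_000,   500_000),
]

Z90 = 1.645  # z-score bounding the middle 90% of a normal distribution

def lognormal_mean(lb, ub):
    """Treat (lb, ub) as a lognormal's 5th/95th percentiles; return its mean."""
    mu = (math.log(lb) + math.log(ub)) / 2
    sigma = (math.log(ub) - math.log(lb)) / (2 * Z90)
    return math.exp(mu + sigma ** 2 / 2)

def annual_expected_loss(risks):
    """Probability-weighted sum of mean impacts across the register."""
    return sum(p * lognormal_mean(lb, ub) for _, p, lb, ub in risks)
```

The point isn't that the output is precise; it's that a dollar-denominated number, however uncertain, can be compared against the cost of paying the debt down.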
If "Cybersecurity Risk" were the only form of technical debt, we'd be just fine(?). Or, at least, we'd have some sort of metric. It wouldn't be a good one, but it'd be there. Chance of a breach: 1%. Existential or not? Probably not. Cost of mitigation? Probably small. Worth addressing? Mostly no, unless you're a regulated entity; then it's mandatory. Quantifiable, for this narrow case, but what of the rest?
Apply the same mentality to other things. If the cybersecurity folks can quantify risk so can you. Are you keeping track of your supply chain? How modular is your code? How easy to refactor is your code? You could think of reasonable metrics to measure various aspects of technical debt. It won't be perfect but it's better than nothing.
I think a bad metric is very much worse than nothing. It sucks away time to record, debate, report, and discuss. It encourages bad decision making. If you throw up a number people will give it weight, even if it's stupid. Multiplying 6 gut checks and trying to make a decision about engineering direction is like tracking someone's mood by the metric of whether they ate an odd or even number of calories yesterday. There's theoretically a signal under all that noise, but the direct gut-check or any number of qualitative clues are so much better than the distracting number.
I agree whole-heartedly. A bad metric is a curse. It's misleading, resulting in waste, and falsely reassuring simply because it exists as a number.
+100 on the gut-check qualitative approach
What about the Bayesian methods shown in "How to Measure Anything"? They have been applied to cybersecurity ("How to Measure Anything in Cybersecurity Risk") in a very thorough and convincing manner. It looks like the business around it is trying to apply it to product management (https://hubbardresearch.com/shop/measure-anything-project-ma...). Basically, the idea is that when things are hard to measure, we should not abandon quantitative scales for qualitative ones (like t-shirt sizes), but instead use probabilities to quantify our uncertainty and leverage techniques like Bayesian updates, confidence intervals, and Monte Carlo simulations.
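The Monte Carlo piece is simple enough to sketch in a few lines. This is my own toy illustration, not anything from the book verbatim: sample each (hypothetical) risk's occurrence and lognormal impact many times, then read a tail loss off the simulated distribution instead of relying on a closed-form expected value.

```python
import math
import random

random.seed(1)
Z90 = 1.645  # z-score bounding the middle 90% of a normal

# Hypothetical risks: (annual probability, 90% CI of dollar impact).
risks = [(0.05, 50_000, 2_000_000), (0.20, 10_000, 200_000)]

def simulate_year(risks):
    """One simulated year: each risk fires (or not), impact drawn lognormally."""
    total = 0.0
    for p, lb, ub in risks:
        if random.random() < p:
            mu = (math.log(lb) + math.log(ub)) / 2
            sigma = (math.log(ub) - math.log(lb)) / (2 * Z90)
            total += random.lognormvariate(mu, sigma)
    return total

losses = sorted(simulate_year(risks) for _ in range(20_000))
p95 = losses[int(0.95 * len(losses))]  # roughly the "1-in-20-years" loss level
```

The appeal is that the output is a whole loss-exceedance curve, so you can ask "how often do we lose more than $X?" rather than arguing over a single point estimate.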
This is not inconsistent with How to Measure Anything, IMO (I like that book as well). The biggest issue to me is that he does not define actual follow-ups on ROI -- in this framework it is all estimated. So while it is all well and good for deciding how to prioritize, it is not helpful retrospectively for checking whether people are making good estimates.
My work is very rarely a nicely isolated new thing -- I am building something on top of an already existing product. In these scenarios ROI is more difficult: you need some sort of counterfactual profit absent the upgrade. Most people just take total profit for some line, which is very misleading in the case of incremental improvements.
The problem is that the muda should have expected values associated with them. Bugs and security vulnerabilities do cost money; these are the 90% confidence intervals of dollar impact from How to Measure Anything.