It makes more sense when you realize that insider trading laws came after it was a problem, not before.
Before insider trading laws, the stock market was much more volatile and, for people not in the know, more akin to gambling. For people in the know, it was an easy way to extract wealth from outsiders who had only the numbers and publicly available information to go on.
IMO, copyright is something that should be shorter the bigger the media producer is.
The reason we need copyright in the first place is to stop someone like Disney from just vacuuming up popular works and republishing them because they have the money to do it.
Disney, however, needs almost no copyright term as encouragement to make new products. They'll do that regardless.
For an individual author, copyright should basically be for their lifetime. If they sell it, the copyright should only last 5 years after that.
A company like Disney should get copyrights for something like 1 year.
But also the type of media matters. IMO, news outlets and journalists should get copyrights for 1 day max. Old news is almost worthless and it's in the public interest that news be generally accessible and recordable.
Disney didn't invent (e.g.) Beauty and the Beast. They took an idea and a story in the public domain and retold it. Then they claim ownership of that and sue anyone who uses the same character(s) for the next 75+ years.
This is not "encouraging creation". This is strip-mining our shared culture.
So yeah, agree 100% that this kind of corporate theft needs to be stopped. I can't see that happening in the face of all the money though.
> Would something like this even be financially feasible
No.
The entire reason they went after Cox is that Cox has deep pockets, and there was a possibility that Cox would just settle and work with them rather than fight this all the way to the Supreme Court.
The problem Sony has is that the maximum money they can claim from an individual is just way less than what they can get from a business. Almost certainly not enough to justify the legal fees.
What's horribly frustrating about the age ID stuff is that the issue in question with Meta wasn't that they didn't know what they were doing, or that they were doing it to children. They did. This wasn't a case of "if only they had known the age, they could have done the right thing".
The laws being passed target exactly the thing that wasn't the problem. They should have been passing "duty of care" laws aimed at social media companies, not "give me your age" laws.
I may have missed it, but almost all these laws being passed for this issue have been pretty much solely around data collection rather than modifying the behavior of the worst businesses in the game.
It would be like seeing a car wreck kill a bunch of pedestrians and then passing a law that pedestrians need to carry IDs on them.
Yeah, in the end there will basically be no consequences for Meta. Facebook is already mostly dead, and the ad revenue from that era has already been collected.
Now we're just moving on to a kind of moral panic think-of-the-kids kind of moment that is thinly-veiled state surveillance.
The primary benefit of AC is that it's really easy to change the voltage of AC up or down with a transformer.
The transmission efficiency of AC comes from the fact that you can pretty trivially make a 1 megavolt AC line. The higher the voltage, the lower the current needed to deliver the same amount of power, and lower current means less line loss, because resistive loss scales with the square of the current (P_loss = I^2 * R).
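To make that concrete, here's a minimal back-of-the-envelope sketch. The 500 MW load and 10 ohm line resistance are made-up illustrative numbers, and it ignores reactive effects entirely:

```python
# Resistive line loss for the same delivered power at different voltages.
def line_loss_fraction(power_w: float, volts: float, resistance_ohms: float) -> float:
    current = power_w / volts               # I = P / V
    loss = current ** 2 * resistance_ohms   # P_loss = I^2 * R
    return loss / power_w

for kv in (110, 345, 1000):
    frac = line_loss_fraction(500e6, kv * 1e3, 10.0)
    print(f"{kv:>4} kV line: {frac:.2%} of the power lost to resistance")
```

Tripling the voltage cuts the resistive loss by roughly a factor of nine, which is the whole game.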
But that really is AC's only advantage. DC at the same voltage will ultimately be more efficient, especially if it's humid or the line is underwater. A changing current induces currents in nearby conductive materials, so a portion of AC power is drained simply by the fact that the current on the line is constantly alternating. DC doesn't alternate, so it never loses power that way.
Another key benefit of DC is that it can bridge grids. The problem with interconnecting grids is entirely down to the nature of AC power. AC has a frequency and a phase. If two grids don't share a frequency (which happens in the EU) or a phase (which happens everywhere, particularly between the US interconnections), they cannot be tied together directly; otherwise the generators end up fighting each other rather than providing power to a load.
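As a toy illustration of the phase problem (all numbers made up): two 60 Hz grids sitting 90 degrees apart, and the instantaneous voltage you'd see across a direct tie between them, which is what drives the circulating currents that make the generators fight:

```python
import math

AMPLITUDE = 1.0               # per-unit voltage
FREQ_HZ = 60.0
PHASE_OFFSET = math.pi / 2    # grids 90 degrees out of phase

for ms in range(0, 9, 2):
    t = ms / 1000.0
    v_a = AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * t)
    v_b = AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * t + PHASE_OFFSET)
    # A direct tie would see v_a - v_b across it at every instant.
    print(f"t={ms} ms: grid A {v_a:+.2f} pu, grid B {v_b:+.2f} pu, tie sees {v_a - v_b:+.2f} pu")
```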
In short, AC won because it was cheap and easy to make high-voltage AC. DC is coming back because only somewhat recently has it become affordable to make similar high-to-low and low-to-high voltage conversions for DC, and DC carries further benefits that AC does not.
If we could change grids in one way, the best thing we could probably do is switch from HVAC for transmission to HVDC.
I think the ideal grid would switch from DC to AC at a substation or other central location for each community.
Why might someone do this?
One of the hardest problems to work through is a grid cold start. When a grid goes completely down it takes a monumental effort to bring it back up again. There's a delicate balance that has to be struck with load and other generators coming online. It's hard to do. The AC waveform is a finicky thing that gets pulled and mutilated by every motor or vacuum cleaner that starts running.
With a bunch of AC microgrids joined by a DC major grid, you can completely sidestep that problem. It suddenly becomes a lot easier to ramp up power production because deformations of the waveform happen in small local regions, not everywhere in the grid. Further, the other plants just have to watch the DC voltage; they don't need a whole bunch of equipment for syncing with the AC waveform of the grid as a whole.
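That "just watch the DC voltage" idea has a standard shape: voltage droop. A minimal sketch, with an invented gain and voltages purely for illustration:

```python
# DC-voltage droop: each converter sets its power from the local bus voltage
# alone -- no grid-wide phase or frequency synchronization required.
NOMINAL_KV = 500.0
DROOP_MW_PER_KV = 20.0   # made-up gain

def droop_power_setpoint(base_mw: float, bus_kv: float) -> float:
    # Inject more power when the bus sags, back off when it rises.
    return base_mw + DROOP_MW_PER_KV * (NOMINAL_KV - bus_kv)

for kv in (500.0, 495.0, 505.0):
    print(f"bus at {kv:.0f} kV -> setpoint {droop_power_setpoint(300.0, kv):.0f} MW")
```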
DC grid conversion is much more expensive than even a large transformer.
Cold (aka "black") start is not a common occurrence. Vacuum cleaners are not much of an issue. Industrial consumers are usually mandated to sort out their apparent power factor so it's not too weird for the grid.
What does matter sometimes is phase/frequency trips. The grid frequency is an important coordination mechanism between generators. When it gets distorted by loads coming on/offline, sometimes this can cause other generators to trip out at 59/61Hz, and then you get the Spain blackout situation.
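For illustration, a toy version of the under/over-frequency protection check being described; real relays use grid-specific thresholds and time delays, so the 59/61 Hz window here is just the figure from the comment above:

```python
# Trip a generator whose measured frequency leaves the allowed window.
UNDER_FREQ_TRIP_HZ = 59.0
OVER_FREQ_TRIP_HZ = 61.0

def should_trip(measured_hz: float) -> bool:
    return measured_hz < UNDER_FREQ_TRIP_HZ or measured_hz > OVER_FREQ_TRIP_HZ

for freq in (60.0, 59.4, 58.9, 61.2):
    print(f"{freq:.1f} Hz -> {'TRIP' if should_trip(freq) else 'stay online'}")
```

The danger is the cascade: each trip removes generation, which drags the frequency further out, which trips more machines.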
Batteries could solve this but the software/regulatory framework isn't entirely there yet. See e.g. the UK market for "fast frequency response".
> With a bunch of AC microgrids joined by a DC major grid, you can completely sidestep that problem.
Not necessarily. Big local consumers will be large relative to the microgrid, which will not have a lot of inertia. This is one of the things you really notice when you go 'off grid': your grid is essentially your house and whatever else you decide to power from it, and unless a couple of beefy motors are already running, starting a new one has a high likelihood of tripping the inverter, even a very beefy one. Start-up currents for larger consumers can be really high, and you need a lot of inertia in your grid to overcome that.
> Start-up currents for larger consumers can be really high and you need a lot of inertia in your grid to overcome that.
This is true of an AC grid as well. Big inductive loads often have to buy special equipment before hooking up to the grid because of their impact. It'd be the same with a DC-first grid: to overcome a large startup current they'd likely need to buy a bank of capacitors. Which, funnily, is exactly what they'd have to do to run on straight AC.
I grew up understanding that one of Tesla’s big innovations was using AC to transmit power over long distances so that there weren’t tremendous losses and line meltings or something. Can someone help me reconcile the delta between this understanding and the above comment? Was this not actually a thing? Or have we overcome it somehow?
HVDC is a miracle of modern engineering that could not have been done in the days of Tesla. It removes several sources of loss that would otherwise turn valuable power into heat. That said, it isn't without drawbacks: the cables are quite expensive, harder to repair, and somewhat fragile, and 'local stepdown', which would otherwise just be a properly rated (capacity and insulation) transformer, now turns into a much higher-technology exercise. HVDC is for now relegated to a long-haul role, not unlike oil pipelines compared to the far more interconnected and widespread AC network. You are unlikely to see HVDC used for lower-level distribution in the next decade, just as you are unlikely to see your local gas station hooked up to an oil pipeline.
DC is also much harder to switch than AC; the latter has zero-crossings which tend to extinguish any arcs that form, but DC will just keep going. Look at the DC vs AC ratings on switches and you'll see a huge difference.
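You can see the zero-crossing difference with a trivial sketch: count sign changes over one second of a sampled 60 Hz wave versus a flat DC level:

```python
import math

SAMPLES = 10_000  # samples over one second
ac = [math.sin(2 * math.pi * 60 * i / SAMPLES) for i in range(SAMPLES)]
dc = [1.0] * SAMPLES

def zero_crossings(wave):
    # Count sign changes between adjacent samples.
    return sum(1 for a, b in zip(wave, wave[1:]) if a * b < 0)

print("AC zero crossings per second:", zero_crossings(ac))  # ~120 for 60 Hz
print("DC zero crossings per second:", zero_crossings(dc))  # 0
```

Every one of those ~120 crossings is a moment where an opening AC breaker's arc naturally starves; a DC breaker has to force the current to zero itself, which is why the DC ratings on switchgear are so much lower.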
It can be either AC or DC. Aluminum TIG welding uses AC, whereas you'd use electrode-negative DC for steel or copper. As I understand it, with aluminum you need the electrode-negative part of the waveform to transfer heat to the work piece, but you need the electrode-positive part of the waveform to clear out the crud that accumulates in the electrode-negative part. Often you set a lopsided duty cycle and use different frequencies depending on how deep you want the weld to penetrate.
If you go to 100% electrode positive you tend to heat the metal rather poorly, but can turn the end of your tungsten electrode into a molten blob -- which is usually not desirable.
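A minimal sketch of that lopsided duty cycle, treating the output as an idealized square wave; the 120 Hz frequency, 70/30 balance, 150 A amplitude, and function name are all made-up illustrative values:

```python
def tig_current(t: float, freq_hz: float = 120.0, en_fraction: float = 0.7,
                amps: float = 150.0) -> float:
    """Commanded current at time t; negative = electrode-negative (heating),
    positive = electrode-positive (cleaning)."""
    phase = (t * freq_hz) % 1.0   # position within the current cycle, 0..1
    return -amps if phase < en_fraction else +amps

# One cycle at 120 Hz lasts ~8.3 ms; sample a bit more than one cycle:
for i in range(10):
    t = i * 0.001
    print(f"t={t * 1000:.0f} ms: {tig_current(t):+.0f} A")
```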
> the cables are quite expensive, harder to repair and somewhat fragile
Nope, HVDC uses the same style of cable as AC. I'm not sure why you'd think they'd be different.
The HVDC cables that can be expensive are the ones meant to be submerged, a feat that over any real distance only HVDC can manage: a long HVAC run can't be submerged because the cable's capacitive effect eats its capacity.
But otherwise I agree. It's more a pipe dream of mine that HVDC becomes more commonplace, as I believe it'd ultimately make grids more stable and resilient.
Hm, yes, you are right. I must have been reading about submerged cables, but it was a while ago.
The devil is in the details here: AC three-phase cabling cannot easily be repurposed for HVDC because DC only needs a pair of conductors rather than three lines 120 degrees out of phase. So while technically the cable itself can be the same, one of the three conductors would sit idle and the carrying capacity would drop accordingly; for an in-ground or overhead cable not specifically made for DC, that is a lot of wasted capacity.
AC/DC hybrid transmission infrastructure defeats important fault handling and inertia characteristics you get for free out of a fully connected AC grid. HVDC converter stations cannot handle remotely the same amount of fault current that synchronized machines in an AC grid can. It's maybe 1/10th the capacity. You also don't get the same guarantees of zero crossing in a fault scenario with HVDC. If you never cross zero, it's possible some circuit breakers cannot function anymore.
I've done it. The larger the grid, the more difficult it is. But as long as you have fuel and an adequately maintained grid, it's not as hard as some in the comments make it out to be. Better regulation would make it easier. For instance, in Singapore emergency diesel or some other black start method is a requirement for most generation stations. The rest of the world likely has laxer requirements.
The article presumes that the models we have today describing everything could still be subject to a major paradigm shift.
Maybe they could be, but it seems pretty unlikely. The edges of a lot of scientific understanding are now past practical applicability. The edges are essentially models of things impossible to test. In fact, relativity was only recently fully backed up with experimental data.
I don't think paradigm shifts have to be 'better' in some march-toward-progress sense, they can be lateral or even regressive in that way and still lead to longer-horizon improvements.
I think also what's practically applicable changes constantly. Perhaps we're truly at the End of Science, but empirically we've been wrong every other time we've said that. My money is that there's more race to run.
On that note, Terence Tao gave a good interview to Dwarkesh Patel talking about Kepler. He pointed out that the previous geocentric models were actually more accurate than Kepler's at the time, in part because they'd had so much complexity piled on to solve minor errors. Kepler's theory was more elegant, but at the time it wasn't necessarily a better model.
I think important paradigm shifts can often look like this - there's not necessarily a reason to expect them to be instantly optimal. Deep Learning vs 'good old-fashioned AI' is another example of this dichotomy; it took a long time for deep learning to establish itself.
> I don't think paradigm shifts have to be 'better'
But they do. Paradigm shifts happen because the new paradigm explains the unexplained and importantly also covers the old model. If prior data is unexplained with a paradigm shift, the shift will never be adopted.
> Perhaps we're truly at the End of Science
Who said that? Just because the core of our current models seem pretty rock steady doesn't mean there's not more science. It simply means that we can mostly just expect refining rather than radical discovery.
There will be sub-paradigm shifts, but there's likely not going to be major "relativity" moments from here on out.
> Paradigm shifts happen because the new paradigm explains the unexplained and importantly also covers the old model
Empirically it seems that paradigm shifts are more driven by deaths and retirement rather than improved fit to the data. Moreover the way that you reconcile old data with the new model can be contestable; it's not like everyone all at once says "oh this new model is clearly a strict superset of the previous one, time to adopt it". With all that said I think one could argue that this stuff is basically noise and that the process still 'trends toward progress' (and I'd agree). But I would say that the scale of noise can also be quite large relative to things a human might experience in their life. I was sort of imagining social-disruption (like a dark-age type regression) as the 'backwards paradigm shift'.
> but there's likely not going to be major "relativity" moments from here on out
I cannot understand how anyone can treat this as something that can be objectively concluded; by definition these kinds of radical paradigm shifts are basically unforeseeable up until they happen. I called it the "End of Science" to draw a parallel to "End of History"-type thinking, because both (IMO) take the view that "there will be no more revolutions, only incremental adjustments on an unshakeable core into infinity", which I personally feel is a 'vibes-based' assessment of things. It's not even that I disagree with it so much as I feel the statement is basically (and will always be) a pure guess, one which many people have made and been wrong about in the past.
> I cannot understand how anyone can treat this as something that can be objectively concluded
Mostly because the room for the unexplained in physics is really small. It's possible we end up finding some sort of big revelation in quantum physics that completely changes how we view relativity. But even then, we are more likely to find that relativity is just a simplification of a more complex model with better explanatory power. Very much like how Newtonian physics still works really well, from quite small things up to anything most humans will deal with on Earth; it's only for uncommon experiences in extreme environments that relativity becomes a requirement to make the math work.
> there will be no more revolutions, only incremental adjustments on an unshakeable core into infinity
I guess I'm just more comfortable with that position. A lot of the revolutions in science circled around detecting and measuring things previously immeasurable and unseeable. The study of EMF exploded when it did because that's also when our ability to generate and measure electricity grew beyond a party trick.
We are now at the point of unknown unknowns, with no theoretical way to observe them. The physics models at the fringes are mostly centered on things we can't measure.
There just aren't many interactions we can't currently predict; the only one I know of is radioactive decay.
And a lot of this shows in modern society. In physics, the last major paradigm shift was relativity, a model now roughly a century old. Everything since has been incremental improvement on the physics model.
I don't think this is because we just aren't as smart today as we once were. Quite the opposite: there are far more people on the planet, so there are almost certainly a lot more "Einsteins" trying to find a new paradigm, and they've simply failed over the decades because it's seemingly increasingly unlikely that there is something to find.
> It simply means that we can mostly just expect refining rather
The practical issue is whether there will be enough funds for mere "refining" as opposed to "paradigm shifts", which I understand as new and "exciting" discoveries. I'm not a scientist, of course; this is just my layman's understanding.
Physics is a bit of a special case. This certainly doesn't apply to, say, biology, medicine, cognition, not to mention any of the social sciences—i.e. most research.
I'm also a little skeptical about the practical value of the bleeding edge of both experimental and theoretical physics. Interesting? Sure.
Cognition is just a special case of medicine, which is a special case of biology, which is a special case of chemistry, which is a special case of physics.
And the closer you get to physics, the less likely any sort of major paradigm shift will be discovered (though the article focuses pretty heavily on physics which is why I do as well).
But even in those fields, there are core parts that aren't likely ever to see a paradigm shift. For example, in biology I doubt we'll see a shift away from evolution, as any new model would also have to explain everything evolution already explains.
I agree that at the edges you'll possibly see more paradigm shifts and discovery, but those are all going to be working from things that will not see paradigm shifts. For example, biology can't escape things like single celled organisms made up from atoms and chemical compounds.
But ultimately, what I disagree with in the article is the notion that discovery won't ultimately be a process of hypernormalization. In medicine, we are unlikely to see a new paradigm that displaces germ theory; research will mostly focus on finding new compounds and delivery mechanisms for treatment rather than on a new paradigm for how to treat disease.
The softer sciences are the only place you might find new paradigms, but that's simply because the data itself is so squishy and poor that it's easy to shift around. There it's less a question of the science and more of the utility of the model (regardless of whether it aligns with reality).
> article presumes ... everything could still be subject to a major paradigm shift. ...seems pretty unlikely
Alternatively: there's plenty of mainstream, accepted science that's plain, flat out, provably wrong. Yet, it is against good taste (job security, people's feelings, status quo bias, etc.) to point this out.
Hence, it can actually be tricky to catch wind of, or get a grasp on, such issues to begin with, much less pursue such issues toward meaningful, published, recognized change in understanding (that is to say: paradigm shift).
I'd name some examples, but you wouldn't believe me.
With respect to the article, it seems current LLMs can (though obviously do not necessarily have to) return text that appears to reason, pretty reasonably, about paradigm shifts when given the required context and nudged quite forcefully in particular directions. But, as the article seems to indicate, LLMs do not tend to find, investigate, and report on paradigm shifts on their own very much. (But maybe part of that is intrinsic to how they are programmed and/or their context?)
> there's plenty of mainstream, accepted science that's plain, flat out, provably wrong. Yet, it is against good taste (read: job security, people's feelings, etc.) to point this out.
I highly doubt that.
There are a lot of people who think they're proving the mainstream wrong. But more often than not, it's cranks using bad, unreplicated tests. Those bad tests get propped up, ironically, by people's feelings and job security rather than by a built-up body of evidence.
They also almost always have to ignore the mainstream body of evidence, dismissing it as wrong and bad because of a conspiracy.
For example, plenty of creationists believe they have irrefutable evidence that evolution is provably wrong. It's usually a few cherry-picked or poorly interpreted results, or sometimes just flat-out lying. And they often simply lie about the existing body of evidence that supports evolution.
Another example is the antivaxx movement. Wakefield and RFK both built careers, and made a lot of money, talking about how the mainstream was wrong. Even when the industry adopted some of their recommendations (abandoning Thimerosal), they simply ignored the fact that further data didn't support their claims.
> In fact, relativity was only recently fully backed up with experimental data.
Can you elaborate on the assertion you made here? In addition to the important points @elbasti made about tests performed approximately a century ago, what does it even mean for a scientific theory to be "fully backed up"? Such theories can be tested and the tests either passed or the theory disproven but it's not possible to _prove_ such a theory. And to some extent we already know that relativity cannot be the final answer because it doesn't mesh well with quantum mechanics (which has been experimentally tested substantially, arguably even more than relativity has).
Nope. The edge for a lot of interesting science is difficult-to-use scientific software and scientists not applying basic statistical techniques correctly.
What's impressive is that if you look at the issues PATCO struck over, they were basically identical to the problems ATC faces today. The problem is that everything has only gotten worse for controllers.
The union pretty loudly, and early on, pointed out major problems with the job, and the response of ignoring them for four decades is what's driven us to the current situation.
There's unfortunately an alertness problem WRT automated systems.
If the reason you have the human there is to handle the unusual cases, you run the real risk that they just aren't paying attention at critical moments when they need to pay attention.
It's pretty similar to the problem with L3 autonomous driving.
Probably the sweet spot is automation that makes clear the current set of instructions in effect at the airport and flags when a dangerous scenario is created. I believe that already exists, but it's software that was last written in 1995 or so.
Regardless, before any sort of new automation can be deployed, we need slack for ATC to adopt a new system. That's the most pressing problem. We could create the perfect software for ATC, but if the current air traffic controllers are all working overtime, each doing a job designed for three people, they simply won't have the time to explore and understand that new system. It'll get in the way rather than solve a problem. More money is part of the solution here, but we also need a revamped ATC training program to help fill the current hole.