> This means that we feel comfortable about the prospect of allocating 178 billion /48 prefixes under that scheme before problems start to appear. To understand how big that number is, one has to compare 178 billion to 10 billion, which is the projected population on Earth in the year 2050.
~So that's a maximum average 17.8 IPs per person? Seems awfully low, considering people these days have multiple devices connected to the Internet at any given time: phone, watch, tablet, laptop(s), perhaps desktop(s), security cameras, IP phones, refrigerators, washing machines, TVs, video game consoles, I could go on...~
Edit: oh, wow I misread that completely. It's 17.8 /48 ranges per person on average. Yep, that should be enough for quite a while.
> The IPv6 pool is quite vast. Yet I am a little surprised that an individual can receive a /48 without much trouble: that's a lot of IPs.
A /48 is considered one "site" in current thinking. Since IPv6 subnets are /64, you have 16 bits between the /48 and the /64. This is the equivalent of using 10/8 for your network and carving it into /24 IPv4 subnets: in both cases you can have up to 2^16 subnets.
The main difference is that a /24 can hold ~250 hosts, while a /64 IPv6 subnet can hold the equivalent of four billion Internets (2^32 * 2^32 addresses).
But one of the selling points of IPv6 is eliminating the mental math of worrying whether you have "enough" addresses (and then carving things into /26, /30, etc.).
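The arithmetic above is easy to sanity-check with Python's `ipaddress` module. A small sketch, using the RFC 3849 documentation prefix as a stand-in for a real /48 allocation:

```python
import ipaddress

# A stand-in /48 "site" allocation (2001:db8::/32 is reserved for documentation).
site = ipaddress.ip_network("2001:db8::/48")

# Bits available between the /48 site prefix and the /64 subnet boundary.
subnet_bits = 64 - site.prefixlen
print(2 ** subnet_bits)  # 65536 possible /64 subnets, like /24s under a 10/8

# Addresses in a single /64: the low 64 bits are the interface identifier.
one_subnet = ipaddress.ip_network("2001:db8::/64")
print(one_subnet.num_addresses == 2 ** 32 * 2 ** 32)  # True
```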
> A /48 is considered one "site" in current thinking.
This is the really important part. As they continue to hand out IPv6 like candy, the longest prefix that operators will accept will keep shrinking.
No ISP wants a hundred million+ routes in its routing table, so people will start to drop anything longer than a /42, then /40, then /38, etc. until the table gets small enough, and shunt everyone else's traffic off to Hurricane Electric or the like as a default route.
People have this mindset of "we added a bunch of zeros, its an infinite resource now!" which is how we ended up in this mess to start with.
> What is your basis for this opinion?

Network operators are already talking about it.
> Too late. IPv4 routes are set to hit 1024K (2^20) in January 2024 at current trends.
Allow me to expand that number for you: 1,048,576
One million. A reasonable upper bound for max announced v4 prefixes is somewhere around two million. You can handle that in a few GB of RAM. IPv6 could see 100x or 1000x that number based on how we are handling allocations, at which point prefix trimming will happen.
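A quick back-of-the-envelope sketch of those table sizes in Python. The ~1 KiB of state per route is a made-up round figure for illustration, not a measured one:

```python
# Hypothetical memory cost per prefix of RIB/FIB state (round number, assumed).
PER_ROUTE_BYTES = 1024

# Today's ~1M table, a 2M upper bound, and the 100x / 1000x scenarios above.
for routes in (1_048_576, 2_000_000, 2_000_000 * 100, 2_000_000 * 1000):
    gib = routes * PER_ROUTE_BYTES / 2 ** 30
    print(f"{routes:>13,} routes -> {gib:,.1f} GiB")
```

At 100x or 1000x, the memory bill alone makes prefix filtering look attractive, independent of FIB lookup hardware limits.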
IIRC /64s are ideally meant to address individual subnets (the bottom 64 bits aren't intended for routing), so a /48 is really only 64k addresses in the conventional sense.
/48 is actually a relatively standard allocation even for home connections, although /56 is more common.
This was key to me understanding IPv6: you don't really do variable-length subnets. You just get a /48 and break it up into /64s. Because the space is so vast, you can do inefficient things with no chance of running into the allocation issues of v4, and you don't have to mentally subnet a 128-bit address, which maybe some can handle but hurts my head.
Another point to consider is the IPv6 global routing table for internet routers. Suppose you decided to give end users /120s instead, and those /120s were all routable on the internet. That means there are 120 bits of addressing just to find the network, so there are 2^120 such networks in theory. If you could actually enumerate this, you'd be well on your way to brute-forcing AES-128. In other words, this is just infeasible.
By handing out /48s the routing table stays manageable. This is the smallest address block you can announce via BGP for this reason.
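The AES-128 comparison above is a one-liner to check (a toy illustration, nothing more):

```python
# Networks to enumerate if every /120 were globally routable,
# next to the AES-128 keyspace mentioned above.
networks_at_120 = 2 ** 120
aes128_keyspace = 2 ** 128
print(aes128_keyspace // networks_at_120)  # only a factor of 2**8 apart
```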
Given the utter vastness of IPv6 we are also able to do things like carve out an entire /7, fc00::/7, for unique local addresses, and still tell people they shouldn't actually need these addresses at all.
As to the actual process of getting a PI block, I think it is likely to involve some questions. A similar objection exists to handing out anything smaller than a /48: more people having their own block means more routing entries. Much better if an existing provider carves a /48 out of its allocation and routes your traffic. This is probably balanced against the fact that, by requiring you to deal with an existing LIR and set up BGP (and your LIR won't want you to muck that up), the number of people who will actually do this just for fun is limited.
The IETF IPv6 allocation scheme was crazy from the start. The standard subnet size for autoconfiguration purposes is /64 = 18 billion trillion addresses. Of course the reality is that autoconfiguration is only marginally useful, and lots of things like servers and infrastructure links are statically assigned or assigned through DHCPv6. Most of the time I assign /112s to server subnets to limit the risk of IPv6 neighbor discovery attacks.
The standard /48 assignment size from ISPs to end users was targeted mainly at stingy residential ISPs that were only assigning one IP per customer. They wanted any customer to have as many publicly routable subnets as they would ever need or want, but 65k is a lot of subnets. This was later updated to a default of /56 (256 subnets), but you could still get a /48 just for asking.
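The subnet counts for the delegation sizes mentioned here fall out of the same 64-bit boundary arithmetic (a quick sketch):

```python
# /64 subnets available to an end user for each common delegation size.
for prefixlen in (48, 56, 60, 64):
    print(f"/{prefixlen}: {2 ** (64 - prefixlen)} subnets")
# /48 gives 65536, /56 gives 256, /60 gives 16, /64 gives exactly 1.
```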
It got even weirder with RIR-to-ISP portable allocations. One of the original schemes would have had RIRs allocating a "TLA" /16 (!) to only a few mega-ISPs. Most ISPs/hosts would have had to get their "NLA" address blocks from, and be subservient to, the 800-pound gorillas of the industry. https://www.rfc-editor.org/rfc/rfc2450.txt, https://www.rfc-editor.org/rfc/rfc2374
This was loosened somewhat in 2000 with RFC2928 which designated a "Sub-TLA" /29 as the initial ISP size. This would open it up to many more potential registrants, at least in theory. https://www.rfc-editor.org/rfc/rfc2928.html
The TLA/Sub-TLA/NLA system was abandoned in 2003 with RFC3587 after panic set in that nobody was deploying IPv6. https://www.rfc-editor.org/rfc/rfc3587.txt. ARIN had only made 30 Sub-TLA assignments by the end of 2002.
While RFC3587 wisely punted allocation policy to the RIR communities, the RIRs still had very restrictive policies inherited from RFC2450. Only around 2004 did they loosen to the point that ordinary networks could start requesting addresses. Google, for example, got its first IPv6 allocation in 2005. The problem is that by then few networks were interested, or they just assumed they wouldn't qualify. When I got my first /32 in late 2004 there were fewer than 1000 routes in the global IPv6 table! Today there are ~135,000.
These more permissive rules only applied to ISPs/hosts. End-user orgs of any size were not allowed to request portable /48s directly from RIRs until 2006, after much debate in the RIR communities and vocal objections from IETF members and certain large ISPs.
Today router silicon and RIR policies have converged to a reasonably functional state. Too bad it took 20 years to get here, or IPv6 might actually be on its way to replacing IPv4. Instead that is still about 20 years away.
The 4G core is derived from the one built for GSM. Sure, it has been updated here and there, but it still runs some software from the 1990s.
With 5G, the core was redone from scratch: everything is a microservice, there is a service mesh for discovery and subscription, etc. This base will be kept for ~30 years, so 6G is going to be a "simple" extension of the 5G core.
In the same thread, there is France with a price of 645€.
I don't understand why a country loaded with nuclear reactors can't beat Germany on price. Or maybe it means that the neighboring countries are willing to pay France this much because they are desperate, but in that case why not buy the electricity from Germany at 432€?
I think the answer you may be looking for is: nuclear reactors are unreliable too, just in a very different way from solar.
Solar produces power according to sunshine, with no moving parts, and if a panel breaks the rest of the panels go on working, so the failure is a small deterioration in power output. Nuclear plants are differently unreliable, because they have lots of moving parts and elaborate maintenance procedures, and during maintenance a plant produces 0% power instead of that small deterioration. The unreliability is on a different timescale, both better and worse than solar.
France has lots of reactors, but they're not all up, and you can't expect all to be up, so sometimes they'll have this problem. IIRC they had something similar in 2016, which they were able to solve using imports.
You clearly don't work in solar. The panels move on trackers, the panels have to be cleaned of dust and snow, vegetation around the panels has to be cut, inverters fail, etc. There are many moving parts, and maintenance is required.
That sort of maintenance can easily be done without specialised engineers. A gardener can mow the lawn, a janitor can clean the panels, and most solar plants are not on trackers but fixed.
But that's not what I had in mind. When an inverter fails, it has to be replaced, and meanwhile a few panels don't produce power. A few, as opposed to all. You get many small failures that leave the plant producing ninety-some per cent of its possible output, instead of the nuclear kind of failure, where the plant is down, produces zero energy, and can't be turned back on even with several days of advance warning.
Nuke is 'reliable' in a much more important way - the plant outages are scheduled and the generation is scheduled. You can (and do) bet the health of the grid on nuke having the output you plan when you plan for it to be there. This is referred to in generation industry jargon as 'dispatchable' power. This makes a modern zero-sum storageless grid work. Solar is non-dispatchable. You can estimate what power output will be available, you can over-provision generation and store it in expensive batteries, but you simply can not rely on it being there and expect the grid to stay online.
It turns out that regular plant maintenance outages aren't particularly important compared to not being able to properly plan generation.
Of course you can decide that. That's what the French did before the price peak in 2016. The thing is that only some of the decisions involved plannable dates.
They decided that when problems were discovered which meant supposedly redundant safety mechanisms weren't actually redundant, those problems should be fixed within time limits. They decided what kinds of problems should require fixes within set time limits, and they decided what those time limits should be.
And they decided that some reactors should be taken down for planned maintenance in specific periods.
When problems were discovered at inopportune moments, the decisions combined to leave them with too few operational reactors.
Nuclear plants are "dispatchable" in the same way wind farms are "dispatchable" - you can just not feed as much of the power you are creating into the grid.
Gas and hydro are actually dispatchable. You can dial down generation and save the fuel (or water) for later.
It's just that their high costs make that an uneconomical decision. Nuclear fuel is damn near free, so why would you ever want to turn it off unnecessarily? Only if the price of electricity goes into negative territory is it economically viable to turn off a nuclear power plant.
Same thing with wind / solar. As long as the price of electricity is positive, there's no point in turning them off.
--------
Now think about coal / natural gas. The price of fuel in those cases dominates, which means it's economically viable to turn off when the price of electricity drops (even if it's still above $0).
Nuclear fuel is not free but it is cheap, especially compared to the eye watering capital costs of nuclear power.
It is helpful to turn off power production to stabilize the grid. It's not always about production costs. This is why nuclear plants and wind turbines sometimes just "waste" excess energy.
Of course. But also not much less. If you have a 50-unit wind farm you'll have a lot of problems that reduce your output to 98% of what the wind allows until you can fix them. Not so many that reduce output to 0% of what the wind allows.
Colour me confused: at a nuclear power plant the operators can choose how much power to make every day, and generally choose as much as possible; at a wind farm they can set a limit on the maximum power generated, but the actual power generated could be anywhere from zero to that maximum depending on the wind speed.
Nuclear is more often considered baseload generation as opposed to dispatchable, but wind is neither baseload nor dispatchable, since it is not possible to control the wind speed.
>a nuclear power plant the operators can choose how much power they make every day
No, they can't.
It takes 24-48 hours for them to ramp production up or down, and this is both expensive and difficult. This is far too slow to be useful. Natgas takes anywhere from ~7 seconds to a few minutes to go from 0% to 100%, and it's cheap and easy. Same for batteries. Hydro is slower but still on the order of seconds to minutes. This IS useful.
When nuclear plants have claimed faster ramp-up/down speeds than that, they are still producing the same amount of energy at the same cost; they are just wasting some of it.
The nuclear industry tries to blur these distinctions.
>but wind is not baseload or dispatchable since it is not possible to control the wind speed.
Wind can often pull the same "trick" nuclear does: if the grid only wants 100MW but the farm is producing 150MW, it can just let 50MW go to waste.
This isn't really "dispatchable" either, but sometimes wind energy advocates pull the same trick.
Wind farms can’t plan to make 100 MW or 150 MW or 50 MW on Tuesday next week if there is no wind.
Nuclear can.
I agree nuclear isn’t dispatchable, natural gas is. But wind is not dispatchable nor is it baseload so making a comparison to nuclear and pretending there is firm energy out of a wind farm on a given day is false to me.
Pretty sure they didn't envision Germany having cheaper winter electricity than France.
All I can find evidence for is a balloon that short circuited a substation, which is a blackout that is unlikely to have been prevented by a nuclear reactor.
> Pretty sure they didn't envision Germany having cheaper winter electricity than France.
Unfortunately that does not reflect at all in the consumer price. We have been taking top spots in the energy price rankings for years now and there is no real hope for improvements.
Right. Solar depends on the weather and is unreliable in that respect, but reliable against mechanical problems. Nuclear is the opposite. More and less reliable.
I thought Twitch was the streaming king; I guess China is still on another level. I have a hard time getting my head around those figures: ~$1.9 billion in one livestream [1]
I don't understand their statement about MPLS and security: "a need for MPLS to make their network operate securely".
Isn't MPLS used for routing and for building SDN fabrics where you apply a bunch of QoS rules depending on the MPLS labels? That has nothing to do with security.
I also noticed the writing was particularly poor. And not just the technical detail... Everything from grammar to general syntax needs tweaking for ease-of-reading.
Before, you owned a nice full-featured i9 processor, and you could go on eBay and sell it for all its power.
Tomorrow, you buy your shiny new basic i10 processor. You then pay extra to unlock the fast multiply for your games, the fast SIMD for your AI, and the upgrade for 4K decoding. After a while, you decide to sell it on eBay. Well, your nice shiny software-defined features are now worthless. Well, not for Intel, which has the opportunity to sell the same stuff twice or more.
What would be incoherent is for Intel not to juice this technology even more. Why not have a limited pool of licenses and let people compete for them monthly?
The problem is that the boundary of ownership is being shifted, and by the look of it the consumer will lose.
I believe that Intel is smart enough to tie the licence to a TPM signature so that leaked keys can be banned.
Now, I concur that this trend of "you own something but you are going to pay rent for it anyway" is starting to get annoying. Even more annoying as it seems to be the "accepted" new norm. For example, the new Mercedes cars where you have to pay a monthly fee to unlock some extra degrees of steering angle.
At this point, the French should just strike a deal with the Chinese. Sure, they already know how to build nuclear submarines, but I am sure they would pay billions to lease land in New Caledonia, Guadeloupe... even Brittany. In the end, France doesn't care about the East China Sea, so they should be more pragmatic when their historic "allies" make such moves.