You don't need this. Strictly speaking, we don't need much.
But a travel router can be nice to have.
I bring some tech with me when I travel.
Obviously a phone, but also a decent-sounding smart speaker with long battery life so I can hear some music of my choosing in decent fidelity without using Bluetooth [bonus: battery-backed alarm clock!], a laptop for computing, a streaming box for plugging into the TV, maybe some manner of SBC to futz with if I'm bored and can't sleep during downtime.
All of this stuff really wants to have a [wifi] connection to a local area network, like it has when I'm at home.
A travel router (this one, or something from any other vendor mentioned in these threads, or just about anything that can run openwrt well) solves that problem.
All I have to do is get the router connected to the Internet however I do that (maybe there's ethernet, decent wifi, or maybe my phone hotspot or USB tethering is the order of the day), and then everything else Just Works as soon as it is unpacked and switched on.
And it all works together on my own wireless LAN -- just as those things also work at home.
Bonus nachos: With some manner of VPN like Tailscale configured in the router, or the automagic stuff this UBNT device is claimed to be able to do, a person can bring their home LAN with them, too -- without individual devices being configured to do that.
I think travel routers are pretty great, myself.
(But using Ubiquiti gear makes me feel filthy for reasons that I can't properly articulate, so I stick with things like Latvian-built Mikrotik hardware or something running OpenWRT for my own travel router uses.)
In my opinion, you only need this if you don't like connecting to unknown (insecure or suspect) networks to get access to the internet. Ideally, you would configure this kind of router to connect to a VPN, so that as soon as it gets online it immediately logs in to the VPN and reroutes all your network traffic through it. This makes it more difficult for someone to hijack your connection or crack it. From the comments it also appears that some people use it to connect to their home network, either to access their home server or to use it as a VPN exit (this can help you get around geo-fencing and the unnecessary additional authentication that some services require for fraud prevention). Some travel routers can also combine two or more internet connections (public WiFi + mobile data) to provide a more stable internet connection, which is often desirable.
You have a workplace that insists you are working from your home while you travel.
It has limits, as the Amazon hardware keypress gadget in the recent North Korea story showed, but unless you're working at superbigtech or a defense contractor it would probably work.
connect screenless devices, e.g., Echo Dot
extend weak wireless range in hotel
screen share or network between multiple devices, e.g., travel with two laptops and use a virtual KVM
only have to do the captive portal on one device - many hotels limit the number of devices
extra security buffer
a phone can't bridge wifi for headless devices like this
etc etc
I wrote code to do this between a C64 and a 1541 disk drive when I was in high school. It got me to the international science fair and (probably) earned me a full tuition scholarship for undergrad.
I called MS support once because some random dude managed to get my son's account registered under his "family", and then locked my son out of being able to update his own machine.
The MS support guy literally tried to get me to password crack the random dude's account. Like, he wanted me to help him guess the guy's password so we could log in as him and change his family settings.
The Microsoft family/organisation situation is fucking ridiculous. If you somehow get enrolled in either, good luck ever ridding your device of it.
For literal years after leaving university, my windows install was still linked to my uni despite multiple attempts to fix it. All this, because I logged in using my university Microsoft account once.
My Operating Systems class as an undergrad used a book written by the Multics guys.
I hated it. It would present a bunch of apparently incompatible techniques for e.g. job scheduling, and then say that Multics implemented all of them. I immediately understood why UNIX came about: the Multics designers appeared incapable of having opinions, which led to an OS that was bloated and hard to understand.
That class was a long time ago, and I was a young, arrogant, and uninformed programmer, and maybe that take was wrong. But it left a strong impression at the time, and it was one of the few books from my undergrad days that I sold back instead of keeping.
As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good, at first glance -- doesn't include any mass, energy, or thermal-system numbers for the infrastructure you would need to have in place to replace failed components.
As a first cut, this would require:
- an autonomous rendezvous and docking system
- a fully railed robotic system, e.g. some sort of robotic manipulator that can move along rails and reach every card in every server in the system, which usually means a system of relatively stiff rails running throughout the interior of the plant
- CPU, power, comms, and cooling to support the above
- importantly, the ability of the robotic servicing system to replace itself. In other words, it would need to be at least two-fault tolerant -- which usually means dual-wound motors, redundant gears, redundant harness, redundant power, comms, and compute. Alternatively, two or more independent robotic systems that are capable of not only replacing cards but also of replacing each other.
- ongoing ground support staff to deal with failures
The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
I've had actual, real-life deployments in datacentres where we just left dead hardware in the racks until we needed the space, and we rarely did. Typically we'd visit a couple of times a year, because it was cheap to do so, but it'd have been totally viable to let failures accumulate over a much longer time horizon.
Failure rates tend to follow a bathtub curve, so if you burn-in the hardware before launch, you'd expect low failure rates for a long period and it's quite likely it'd be cheaper to not replace components and just ensure enough redundancy for key systems (power, cooling, networking) that you could just shut down and disable any dead servers, and then replace the whole unit when enough parts have failed.
Exactly what I was thinking when the OP comment brought up "regular launches containing replacement hardware". This is easily solvable by "treating servers as cattle and not pets": simply over-provision servers, then replace the faulty ones around once per year.
Side note: Thanks for sharing the "bathtub curve" - TIL, and I'm surprised I hadn't heard of it before, especially since it's central to reliability engineering (searching HN via Algolia, no post about the bathtub curve has crossed 9 points).
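To put rough numbers on the over-provisioning idea, here is a minimal sketch; the 3% independent annual failure probability after burn-in and the 1,000-server capacity target are invented for illustration, not taken from the analysis.

    from math import comb

    # How many servers to launch so that at least `needed` are still alive after a
    # year with no servicing? (Both numbers below are assumptions.)
    p_fail = 0.03      # assumed post-burn-in annual failure probability per server
    needed = 1000      # capacity we actually want to keep online

    def p_enough(total: int) -> float:
        """P(at least `needed` of `total` independent servers survive the year)."""
        p_live = 1.0 - p_fail
        return sum(comb(total, k) * p_live**k * p_fail**(total - k)
                   for k in range(needed, total + 1))

    for total in (1020, 1040, 1060, 1080):
        print(total, f"{p_enough(total):.4f}")

Running it shows the survival probability climbing steeply once the spare pool comfortably exceeds the expected failure count, which is the quantitative version of "just ensure enough redundancy".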
Wonder if you could game that in theory by burning in the components on the surface before launch, or if the launch would cause a big enough spike in failures from vibration damage that it's not worth it.
I suspect you'd absolutely want to burn in before launch, maybe even including simulating some mechanical stress to "shake out" more issues, but it is a valid question how much burn in is worth doing before and after launch.
Vibration testing is a completely standard part of space payload pre-flight testing. You would absolutely want to vibe-test (no, not that kind) at both a component level and fully integrated before launch.
The analysis has zero redundancy for either servers or support systems.
Redundancy is a small issue on Earth, but completely changes the calculations for space because you need more of everything, which makes the already-unfavourable space and mass requirements even less plausible.
Without backup cooling and power one small failure could take the entire facility offline.
And active cooling - which is a given at these power densities - requires complex pumps and plumbing which have to survive a launch.
The whole idea is bonkers.
IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
I have no idea if that's any more economic, but at least it solves the most obvious redundancy and deployment issues.
> The analysis has zero redundancy for either servers or support systems.
The analysis is a third party analysis that among other things presumes they'll launch unmodified Nvidia racks, which would make no sense. It might be this means Starcloud are bonkers, but it might also mean the analysis is based on flawed assumptions about what they're planning to do. Or a bit of both.
> IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
Other than against physical strikes, this would get you significantly less redundancy than building the same redundancy into a single unit and letting you control what feeds what, the same way we have smart, redundant power supplies and cooling in every data center (and in the racks they're talking about using as the basis).
If power and cooling die faster than the servers, you'd either need to overprovision or shut down servers to compensate, but it's certainly not all or nothing.
Short version: make a giant pressure vessel and keep things at 1 atm. Circulate air like you would on earth. Yes, there is still plenty of excess heat you need to radiate, but it dramatically simplifies things.
even a swarm of satellites has risk factors. we treat space as if it were empty (it's in the name) but there's debris left over from previous missions. this stuff orbits at a very high velocity, so if an object greater than 10cm is projected to get within a couple kilometers of the ISS, they move the ISS out of the way. they did this in April and it happens about once a year.
the more satellites you put up there, the more it happens, and the greater the risk that the immediate orbital zone around Earth devolves into an impenetrable whirlwind of space trash, aka Kessler Syndrome.
serious q: how much extra failure rate would you expect from the physical transition to space?
on one hand, I imagine you'd rack things up so the whole rack/etc moves as one into space, OTOH there's still movement and things "shaking loose" plus the vibration, acceleration of the flight and loss of gravity...
I suspect the thermal system would look very different from a terrestrial component. Fans and connectors can shake loose - but do nothing in space.
Perhaps the server would be immersed in a thermally conductive resin to avoid parts shaking loose? If the thermals are taken care of by fixed heat pipes and external radiators - non thermally conductive resins could be used.
Connectors have to survive the extreme vibration of a rocket launch. Parts routinely shake off boards in testing even when using non-COTS space rated packaging designed for extreme environments. That amplifies the cost of everything.
The Russians are the only ones who package their unmanned platform electronics in pressure vessels. Everyone else operates in vacuum, so no fans.
The original article even addresses this directly. Plus hardware turns over fast enough that you'll simply be replacing modules containing a smattering of dead servers with entirely new generations anyway.
It would be interesting to see if the failure rate across time holds true after a rocket launch and time spent in space. My guess is that it wouldn’t, but that’s just a guess.
I think it's likely the overall rate would be higher, and you might find you need more aggressive burn-in, but even then you'd need an extremely high failure rate before it's more efficient to replace components than writing them off.
The bathtub curve isn’t the same for all components of a server though. Writing off the entire server because a single ram chip or ssd or network card failed would limit the entire server to the lifetime of the weakest part. I think you would want redundant hot spares of certain components with lower mean time between failures.
We do often write off an entire server because a single component fails, because the lifetime of the shortest-lived components is usually long enough that even on Earth, with easy access, it's often not worth the cost to try to repair. In an easy-to-access data centre, the components most likely to get replaced would be hot-swappable drives or power supplies, but it's been about 2 decades since the last time I worked anywhere where anyone bothered to check for failed RAM or failed CPUs to salvage a server. And a lot of servers don't have network devices you can replace without soldering, and haven't for a long time outside of really high-end networking.
And at sufficient scale, once you plan for that, it means you can massively simplify the servers. The amount of waste a server case suitable for hot-swapping drives adds, if you're not actually going to use the capability, is massive.
I'd naively assume that the stress of launch (vibration, G-forces) would trigger failures in hardware that had been working on the ground. So I'd expect to see a large-ish number of failures on initial bringup in space.
Electronics can be extremely resilient to vibration and g forces. Self guided artillery shells such as the M982 Excalibur include fairly normal electronics for GPS guidance. https://en.wikipedia.org/wiki/M982_Excalibur
On the ground vibration testing is a standard part of pre-launch spacecraft testing. This would trigger most (not all) vibration/G-force related failures on the ground rather than at the actual launch.
The big question mark is how many failures you cause and catch on the first cycle and how much you're just putting extra wear on the components that pass the test the first time and don't get replaced.
Appreciate the insights, but I think failing hardware is the least of their problems. In that underwater pod trial, MS saw lower failure rates than expected (nitrogen atmosphere could be a key factor there).
> The company only lost six of the 855 submerged servers versus the eight servers that needed replacement (from the total of 135) on the parallel experiment Microsoft ran on land. It equates to a 0.7% loss in the sea versus 5.9% on land.
6/855 servers over 6 years is nothing. You'd simply re-launch the whole thing in 6 years (with advances in hardware anyways) and you'd call it a day. Just route around the bad servers. Add a bit more redundancy in your scheme. Plan for 10% to fail.
That being said, it's a complete bonkers proposal until they figure out the big problems, like cooling, power, and so on.
Indeed, MS had it easier with a huge, readily available cooling reservoir and a layer of water that additionally protects (a little) against cosmic rays, plus the whole thing had to be heavy enough to sink. An orbital datacenter would be in the opposite situation: all cooling is radiative, many more high-energy particles, and the whole thing should be as light as possible.
> In that underwater pod trial, MS saw lower failure rates than expected
Underwater pods are the polar opposite of space in terms of failure risks. They don't require a rocket launch to get there, and they further insulate the servers from radiation compared to operating on the surface of the Earth, rather than increasing exposure.
The biggest difference is radiation. Even in LEO, you will get radiation-caused Single Events that will affect the hardware. That could be a small error or a destructive error, depending on what gets hit.
Had they said "the array will be so large it'll have its own gravity." then you'd be making a valid point.
But they didn't say just "gravity", they said "gravity well".
> "First, let us simply define what a gravity well is. A gravity well is a term used metaphorically to describe the gravitational pull that a large body exerts in space."
So they weren't suggesting that it will be big enough to get past some boundary below which things don't have gravity, just that smaller things don't have enough gravity to matter.
Given all mass has gravity, and gravity can be metaphorically described by a well, all mass has a gravity well. It is not necessary for mass to capture other mass in its gravity. A well is a pleasant and relative metaphor humans can visualize - not a threshold reached after a certain mass.
"Large" is almost meaningless in this context. Douglas Adams put it best
> Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist, but that's just peanuts to space.
From an education site:
> Everything with mass is able to bend space and the more massive an object is, the more it bends
They start with an explanation of a marble compared to a bowling ball. Both have a gravity well, but one exerts far more influence.
As mentioned in the article, the Starcloud design requires solar arrays that are ~2x more efficient than those deployed on the ISS. Simply scaling them up introduces more drag and weight problems, as do the batteries needed to cover the ~45 minutes of darkness the satellite will see each orbit.
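For a sense of scale on the battery side, a minimal back-of-the-envelope sketch; the 40 MW load (the figure discussed elsewhere in this thread), the 45-minute eclipse, and ~300 Wh/kg cell-level specific energy are all assumptions.

    # Battery mass to ride through one eclipse, idealised: no depth-of-discharge
    # limits, packaging, thermal management, or degradation margin included.
    load_mw = 40.0           # assumed datacenter load
    eclipse_h = 0.75         # ~45 minutes of darkness per orbit
    cell_wh_per_kg = 300.0   # optimistic cell-level specific energy

    energy_mwh = load_mw * eclipse_h                        # ~30 MWh per eclipse
    cell_mass_t = energy_mwh * 1e6 / cell_wh_per_kg / 1e3   # ~100 tonnes of cells
    print(f"{energy_mwh:.0f} MWh -> roughly {cell_mass_t:.0f} t of cells")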
>The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
And once you remove all the moving parts, you just fill the whole thing with oil rather than air and let heat transfer more smoothly to the radiators.
Oil, like air, doesn't convect well in 0 g; you'll need pretty hefty pumps and well-designed layouts to ensure no hot spots form. Heat pipes are at least passive and don't depend on gravity.
A light oil has a density of 700kg per cubic meter. Most common oils are denser.
Then you'd need vanes, agitators, and pumps to keep the oil moving around without forming eddies. These would need to be fairly bulky compared to fans and fan motors.
I'd have to see what an engineering team came up with, but at first glance the liquid solution would be much heavier and likely more maintenance intensive.
I used to build and operate data center infrastructure. There is very limited reason to do anything more than a warranty replacement on a GPU. With a high-quality hardware vendor that properly engineers the physical machine, failure rates can be contained to less than 0.5% per year. Particularly if the network has redundancy to avoid critical mass failures.
In this case, I see no reason to perform any replacements of any kind. Proper networked serial port and power controls would allow maintenance for firmware/software issues.
On Earth we have skeleton crews maintain large datacenters. If the cost of mass to orbit is 100x cheaper, it’s not that absurd to have an on-call rotation of humans to maintain the space datacenter and install parts shipped on space FedEx or whatever we have in the future.
If you want to have people you need to add in a whole lot of life support and additional safety to keep people alive. Robots are easier, since they don't die so easily. If you can get them to work at all, that is.
That isn't going to last for much longer with the way power density projections are looking.
Consider that we've been at the point where layers of monitoring & lockout systems are required to ensure no humans get caught in hot spots, which can surpass 100C, for quite some time now.
No, I mean like you crumple to the ground and cook to death if there isn't someone close enough to grab you within a few minutes. 212F ambient air. Like the inside of a meat smoker, but big enough for humans.
DCs aren't quite there yet, but the hot spots that do occur are enough to cause arc flashes, which claim hundreds of lives a year.
This sort of work is ideal for robots. We don't do it much on Earth because you can pay a tech $20/hr to swap hardware modules, not because it's hard for robots to do.
It's all contingent on a factor of 100-1000x reduction in launch costs, and a lot of the objections to the idea don't really engage with that concept. That's a cost comparable to air travel (both air freight and passenger travel).
(Especially irritating is the continued assertion that thermal radiation is really hard, and not like something that every satellite already seems to deal with just fine, with a radiator surface much smaller than the solar array.)
It's all relative. Is it harder than getting 40MW of (stable!) power? Harder than packaging and launching the thing? Sure it's a bit of a problem, perhaps harder than other satellites if the temperature needs to be lower (assuming commodity server hardware) so the radiator system might need to be large. But large isn't the same as difficult.
Neither getting 40 MW of power nor removing 40 MW of heat is easy.
The ISS makes almost 250 kW in full light, so you would need approximately 160 times the solar footprint of the ISS for that datacenter.
The ISS dissipates that heat using pumps that move ammonia through pipes out to a radiator that is a bit over 42 m^2. Assuming the same level of efficiency, that's roughly 6,700 m^2 of radiator that needs empty space to dissipate to.
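As a cross-check from first principles, here is an idealised Stefan-Boltzmann estimate of the radiator area; the 40 MW load, 300 K panel temperature, 0.9 emissivity, two-sided radiation, and zero view factor to the Sun and Earth are all assumptions, and because rejection scales with T^4 the area shrinks a lot if the radiators can run hotter than the electronics would like.

    # Idealised radiator sizing: area needed to reject `heat_w` purely by radiation.
    SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    heat_w = 40e6         # assumed 40 MW of heat to reject
    emissivity = 0.9
    t_panel_k = 300.0     # assumed panel temperature
    sides = 2             # a flat panel radiates from both faces

    flux_w_m2 = sides * emissivity * SIGMA * t_panel_k**4   # ~830 W/m^2
    print(f"{heat_w / flux_w_m2:,.0f} m^2")                 # tens of thousands of m^2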
Well sure. If you think fully reusable rockets won’t ever happen, then the datacenter in space thing isn’t viable. But THAT’S where the problem is, not innumerate bullcrap about size of radiators.
(And of course, the mostly reusable Falcon 9 is launching far more mass to orbit than the rest of the world combined, launching about 150 times per year. No one yet has managed to field a similarly highly reusable orbital rocket booster since Falcon 9 was first recovered about 10 years ago in 2015).
I suspect they'd stop at automatic rendezvous & docking. Use some sort of cradle system that holds heat fins, power, etc that boxes of racks would slot into. Once they fail just pop em out and let em burn up. Someone else will figure out the landing bit
I won't say it's a good idea, but it's a fun way to get rid of e-waste (I envision this as a sort of old persons' home for parted-out supercomputers).
Thanks for the thorough comment—yes, the heat pipes etc haven’t been accounted for. Might be a future addition but the idea was to look at some key large parts and see where that takes us in terms of launch. The pipes would definitely skew the business case further. Similarly, the analysis is missing trusses.
Don’t even get me started on the costs of maintenance. I am sweating bricks just thinking of the mission architecture for assembly and how the robotic system might actually look. Unless there’s a single 4 km long deployable array (of what width?), which would be ridiculous to imagine.
Don’t you need to look at different failure scenarios or patterns in orbit due to exposure to cosmic rays as well?
It just seems funny; I recall that when servers started getting more energy dense, it was a revelation to many computer folks that safe operating temps in a datacenter should be quite high.
I’d imagine operating in space has lots of revelations in store. It’s a fascinating idea with big potential impact… but I wouldn’t expect this investment to pay out!
What if we just integrate the hardware so it fails softly?
That is, as hardware fails, the system loses capacity.
That seems easier than replacing things on orbit, especially if Starship becomes the cheapest way to launch to orbit, because Starship launches huge payloads, not a few rack-mounted servers.
Space is very bad for the human body, you wouldn't be able to leave the humans there waiting for something to happen like you do on earth, they'd need to be sent from earth every time.
Also, making something suitable for humans means having lots of empty space where the human can walk around (or float around, rather, since we're talking about space).
Underwater welder, though being replaced by drone operator, is still a trade despite the health risks. Do you think nobody on this whole planet would take a space datacenter job on a 3 month rotation?
I agree that it may be best to avoid needing the space and facilities for a human being in the satellite. Fire and forget. Launch it further into space instead of back to earth for a decommission. People can salvage the materials later.
The problem isn't health “risk”; there are risks, but there are also health effects that will come with certainty. For instance, low gravity depletes your muscles pretty fast. Spend three months in space and you're not going to walk out of the reentry vehicle.
This effect can be somewhat overcome by exercising while in space, but it's not perfect even with the insane amount of medical monitoring the guys up there receive.
Good points. Spin “gravity” is also quite challenging to acclimatize to because it’s not uniform like planetary gravity. Lots of nausea and unintuitive gyroscopic effects when moving. It’s definitely not a “just”
Every child on a merry go round experiences it. Every car driving on a curve. And Gemini tested it once as well. It’s a basic feature of physics. Now why NASA hasn’t decided to implement it in decades is actually kind of a mystery.
The Jamestown colonists starved to death literally living on the shore of the most productive marine environment on earth. They didn’t know how to care for the fishing nets, so they rotted, and then didn’t know how to fix them.
The issue was that many of the colonists were second sons of relatively wealthy families, and weren’t all that familiar with fishing or farming. The first son inherited everything, and the second son had to make his way in the world, and colonizing was an enticing prospect for making your fortune. Poorer families, at the very early stages, weren’t sending their sons on these ventures because they needed the labor at home.
As someone who grew up next to Jamestown, I can add some context.
John Smith, one of Jamestown's leaders, was not from a wealthy or privileged background. "The issue" may have been less about class and more about poor organization, leadership and unrealistic expectations.
Fishing and farming skills also deserve context. The soil around Jamestown was marshy and brackish, unsuitable for traditional English farming methods. Yes, there were lots of fish, but they only ran seasonally (sturgeon etc.). The "starving time" you are referencing was made worse by a drought and by cut-off trade with the Indians.
The soil may have been brackish, but this wasn't their main setback.
The Jamestown colonists didn't even attempt to plant crops for several years after their arrival. Their first ship brought jewelers and smiths to work the gold they assumed they'd find, but didn't have a real plan for agriculture. The majority died of starvation and disease, but the survivors were sustained by meager leftover travel supplies from newly arriving ships, and by raiding neighboring natives for their corn.
Less than a decade later, separatist Pilgrims landed in New England, and by contrast, grew crops immediately, and cultivated diplomatic relations with their neighbors. The Pilgrims settled in a higher latitude with a shorter growing season, but during their first drought they had already stored enough supplies to share with local natives.
Jamestown could have been on a similar footing if they'd prioritized survival and diplomacy over finding treasure for the crown, the chartering company, and themselves.
>The Jamestown colonists didn't even attempt to plant crops for several years after their arrival
Source? I'm pretty sure they planted corn and wheat as soon as they could, in the first month of arrival. "The 15th June we had finished our fort... we had also sown most of our corn on two mountains. It sprang a man's height from the ground." https://en.wikipedia.org/wiki/Edward_Maria_Wingfield
> Less than a decade later, separatist Pilgrims landed in New England, and by contrast, grew crops immediately, and cultivated diplomatic relations with their neighbors.
And, as I understand it, settled into areas which had previously been cleared and cultivated by the natives but had been relatively recently abandoned.
https://discover.hubpages.com/education/The-Pilgrims-and-the...
"The Pilgrims decided to establish their colony in an area that had been cleared and abandoned by the Patuxet Indians. One colonist remarked, “Thousands of men have lived here, which died in a great plague not long since; and pity it was and is to see so many goodly fields, and so well seated, without men to dress and manure the same."
That's one amazing head start. And, had they not had it, the Pilgrims probably would have died, too.
This mostly fails a sniff test to me? And indeed, reading the linked article doesn't support your editorializing. To quote: "There is some evidence that they had poor fishing skills, but other factors may have contributed more to their failures"
The idea that they were not nearly as efficient at building a town as they could have been is not at all surprising. All the more so when you consider just how different the storm season was compared to what they were used to.
But the idea that they failed due to their own inadequacies feels like a stretch? Like, had they "stayed home" what kind of life do you think they had there? People used to have to do far more of their own survival than modern people can really understand.
‘They suffered fourteen nets (which was all they had) to rot and spoil, which by orderly drying and mending might have been preserved. But being lost, all help of fishing perished.’ (25)

(25) Strachey, W. 1998b [1610], ‘A True Reportory of the wrack and redemption of Sir Thomas Gates, knight, upon and from the Islands of the Bermudas; his coming to Virginia, and the estate of that colony then, and after under the government of the Lord La Warre’, in Haile 1998, p. 441
I originally learned this by talking with a Jamestown National Historical Park docent. I said that, having grown up in Virginia in the 20th century and knowing what tidewater Virginia was like in the 17th century, it would have been very hard to starve to death. American chestnut was still the dominant forest tree, and provided literally tons of nuts per tree. Black walnut and acorn were also plentiful and make good survival foods if you know how to prepare them. The Chesapeake Bay had enormous oyster beds, with oysters being described as "the size of dinner plates", and John Smith said that he thought he could have walked across it on the backs of fish, and if you know how to dry or salt fish it doesn't matter that the sturgeon and rockfish are seasonal. Mussels and crab, likewise, would have been plentiful, and unlike fish, accessible year round. Deer, turkey, rabbit, groundhog, squirrel, opossum and raccoon were plentiful, and passenger pigeon were also around, not having suffered the overhunting they did in the early 20th century.
She indicated that the majority of the English settlers weren't farmers or fishermen and didn't have the hands-on experience to make use of the resources at their disposal. I went home and did a bit of internet research on that statement, and it seemed fairly accurate.
I do not claim to be a trained historian of colonial Virginia; I just grew up there.
You are still strengthening the claim beyond the paper, is my point. The paper, specifically, has several other explanations beyond "they didn't know how to care for nets."
For example:
> The colonists’ performance in fishing in the first years, in common with all other activities, must also have been severely hampered by their generally poor health, malnutrition and subsequent lack of energy. For a period of five months there are said to have been only five men healthy enough to man the bulwarks of the fort against hostile Virginia Indians. During such difficult times it is likely that fishing would have been restricted and perhaps would have been halted altogether.
That is, it isn't just that they were not "professional fishermen" - something that probably didn't even exist in the modern sense of the word. They were in a much harsher environment than was anticipated.
The low stock of salt and not having the same dry season that they were used to from the other side of the Atlantic almost certainly played much more heavily, as well. (And to be clear, that paper covers these as heavy influences.)
Probably also worth remembering how parasite ridden all of the food supplies you are mentioning would be. Our food supply is supernaturally clean, nowadays.
At any rate, my main gripe here is the mental image of "second sons who didn't know how to do anything" that you conjured. Certainly possible, but it feels far overstated to me. They had managed to survive a ship crossing of the ocean - something that was not a passive cruise.
Like most disasters, there were many causes for this one. The general unpreparedness of the Jamestown settlers is, however, an important one, and probably the primary causative one (although see edit #2 for a strong contrary argument).
> The colonists’ performance in fishing in the first years, in common with all other activities, must also have been severely hampered by their generally poor health, malnutrition and subsequent lack of energy.
Obviously, once you're in the throes of malnutrition and illness, your ability to fish and forage is going to be significantly reduced. But the disaster is already in progress at that point. Why were they already malnourished? In large part because they weren't very good at fishing or farming, and didn't actually plan to survive by farming at all, instead intending to rely on trade with the natives. But they mismanaged diplomatic relations with the natives to the extent that not only was trade non-existent towards the second year, but they were actually being shot on sight. They exhausted their supply of small game on the Jamestown peninsula, and couldn't voyage farther than that due to danger from the native Americans, again due to their own mismanagement of relations.
Note that a primary reason for the poor relationship with the native Americans was that the settlers didn't have their own food sources, and resorted to theft and assault to get the natives' food supplies -- which, as a result of the drought, weren't all that great (source: https://www.loc.gov/classroom-materials/united-states-histor...)
They also didn't have the skills necessary to (for instance) prepare acorns or harvest pine bark cambium. Survival foods would have been foods that noble Englishmen hadn't ever even eaten, much less prepared themselves.
From the Wikipedia article on "The Starving Time":
> Although they did some farming, few of the original settlers were accustomed to manual labor or were familiar with farming. Hunting on the island was poor, and they quickly exhausted the supply of small game. The colonists were largely dependent upon trade with the Native Americans and periodic supply ships from England for their food.
And in point of fact, they actually ended up hiring native Americans to fish and harvest shellfish for them, because they didn't know how to do it on their own. (source: https://virginiahistory.org/learn/oysters-virginia#).
As a consequence of the deteriorating relationship with the natives, the Jamestown colonists' ability to do any land-based (as opposed to water-based) subsistence activities was severely curtailed, and, one assumes, their ability to hire natives to fish for them also eroded. But they did have one major advantage, an actual oceangoing ship that they could have sailed into the Bay and used to fish. The natives had only canoes and could not possibly have constituted a major threat on the waters of the Bay. But that only works if you know how to fish, which they didn't. Once the nets rotted due to the colonists not understanding the importance of drying them, that advantage was also neutralized, and starving was inevitable in the absence of relief supplies from England or the Caribbean.
> Probably also worth remembering how parasite ridden all of the food supplies you are mentioning would be.
Everyone in the colonial period was parasitized to some extent, including the natives. However, the plant-based survival foods I mentioned above (chestnuts, acorns, black walnuts, etc.) are not known for harboring parasites. The animal game certainly would have, but almost certainly not more so than the same game in England would have.
The colonists were ill primarily because they didn't practice good hygiene wrt situating their toilet facilities away from their drinking water and ended up with dysentery, a problem that the native Americans managed to avoid (source: https://encyclopediavirginia.org/the-myth-of-living-off-the-...).
Summary: the original contingent of Jamestown settlers had bad luck (drought, several supply ships being wrecked or otherwise not showing up on time) but their primary problem was that they didn't intend to live off the land at all, either by fishing and farming or by foraging. They didn't have the right supplies to do so, and mostly didn't have the knowledge needed to do it as a backup plan when the original plan of trading with the native Americans failed (due to poor diplomatic skills and poor diplomatic decision making.)
My main issue with this is the heavy leaning on "incompetence" as the explanation. I'm fully OK with the idea that they made mistakes and were not ready for the vastly different climate. I'm even comfortable with the idea that they may have expected to get more in trade than they were able to get.
That said, I think this vastly overstates how much people got their food from trade. Spices and some goods were, of course, big in trade. Staple food? Not so much. Most people were not able to stockpile large quantities of food. Some cities maybe could. But it would have been grains/seeds or actual livestock, not meats or anything that needed refrigeration. For... well, obvious reasons. Even cured meats typically have a very short timeline. So fishing and hunting and basic gardening would have remained something that people had to do, pretty much everywhere.
And indeed, this is in line with the article you edited in. What were they trying to trade for? Corn. Why did they need to get it by trade? Because their crop was bad. Why was the trade not working well? Because nobody had excess corn to trade. Long-term stockpiles just couldn't exist at the scale we think of today, and a lack of rain meant everyone was having a bad crop.
Finally, I want to be clear that I'm perfectly comfortable with the idea that I'm just flat out wrong here. I've just grown super doubtful of a lot of the "these morons were able to sail across the Atlantic, but were too stupid to do anything that would have resembled living off the land" takes. Despite the fact that most people's lives at the time looked like what we would now call living off the land.
Two things… one, they didn’t sail across the Atlantic. They hired ships and professional sailors to sail them across. They were passengers. That crossing wasn’t necessarily an easy one, but it was much more like what happens today with wealthy people who pay sherpas to help them climb Everest. The climbers have to have some knowledge and experience, but they aren’t the experts, and without the sherpas they'd be pretty lost.
Second, the point of this whole thread is that even at home, these were not people who were living off the land. They were wealthy Londoners. They lived in the city. They weren’t even raising their own kitchen gardens, they had people for that.
Wealthy Londoners bought their food just like you and I do. They had food markets. They used currency to buy grain, vegetables, and meat.
FYI, salted meat and fish will last for years and, if stored in a reasonably cool place like a root cellar, for decades. I personally have had Virginia hams that were over ten years old. Dried corn will last for centuries if stored properly.
The reason the settlers made so many diplomatic mistakes with the natives was because their leadership was primarily former military, and they saw the natives as a military problem. This made some sense because when they set out, they thought their primary challenge was going to be fending off military attacks from the Spanish. But that assumption turned out to be tragically wrong.
I'm not saying these people were all incompetent buffoons. Some of them were trained military officers. Some were craftsmen — there’s ample evidence of metal work and glassmaking at Jamestown. They were all experienced horsemen, and they were comfortable with firearms and bladed weapons. But what they weren’t was outdoorsmen, or even farmers, and in hindsight that’s what they needed to be. Once they got actual farmers on site, their immediate problems started to clear up.
I can grant that very dense cities such as London had food markets necessary to supply the city. With the huge caveat that those markets had to have fairly rapid turnover for all of their offerings. Don't forget that most city dwellers had maybe a single shared room with others that they could call home. Such that they likely consumed the food on purchase, with nowhere else to really take it. Even "wealthy" dwellers that did have a place to take food likely couldn't take much. Where do you think they would be able to store it?
And passenger ships in the 1600s were very very different than any sort of passenger ship today. Sure, they were not responsible for ship duties. But they were likely on their own for basic survival on the ship.
Salted meat and fish can last for years when stored using modern containment techniques, sure. With what they would have had in the 1600s, I have serious doubts that you'd get such results. Especially without the resources of a full city at your disposal.
Again, I'm comfortable that I can be wrong on all of this. I would have to see much stronger proof, though. Most of what we call "living off the land" today was largely "typical rural life" for a long long time.
Always the same excuse, that it wasn't "real" communism.
Quoted from your reference:
"Communism is ... a stateless, classless society where resources are owned communally" which, if you read about Jamestown, was the situation with their agriculture.
Jamestown was hardly a dictatorship of the proletariat where workers owned the means of production, nor was it stateless or classless. It was quite literally a strong state's company town that was kept afloat by investors, where rich colonists had servants that grew cash crops lol
The farming was done as a communal activity. Jamestown abandoned it after the first year, and switched to the far more successful system of privately owned plots where the owner could sell the produce as he saw fit.
> "Communism is ... a stateless, classless society where resources are owned communally" which, if you read about Jamestown, was the situation with their agriculture.
A group owning or sharing one resource does not imply communism.
Is this how you read theorems or design programs? There is a difference between "one", "some", "all", and "most" - the statement you quote does not remotely prove your claim (which is, of course, nonsense).
Likely every country on earth has some (or one, or many) communally owned, shared resources. Yet it's ludicrous to claim all countries are communist.
The US has co-ops, community gardens, ESOPs, community-owned wifi places, power companies, housing projects, roads, lands - so now the US is communist, since it has orders of magnitude more communal items than Jamestown, right?
And, also from the words you quote, "... a stateless... society..." Not a single country most would call communist is stateless, so now, by your limited and flawed reasoning, there are no communist societies. Yet this too is silly.
And Jamestown was certainly not stateless either (or classless - they also had slaves - so it's baffling how you pick out a few words you like and ignore a lot more to get to your conclusion).
> "....which, if you read about Jamestown..."
... you would realize.
You're looking for boogey-men where they don't exist.
> Likely every country on earth has socially some (or one, or many) communally owned, shared resources
Usually their inefficiency and failure is propped up by taxpayer money. The more of the country that is communally owned, the more communist it is, and the more failure, until there isn't enough taxpayer money to subsidize it and the country fails.
In your reading of Jamestown, what's your explanation of why they abandoned collective farming after the first year? And the Pilgrims did the same?
It's funny how the father of communism was basically an intellectual who leeched off of his wealthier capitalist relatives (the family behind Philips NV).
You need some technical specs on the website. How many DOF does it have? Does it have joint angle sensing? If so, what's the resolution? What's the interface to the servos? What's the payload capacity? Does it have integrated motor controllers? How long is it, and what does the dexterous workspace look like?
As a roboticist, what I'd vote for, in order, is:
- more degrees of freedom
- interchangeable tools, either an actual tool changer (unlikely at the price point) or a fixed bolt pattern with electronic passthroughs
- better joint sensing, e.g. absolute encoders, joint torque sensing
Thank you for the feedback! Thinking out loud:
• Adding one DOF to match ARX kinematics is doable, with a price increase of $30–40.
• A tool changer is a great suggestion. A few of my friends are working on kinematic couplings, which would be ideal for this. I’ll need to give some thought to how to pass electrical signals and power to the tool, while also keeping it lightweight.
• Could you share what functionality you want in terms of encoders? The ST3215 uses 12-bit magnetic encoders, which can retain position after power loss. Are you looking for higher resolution? For torque sensing, if the order volume is large, I can add this for just a $20-30 price increase.
• Fingertip force sensing: Is this for applications like picking up an egg?
> • Adding one DOF to match ARX kinematics is doable, with a price increase of $30–40.
You need at least six non-redundant DOF to arbitrarily position the end effector in space: three for x-y-z translation and an additional three for roll-pitch-yaw. For research-grade arms, I typically want at least a 7 DOF arm, which gives you a lot of cool abilities, most importantly the ability to work around kinematic singularities, and makes the inverse kinematics problem nontrivial in interesting ways. I understand you're hitting a price point, and each additional DOF costs money. I personally would pay for additional DOF. Maybe a modular design?
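A small numerical illustration of the DOF point, using invented joint placements (not the actual SO-101 geometry): with five revolute joints the spatial Jacobian can never reach rank 6, so some end-effector motions are simply unavailable, while a sixth well-placed axis restores full rank at a generic configuration.

    import numpy as np

    def joint_screw(axis, point):
        """Spatial screw for a revolute joint: [omega; v], with v = -omega x q."""
        w = np.asarray(axis, float)
        q = np.asarray(point, float)
        return np.concatenate([w, -np.cross(w, q)])

    # Made-up 5-DOF layout (base yaw, shoulder pitch, elbow pitch, wrist pitch,
    # wrist roll) at a generic, non-stretched configuration.
    cols = [
        joint_screw([0, 0, 1], [0.00, 0, 0.05]),
        joint_screw([0, 1, 0], [0.00, 0, 0.10]),
        joint_screw([0, 1, 0], [0.15, 0, 0.22]),
        joint_screw([0, 1, 0], [0.25, 0, 0.18]),
        joint_screw([1, 0, 0], [0.30, 0, 0.18]),
    ]
    print("5-DOF rank:", np.linalg.matrix_rank(np.column_stack(cols)))  # 5 < 6

    cols.append(joint_screw([0, 0, 1], [0.30, 0, 0.18]))                # add wrist yaw
    print("6-DOF rank:", np.linalg.matrix_rank(np.column_stack(cols)))  # 6: full pose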
> • A tool changer is a great suggestion. A few of my friends are working on kinematic couplings, which would be ideal for this. I’ll need to give some thought to how to pass electrical signals and power to the tool, while also keeping it lightweight.
Yeah, typically with industrial tool changers there are spring loaded pins on the tool changer that hit pads or insert into sockets on the tool side. There will also typically be a ball detent for positive locking that is driven by a motor in the end effector. But even just a passive mounting plate and a documented connector interface would be huge.
> • Could you share what functionality you want in terms of encoders? The ST3215 uses 12-bit magnetic encoders, which can retain position after power loss. Are you looking for higher resolution? For torque sensing, if the order volume is large, I can add this for just a $20-30 price increase.
You take what you can get with encoders. Ideally, you want an encoder that uses Gray code, so it always knows exactly where it is no matter what. But for cost reasons this is rarely done, and you get what is essentially a relative encoder, and you have to count the steps. The reason the former is preferable is that it doesn't rely on the microcontroller keeping up with the encoder, so there's no issue if you miss counts. But, again, those are as far as I know a significant step up in cost.
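For what Gray code buys you, a tiny illustration (nothing here is specific to the ST3215, whose internal encoding I don't know): adjacent positions differ in exactly one bit, so a read taken mid-transition is off by at most one count rather than garbage.

    def to_gray(n: int) -> int:
        """Binary-reflected Gray code: neighbouring values differ in exactly one bit."""
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        """Decode by cascading XORs of progressively shifted copies."""
        n = g
        while g:
            g >>= 1
            n ^= g
        return n

    # Check the single-bit-change property over a 12-bit range (like a 12-bit encoder).
    for i in range(2**12 - 1):
        assert bin(to_gray(i) ^ to_gray(i + 1)).count("1") == 1
        assert from_gray(to_gray(i)) == i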
You'd also ideally add torque sensing at the joints because it opens up a whole world of control techniques that you can't get with just joint position sensing. You can do compliance or force control, which lets the arm act as if it had a spring at the joints, so when it hits something the impact is nice and gentle, and importantly, so you can do things like e.g. a bolt insertion task where you have to control the position of the arm in x and y but you want to exert a small positive insertion force in z.
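As a concrete (and heavily simplified) example of why joint torque access matters, here is a sketch of hybrid position/force control for an insertion-style task; all gains are invented, and jacobian() and gravity() are hypothetical helpers a real driver would have to supply.

    import numpy as np

    KP = np.diag([400.0, 400.0, 0.0])   # stiff position control in x/y, none in z
    KD = np.diag([40.0, 40.0, 10.0])    # damping in all three Cartesian axes
    KF = 0.5                            # gain on the z-axis force error
    F_DES_Z = -2.0                      # desired gentle downward contact force [N]

    def joint_torques(q, x, xdot, f_meas_z, x_des, jacobian, gravity):
        """Track x/y position, but regulate contact force (not position) along z."""
        f_cmd = KP @ (x_des - x) - KD @ xdot             # virtual spring-damper at tip
        f_cmd[2] += F_DES_Z + KF * (F_DES_Z - f_meas_z)  # push until force matches
        J = jacobian(q)                                  # 3xN translational Jacobian
        return J.T @ f_cmd + gravity(q)                  # joint torques + gravity comp

Without torque sensing (or at least decent current control) there's no clean way to realise those commanded torques, which is why position-only servos limit you to stiff trajectory tracking.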
> • Fingertip force sensing: Is this for applications like picking up an egg?
Yes, but even for picking up rigid objects this turns out to be very useful. If you're picking up an egg, you want to exert a controlled positive grip force that's big enough that you don't slip but not so big that you crack the egg. If you're picking up a bolt, you definitely won't break it, but many robots are strong enough to deform the threads. If you're picking up something slippery, it would be great to detect the slip by touch. And so on. Often, you don't know exactly how big the object is or how flexible/brittle it is, and it's hard to judge by vision alone whether the fingers are even in contact with it, or if they are, how much it's being deformed, so being able to control grip force is very useful. Add force and position sensing to the grippers and you can judge how deformable the object is and make decisions accordingly.
Or if you're folding clothes or handling cables or wires or anything else flexible, you really need to have a sense of touch. You can't really do these tasks very well with position sensing and vision alone.
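And a toy version of the grip-force idea; every constant is invented, and it assumes a gripper that reports fingertip normal force plus some slip/shear signal.

    F_MAX = 15.0        # hard cap on squeeze force [N]
    KP_GRIP = 0.3       # gain on the force-tracking error
    SLIP_LIMIT = 0.5    # arbitrary threshold on the slip signal

    def grip_update(f_measured, f_target, slip_signal, cmd):
        """One control tick: track f_target, squeezing harder if the object slips."""
        if slip_signal > SLIP_LIMIT:
            f_target = min(f_target * 1.2, F_MAX)     # escalate grip, up to the cap
        cmd += KP_GRIP * (f_target - f_measured)      # crude integral-style tracking
        return min(max(cmd, 0.0), F_MAX), f_target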
Another idea: Maybe add a passive mounting adapter and power leads at the end effector so people can add their own vision or lidar sensors, and just let them connect via bluetooth, so you don't have to route signal cables?
FYI, I am a space roboticist by trade and I teach a graduate level class in robotics at the University of Maryland.
Also, for the type of work I'd do with an arm like this, I'd be more than happy to just have the follower arm. You need a leader arm to do some types of teleoperation or imitation learning, but not really to do reinforcement learning or learn about control theory.
What you do need is an articulated rigid body model that you can import into e.g. NVIDIA Isaac Lab or Gazebo. The availability of a good digital model is a HUGE selling point.
Pardon my naïve imagination, but would stackable joints work - with the same connector as the extremity tooling? The joint would be a standard piece, and more degrees of freedom would just mean stacking additional joints. I suppose this has already been thought about...
> You need some technical specs on the website. How many DOF does it have? Does it have joint angle sensing? If so, what's the resolution? What's the interface to the servos? What's the payload capacity? Does it have integrated motor controllers? How long is it, and what does the dexterous workspace look like?
The post says "kit that keeps LeRobot SO-101’s kinematics" so it's probably very similar to [1] namely 5DOF and a gripper, using STS3215 servos [2]
> As a roboticist, what I'd vote for, in order, is:
As they are making a robot at the $219 price point, I very much doubt they have the money to add anything to the design.
Thank you for stepping in. Yes, it’s 5 DOF and a gripper using ST3215 (12V for the follower arms and 7.4V—various gear ratios—for the leader arms).
As for hardware features, we can’t add much to the current model since, as you mentioned, we are running on very thin margins. We’re gathering suggestions primarily for future models.
“I’d love your feedback! Beyond manufacturing, cleaning up the codebase, and writing docs, I’m considering: a force-controlled gripper, a parallel-jaw gripper, an extra wrist DOF (matching the new Trossen and ARX arms), full force feedback on the leader arm (though that may triple the price), a more affordable version with lower resolution each joint, and a longer-reach variant. Which of these—or something else—would be most useful to you?”
Are there any affordable robot kits you recommend for learning control, CV, RL etc.? I was budgeting for the SO-101 so I think I'll get OP's device and then something that's not an arm for variety.
Yep, and in 1986 I had just interned at a NASA lab where they were investigating multiprocessors, which at the time was a wild and crazy idea. I had the epiphany that I could bit-bang a driver between a C64 and one or more 1541s and make my own little multiprocessor, so I did. I made it all the way to the International Science Fair and ended up getting a college scholarship. I've written a lot of code but I'm probably the proudest of that couple of hundred lines of 6502 assembly I wrote when I was 17.
I mean breakthroughs like ChatGPT, etc. that were not possible back then computationally w.r.t. mass adoption and upending real industries. Are we entering a new age of ubiquitous AI computing?
So my kid has ARFID. I am not a doctor, but what I have learned is that eventually, anything that causes nausea associated with eating can progress into ARFID, even if the original underlying cause resolves. ARFID is technically an eating disorder, like anorexia, but it isn't related to poor body image; it is, basically, a food aversion to, well, food. All food, or nearly all. This is what happened to my kid; there’s an underlying disorder that can be treated with meds, but when not treated causes nearly constant nausea. Once we diagnosed and treated the underlying cause, the nausea didn’t go away. My kid's brain had learned to associate eating with being sick, and that association persisted even when the original illness resolved.
My kid at one point was admitted to the hospital for two weeks because said kid had lost so much weight. They inserted a gastric tube, and kid discovered that kid did not become nauseous when fed through the tube. We knew kid had ARFID, but this was an “a-ha” moment for kid, because it showed kid irrefutable proof that the problem was not a physical issue with kid's gut. It was very clearly related to the experience of eating. Kid has subsequently learned to “eat through the nausea” as described in the post.
That's what this sounds like. There was likely an underlying physical cause of the nausea; that cause may or may not have resolved, but the nausea is now its own thing. The OP indicates a series of consults with a behaviorist; I would imagine being screened for eating disorders is what that was about, but ARFID is not a common eating disorder, and may or may not have been considered.
Fwiw: Strenuous winter activities (eg mountain hiking and camping) can more than double baseline calorie demand. Sitting down to a base camp meal afterward can be a body-has-a-mind-of-its-own saliva-gushing "FEED ME NOW!!!" experience of really-need-to-pee intensity. Normally unappealing food commonly becomes just fine - uncooked pasta, blocks of lard, whatever. So I wonder... what happens if food-is-nauseating is repeatedly hit with a hammer of the "are you going to eat that vomit, or can I please have it? - it looks quite yummy" of extreme calorie appeal?
Another thought is the breastfeeding maternal-diet envelope expansion drill, of working outward from some one bland safe thing, experimentally adding a thing at a time, and backing off upon problems.
The absolute best meal I've ever had was a cup of clear chicken broth: my first meal after a month in the ICU without eating. As the old saying goes, hunger is the best seasoning, and as I always say, when you're hungry -- I mean really hungry -- nothing satisfies like... food.
As plausible as this might sound, it's kind of "just man up and power through it" advice. They stated they often don't feel well enough to leave their house.
Your son’s serotonin in his gut may still be high. Please do not assume that the nausea is related to a psychological disorder. My parents did this with me, and it turns out my case was a lot more complicated.
I don’t know if you've tried anti-nausea medication that blocks the serotonin 5-HT3 receptor, but if you try that and the nausea goes away, I would look at increased serotonin release as a culprit instead.
"Research regarding ARFID is still emerging; the role of serotonin in sensory processing and anxiety suggests a potential mechanism through which neurotransmitter dysregulation could influence the disorder. Moreover, sensory processing issues, which are not exclusive to ARFID but are also present in other conditions (eg, autism), may be associated with abnormal serotonin function, further supporting the need to investigate serotonin's role in ARFID"
Another parent with an ARFID kid here, though it sounds like our case is somewhat different. My kid, almost 16 now, has always had an extremely limited diet repertoire, starting at a failure to switch to non-smooth baby food as a toddler. Early on he exhibited extreme fear reactions to being presented with new foods. Nausea has never really been a huge issue, other than gagging if we tried to force him to try something new.
As he's matured, the reactions are less intense, and with a lot of therapy sessions, most recently with a dietician who also has ARFID, we've made real progress. In our case, that means he's (enthusiastically!) eating cheese pizza, scrambled eggs, and chocolate (but not white) milk, along with the bacon which has been his main protein source since age 3 or so.
Not sure what we'll do when he heads off to college in 2.5 years.
Anyway, if you want to compare notes with another ARFID parent, my email's in my profile.