
Around the first .com wave, I used to develop on Windows (C and C++) and deploy on SPARC/PowerPC/PA-RISC servers.

The only x86 servers were running a mix of NT 4.0 and Windows 2000.

Apparently it worked.



Of course it works. Otherwise there would be no ARM servers at all. But it is second class compared to a setup where development and deployment happen on the same platform. Having good ARM machines available on the desktop will give ARM on the server a boost. As I wrote, don't take my word for it, listen to what Linus had to say on that topic.


Ironically, UNIX is exactly the kind of platform where remote development is first class.

One just needs to properly configure mount points and remote sessions.

There was hardly any difference between my X sessions and local development.
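
For example, the classic setup was an NFS mount plus an X session over SSH. A minimal sketch; the server name "buildsrv" and the paths are made up for illustration:

    # /etc/fstab on the workstation: mount the build tree
    # from the (hypothetical) server "buildsrv" over NFS
    buildsrv:/export/home/me  /home/me/src  nfs  rw,hard  0  0

    # log in with X11 forwarding so remote editors and tools
    # display on the local desktop
    ssh -X me@buildsrv
    cd ~/src/project && make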


Sorry, in general that is not true. You need a very good network connection, in both bandwidth and latency, to make remote development workable. Even then, it never equals local development. If you follow the discussions here on Hacker News about which terminal software has the smallest latency, remote development can never compete with that.


Apparently having UNIX servers on premises is a forgotten art.


It is. Even if companies have their own hardware, it is often in separate compute centers. And of course anything on AWS or Azure is not on premises.


Not everyone is FAANG, or pretends to be one.



