
I have always made /home a separate partition. This makes it so much easier to reinstall and/or wipe out a distro and install a new one. All of my files are left undisturbed.
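Roughly, the new install just needs an /etc/fstab entry pointing at the existing partition, something like this (the UUID and filesystem type here are placeholders):

    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2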

One complication introduced by shared libraries was security: an executable linked against a shared library could be run with a different (updated) library without recompilation.

This was a security threat, especially for SETUID programs. If you could swap in your own library, you could install new code and gain privileged access.

This was why /usr/sbin was created - all of the programs there were linked with static libraries.
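A rough sketch of the kind of attack this guarded against, with invented file and program names (modern dynamic loaders ignore LD_LIBRARY_PATH and LD_PRELOAD for setuid binaries for exactly this reason):

    # hypothetical: put a user-controlled directory first on the library search path
    $ cc -shared -fPIC -o ./libfoo.so my_evil_code.c   # same name as the real library
    $ LD_LIBRARY_PATH=. /usr/bin/some_setuid_program   # old loaders would load ./libfoo.so,
                                                       # running attacker code with elevated privileges

A statically linked binary carries its library code inside itself, so there is nothing for an attacker to substitute at run time.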


Some people think today's file hierarchy is complicated. That's amusing.

I worked at an R&D center where we had hundreds of UNIX systems of all types (Sun, Ultrix, HP, Symbolics, etc.). We also had Sun 2's, 3's, and 4's, each with a different CPU architecture and incompatible binaries. Some Suns had no disks at all. And with hundreds of systems, we literally had a hundred different servers across the entire site.

I would compile a program for a Sun 3 and needed a way to install it once for use on hundreds of computers. Teams of people on dozens of different computers also needed to share files with each other.

This was before SSH. We had to use NFS.

It was fairly seamless and .... interesting.


I did consulting work for a place with Sun 2's and 4's, AIX, HP-UX, Xenix, and SCO. NFS, Netware, and UUCP all cheerfully coexisted on an IPv4 network that used a public Class C for NATed internal use. (It's now a zombie IP range that doesn't do anything useful.)

Later, I wrote code at a university that had even more heterogeneous clusters.

The problem today is siloification and a bias against system diversity, which ignores proper software configuration management and support for multiple platforms. Portability is a dying art.


Summary: The temperature will continue to rise after you remove meat from a grill. The thicker the meat, the more heat it retains and the more the internal temperature continues to climb. You want to cut the meat when the proper temperature is reached.

So if the meat is at the target temperature - slice it right away. If the temperature is below the target temp, and the meat is thick, wait until the target temperature is reached, then slice it.

The trick is knowing how thick the meat is, how much the temperature will continue to rise after removal, and therefore when to remove the meat.


While Gibson is overly pompous, I should point out that SpinRite works below the file system structure, and not all filesystems are as robust as ZFS. Second, there are two main SpinRite modes: Read/Check and Read/Write/Correct. SSDs should obviously never use the second mode. I suppose the first mode might be used to check whether there are problems on an SSD.

SpinRite, the last time I used it, was painfully slow - days or even weeks to run. He's been working on a faster SpinRite 6.1 for at least 10 years now. FWIW, here's the current (2021) roadmap: https://www.grc.com/miscfiles/GRC-Development-Roadmap.pdf


What does Spinrite actually do that is materially different than ddrescue?

And the idea of repairing a failing disk and not just making an image is usually insane.

> I should point out that SpinRite works below the file system structure

This is not the flex that you seem to think it is.


> What does Spinrite actually do that is materially different than ddrescue?

Pretty sure SpinRite repairs/recovers, as you imply in your very next sentence (which I agree with; it's not a good idea to play with the filesystem of a failing disk). I've never used it, though, so I may be wrong.

The real question for me is why somebody would pay for it rather than using ddrescue to get an image, and TestDisk/PhotoRec to do the filesystem recovery (on the image). They're free and very good.

https://en.wikipedia.org/wiki/Ddrescue

https://en.wikipedia.org/wiki/TestDisk
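For anyone curious, a typical flow with the free tools looks roughly like this (device and file names are placeholders):

    # image the failing disk once, with a mapfile so the copy can resume and retry bad areas
    ddrescue -d -r3 /dev/sdX disk.img disk.map
    # then do all recovery work against the image, never the dying drive
    testdisk disk.img
    photorec disk.img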


I had a Team Group SSD that would very occasionally commit a successful write only to be followed by a read failure several weeks/months later. Eventually it got to the point where some blocks just wouldn't read at all (or read back corrupted data), and I ended up getting an RMA replacement.

On the replacement drive I used the badblocks utility to do a destructive write/read test of every sector, to ensure every block on the SSD was fine.
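For reference, the write-mode test was along these lines (/dev/sdX is a placeholder, and this erases everything on the drive):

    badblocks -wsv /dev/sdX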

Probably not best practice, but how do I check that an SSD is fine in the first place, especially a blank SSD? My issue is that reading a blank SSD is likely to work just fine: if no data has been written yet, the controller can presumably short-circuit and just return a zero-filled response. That means the underlying media isn't tested at all, if I'm understanding it correctly.

The first SSD I got seemed fine (even SMART kept parroting that everything was fine, even though some of the more detailed SMART data showed worrying trends), and it was only when I noticed that some files on the NTFS partition were not reading correctly that I started to suspect disk failure. At best it would read "fine" but return corrupted data; over time it started to simply hang on a read and fail.

Luckily I had md5 sums of some of these files and was able to confirm that several of them were corrupted between when the file was written (and the md5sum computed) and several weeks later, which is how I ended up running badblocks on the first drive to confirm the defect. I wish I had used ZFS and not NTFS.


> Probably not best practice but how do I check the SSD is fine in the first place, especially a blank SSD?

The only way that is certain to check the memory cells is to overwrite the whole drive, flush all disk cache (power cycle the system), read all the written bytes, and check that the values read are the same as the values that have been written. This could be accomplished e.g. by setting up encryption on the whole drive on the block level (e.g. on Linux, LUKS), writing zeroes to the open (decrypted) volume, and after power cycle, opening (decrypting) the volume again and checking that all bytes read are zero.

A simpler, less reliable, but still worthy test would be to do the same, except instead of checking the read values, just throwing them away (e.g. on Linux, redirecting to /dev/null). The disk firmware should still try to read all the sectors, and if it is not lying too much, show read problems/reallocated sectors in the SMART data.
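A rough sketch of the LUKS approach on Linux (device and mapping names are placeholders, and luksFormat destroys whatever is on the drive):

    cryptsetup luksFormat /dev/sdX
    cryptsetup open /dev/sdX checkvol
    dd if=/dev/zero of=/dev/mapper/checkvol bs=4M status=progress
    # power cycle the machine, then:
    cryptsetup open /dev/sdX checkvol
    cmp -n $(blockdev --getsize64 /dev/mapper/checkvol) /dev/mapper/checkvol /dev/zero

Because the on-disk data is effectively random ciphertext, any sector the drive corrupts decrypts to garbage rather than zeroes, so cmp flags it.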



The original product was for low-level formatting and checking of WD, MFM, and RLL (pre-IDE) drives, where the drives either had, or could have altered, the magnetic arrangement of tracks and sectors. For example, if you took an MFM drive from an MFM controller and placed it on an RLL controller, DOS wouldn't be able to read, write, or format it. However, SpinRite could low-level reformat it to work, and it would be both faster and higher capacity (thanks to the RLL controller).

> last time I used it, was painfully slow

Yep. Those are the breaks when you format, write, and read every single sector while waiting for it to come around at least three times on a 3600 RPM HDD.


I remember connecting in 1983/84. It was a trial by fire. I started learning Unix by using the Eunice emulator on VMS. I was doing such a good job on documentation (using nroff) that I convinced my company to buy a Sun workstation. I pored through the manual, especially the section on UUCP, and first made a connection to a local college using a modem. I was able to use UUCP to copy the mail and news software. In those days, the standard response to a question was RTFM - in other words, read the source code and follow the instructions. If you couldn't do that, you shouldn't be trying to connect to the 'Net.

So you had to slowly bootstrap yourself in the technology. Once you were able to read and post news, you next needed to send email to people. And that meant you had to master UUCP mail. Unlike domain addressing, a UUCP address was a route: you had to specify each step in the relay. So you might need to name 5 or 6 specific systems to reach the desired person - and hope they could find a route back to you.
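The address was literally the route, something like this (host names invented for illustration):

    bigrelay!campusvax!deptsun!theirlogin

Each name had to be a UUCP neighbor of the one before it, and if any hop was down or misspelled, the mail bounced somewhere along the chain.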

Most of all, the great part of the early days was the respect, and the ratio of information to noise. Inaccurate information didn't remain unchallenged. If you asked a question, it was very likely the author of the technology or program would answer you. Often others would pipe in to answer the simple questions, so the program's creator wouldn't be distracted from important work.

It was a humbling experience, especially when you said something that was factually inaccurate, or technically naive. One learned to think and research before responding to anything.

Until the freshmen classes came to college in September......

Normally there was a very formal process for creating newsgroups, but the alt.* distribution was uncontrolled, and a system with a well-regulated and automated process for newsgroup creation devolved into newsgroup creation/deletion wars.

And then we had the first spam. And then we had trolls. And anti-spam filters caused anti-anti-spam generators to be created. And then the web was implemented.


Usenet wasn't an app. It was a protocol. The programs we used were, as I recall, readnews (the official program) and rn, written by Larry Wall, who later created Perl. Rn was a wonderful interface. Besides blocking and filtering, it had threaded conversations - much like reddit.


I'm not sure I'd define Usenet that specifically. It wasn't the program (I've no memory of what that was called - News or something like that). The protocol was NNTP. Usenet wasn't a term we used a lot, but when we did it kind of referred to the whole News ecosystem, encompassing the software, protocol, forums, and so on.

Kinda like we'd use the term "web" today to encompass HTTPS, browsers, servers, and so on.


The final (Rev 10) version of BusPirate V5 is shipping.


Debugging, prototyping, hacking, and reverse engineering electronics. There are many other boards, such as the Tigard, Bruschetta Board, GreatFET, and Glasgow. Most of them are FT2232H- or FT232-based, wrapping the chip with level shifters, switches, and interfaces.


The thing I love about the Bus Pirate is that you don't need to install any software to use it - just connect to the serial port. The Glasgow is cool as hell, but you have to use Python, and to really master it you have to learn Amaranth HDL to make use of the FPGA.
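For example, on Linux it shows up as a USB serial device, so something like this is all it takes (the device path will vary on your system):

    screen /dev/ttyACM0 115200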

