Author's Opinion

The views in this column are those of the author and do not necessarily reflect the views of iTWire.


Sunday, 15 June 2014 00:30

Recovering Linux after catastrophic deletion


A recurring Linux joke, or horror story, is running the command rm -rf /. Imagine if it actually happened. What would, or could, you do to recover?

Linux specialist Kyle Kelley recently decided to see what would happen if he launched a new Linux server and ran rm -rf / as root.

This is the remove (delete) command; the -r flag tells it to recurse down through all folders and subfolders, while -f forces deletion even of files that are ordinarily read-only. The / indicates the command is to start from the top-most root directory in Linux.
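The effect of those two flags can be demonstrated safely on a throwaway directory; the paths below are purely illustrative, and this should of course never be pointed at /:

```shell
# Safe illustration of -r and -f on a scratch directory, never on /.
mkdir -p /tmp/demo/sub/subsub          # a small nested tree
touch /tmp/demo/sub/readonly-file
chmod a-w /tmp/demo/sub/readonly-file  # rm would normally prompt before deleting this
rm -rf /tmp/demo                       # -r descends the tree, -f never prompts
[ -e /tmp/demo ] || echo "gone"        # prints: gone
```

Without -f, rm would stop to ask about the write-protected file; without -r, it would refuse to touch the directory at all.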

This command, with these parameters, is the stuff of legend, or at least of practical jokes. While nobody would be foolish enough (one would hope) to run it in a live environment, the threat of doing so has long been a Linux joke. It is the Linux equivalent of deleting every file on a Windows computer's hard disk, but in contrast to Windows it is actually surprising how usable a Linux system can remain after such a disaster, in the right hands.

Actually, as Kelley discovered, modern Linux implementations actively try to prevent such a disaster; GNU rm now refuses to operate recursively on / unless it is also given the explicit --no-preserve-root flag.

Kelley is not the first person to document such a situation; Mario Wolczko of the Department of Computer Science at the University of Manchester wrote of such a problem back in 1986, though with less experimentation, and more catastrophe, as the cause.

Both Kelley and Wolczko found the built-in functionality of the Linux shell to be a massive boon. So, for instance, even though /bin/ls may no longer exist you can still get a directory listing via echo *. This combines the shell's built-in echo command with filename globbing to show the files which remain.
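As a quick sketch (the directory and file names here are invented for the demonstration, and are staged with normal tools), the built-in stands in for ls like so:

```shell
# With /bin/ls gone, echo plus filename globbing still lists a directory.
mkdir -p /tmp/listing-demo            # hypothetical scratch directory
cd /tmp/listing-demo
touch alpha beta gamma                # sample files for the demo
echo *                                # prints: alpha beta gamma
echo .*                               # dotfiles need a separate glob
```

No external binary runs at all: the shell itself expands the * into the matching names and echo, a built-in, prints them.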

By using echo and the Linux I/O redirection operators it is possible to create new files, sending output to disk.
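A minimal sketch, with an invented file name: lacking cat and touch, the built-ins alone can both write a file and read it back.

```shell
# Writing needs only echo and the > redirection operator:
echo 'rebuilding this box from the shell up' > /tmp/rescue-note
# Reading it back needs only the read built-in and input redirection:
while IFS= read -r line; do
    echo "$line"
done < /tmp/rescue-note               # prints the note back out
```

The same read loop is the closest thing to cat a stripped system has left.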

This isn’t limited to text strings; by using escape sequences of the form \xhh – where hh is a two-digit hexadecimal number – you can even write binary data directly to a file.

There is a catch: \x00 doesn’t write a zero byte as you might expect; instead it terminates the echo command’s string. In this case you need to use an octal sequence with echo -ne $'\0000'.
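For instance (file name invented; assumes bash's echo built-in, which supports \xhh with the -e flag):

```shell
# Hex escapes let echo -ne emit arbitrary non-zero bytes:
echo -ne '\x48\x69\x21' > /tmp/bytes   # writes the three bytes "Hi!"
# A literal \x00 would cut the string short, so per Kelley's write-up
# the zero byte is emitted with the octal form instead:
echo -ne $'\0000' >> /tmp/bytes
```

The -n suppresses the trailing newline, which matters when every byte of the output is significant.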

While this is tedious, if you have another system available and can make a hex dump of executable commands, you now have a way to recreate them on your damaged system using only the shell.
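A sketch of that workflow follows; the hex dump here is a stand-in, since on a real rescue you would paste in the output of something like od -An -tx1 run on the healthy machine:

```shell
# Rebuild a binary byte by byte from a hex dump, using only built-ins.
HEXDUMP="7f 45 4c 46"                  # demo: the first four bytes of any ELF file
: > /tmp/rebuilt                       # truncate the target with a built-in
for byte in $HEXDUMP; do
    echo -ne "\x$byte" >> /tmp/rebuilt
done
```

Tedious indeed, but the loop is doing real work: each iteration appends exactly one byte of the original binary to the file being reconstructed.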

Of course, it still isn’t all plain sailing. Your newly created file is not actually executable. Nevertheless, writing over an existing executable file can do the trick: you can completely replace its contents and the executable bits will remain. Perhaps the chmod command might be the first command to recreate in this fashion.
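A hypothetical demonstration of the trick; the victim path is invented, and the setup deliberately uses normal tools just to stage the scenario:

```shell
# Stage a stand-in for some executable that survived the deletion:
echo -e '#!/bin/sh\necho original' > /tmp/victim
chmod +x /tmp/victim
# Now clobber its contents using only redirection. Permission bits are a
# property of the file's inode, not its contents, so the x bit survives:
echo '#!/bin/sh' > /tmp/victim
echo 'echo replaced' >> /tmp/victim
/tmp/victim                            # prints: replaced
```

This works because > truncates and rewrites the existing file in place rather than creating a fresh one with default permissions.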

Reddit user throw_away5046 provided a robust solution to getting an executable bit set, provided you have network access to another Linux system via /dev/tcp and can compile some custom C code.
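The /dev/tcp path is a bash feature rather than a real device, and it can move data with no networking binaries at all. The sketch below is illustrative: the function name is invented, the host and port are placeholders, and it assumes the other machine is already serving the data (for example with nc -l -p 9999 < somefile). A read loop like this suits text; raw binaries need more care.

```shell
# Pull text over the network with bash's /dev/tcp pseudo-device,
# using only shell built-ins on the damaged machine.
fetch_over_tcp() {
    local host=$1 port=$2
    exec 3<>"/dev/tcp/$host/$port"     # bash opens a TCP socket on fd 3
    local line
    while IFS= read -r -u 3 line; do   # stream the data line by line
        echo "$line"
    done
    exec 3<&-                          # close the connection
}
# On the damaged machine this might be used as:
#   fetch_over_tcp 192.168.1.10 9999 > /tmp/rescued-file
```

Combined with the byte-writing tricks above, this is enough to ferry a statically-linked binary across from a healthy system.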

With such power at your fingertips you can, and should, obtain BusyBox, the tiny Swiss army knife of embedded Linux. This single executable provides a wide range of standard commands and utilities.

In fact, once Kelley was able to install BusyBox he had no difficulty recreating the /bin folder, and was well on his way to rebuilding his trashed Linux system.

This experiment demonstrates the need to remain cool and calm under pressure. The first instinct for some in such a disaster may be to reboot, though it is doubtful such a damaged system would boot at all.

While tales of rm -rf / may be apocryphal, genuine disasters do occur, such as a corrupted dynamic linker, which renders every dynamically-linked executable unrunnable.

It is a testament to Linux and to the sharp minds of Linux users that in a seemingly impossible and catastrophic situation there can still be a means to get back to a usable system.




David M Williams

David has been computing since 1984 where he instantly gravitated to the family Commodore 64. He completed a Bachelor of Computer Science degree from 1990 to 1992, commencing full-time employment as a systems analyst at the end of that year. David subsequently worked as a UNIX Systems Manager, Asia-Pacific technical specialist for an international software company, Business Analyst, IT Manager, and other roles. David has been the Chief Information Officer for national public companies since 2007, delivering IT knowledge and business acumen, seeking to transform the industries within which he works. David is also involved in the user group community, the Australian Computer Society technical advisory boards, and education.


