Recovering Linux after catastrophic deletion

Author's Opinion

The views in this column are those of the author and do not necessarily reflect the views of iTWire.

A recurring Linux joke, and horror story, is running the command rm -rf /. What if it actually happened? What would, or could, you do to recover?

Linux specialist Kyle Kelley recently decided to see what would happen if he launched a new Linux server and ran rm -rf / as root.

This is the remove (delete) command; the -r flag tells it to recurse down through every directory and subdirectory, and -f forces deletion without prompting, even for files that are ordinarily write-protected. The / tells the command to start at the top-most root directory of the filesystem.

This command, with these parameters, is the stuff of legend, or at least of practical jokes. While nobody would (one would hope) be foolish enough to run it in a live environment, the threat of doing so has long been a Linux in-joke. It is the Linux equivalent of deleting every file on a Windows hard disk, but in contrast to Windows it is actually surprising how usable a Linux system can remain after such a disaster, in the right hands.

Actually, as Kelley discovered, modern Linux implementations actively try to prevent such a disaster; GNU rm now also requires the explicit flag --no-preserve-root before it will do this damage.

Kelley is not the first person to document such a situation; Mario Wolczko of the Department of Computer Science at the University of Manchester wrote up a similar ordeal from 1986, though in his case the cause was less experimentation and more genuine catastrophe.

Both Kelley and Wolczko found the built-in functionality of the Linux shell to be a massive boon. For instance, even though /bin/ls may no longer exist, you can still get a directory listing via echo *: this combines the shell's built-in echo command with filename globbing to show the files that remain.
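On an intact machine the same trick can be tried safely; a minimal sketch in a throwaway directory (the path and file names here are illustrative):

```shell
# With /bin/ls gone, the shell's built-in echo plus filename globbing
# can stand in for a directory listing. ':' and '>' are also shell
# built-ins/operators, so no external binaries are needed to set up:
mkdir -p /tmp/glob_demo && cd /tmp/glob_demo
: > alpha
: > beta
echo *          # prints: alpha beta
```

Because globs expand in sorted order, the output is even alphabetised, just as ls would present it.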

By using echo and the Linux I/O redirection operators it is possible to create new files, sending output to disk.

This isn't limited to text strings; by using escape sequences of the form \xhh, where hh is a two-digit hexadecimal number, you can even write binary data directly to a file.

There is a catch: \x00 doesn't write a zero byte as you might expect; instead it terminates the echo command. For a null byte you need the octal form instead: echo -ne '\0000'.
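A short sketch of these echo techniques, run against throwaway paths in /tmp (the file names are illustrative, and the echo here is bash's built-in):

```shell
# Redirection sends echo's output to a brand-new file:
echo 'rescue note' > /tmp/note.txt

# With bash's echo -ne, \xhh escapes emit raw bytes; here the two
# bytes 0x41 0x42, i.e. the ASCII text "AB":
echo -ne '\x41\x42' > /tmp/bytes.bin

# A null byte needs the octal form rather than \x00:
echo -ne '\0000' > /tmp/null.bin     # writes a single 0x00 byte
```

The -n flag suppresses the trailing newline and -e enables escape interpretation, so the file contains exactly the bytes you specify.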

While this is tedious, if you have another system available and can make a hex dump of executable commands, you now have a way to recreate them on your damaged system using only the shell.
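On the surviving machine, a short pipeline can turn any file into the matching string of \xhh escapes ready to retype into echo -ne on the damaged one. A sketch, using a tiny stand-in file rather than a real binary:

```shell
# Make a tiny stand-in "binary" to dump:
printf 'AB' > /tmp/sample.bin

# od prints one hex byte per column; tr strips the whitespace and
# sed prefixes each two-digit pair with \x, producing a string you
# could feed to 'echo -ne' on the damaged system:
od -An -v -tx1 /tmp/sample.bin | tr -d ' \n' | sed 's/../\\x&/g'
# prints: \x41\x42
```

For a real executable the resulting string is enormous, which is why the article calls the process tedious; but it works with nothing more than the shell on the receiving end.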

Of course, it still isn't all plain sailing: your newly created file is not actually executable. Nevertheless, writing over an existing executable file can do the trick, because redirection completely replaces the contents while the file's permission bits, including execute, remain. Perhaps chmod should be the first command to recreate in this fashion.
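The permission-bit behaviour is easy to verify on an intact system. A sketch (paths are illustrative; in a real rescue chmod would not be available, and you would be overwriting some surviving file that already happened to be executable):

```shell
# Start with an executable file:
printf '#!/bin/sh\necho old\n' > /tmp/victim
chmod +x /tmp/victim

# Overwriting with > truncates and replaces the contents, but the
# file's permission bits, including execute, are left untouched:
echo '#!/bin/sh' > /tmp/victim
echo 'echo rebuilt' >> /tmp/victim
/tmp/victim          # prints: rebuilt
```

The execute bit lives on the inode, not in the data, which is why rewriting the contents in place preserves it.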

Reddit user throw_away5046 provided a robust solution to getting an executable bit set, provided you have network access to another Linux system via bash's /dev/tcp pseudo-device and can compile some custom C code.

With such power at your fingertips you can, and should, obtain BusyBox, the tiny Swiss Army knife of embedded Linux. This single executable provides the functionality of a wide range of other valuable commands and utilities.

In fact, once Kelley was able to install BusyBox he had no difficulty recreating the /bin directory, and was well on his way to rebuilding his trashed Linux system.
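BusyBox decides which utility to act as from the name it was invoked under, which is why symlinking the one binary into /bin under many names restores so much at once. A toy stand-in (mybox here is hypothetical, not BusyBox itself) shows the dispatch pattern:

```shell
mkdir -p /tmp/boxdemo
cat > /tmp/boxdemo/mybox <<'EOF'
#!/bin/sh
# Dispatch on the invoked name (argv[0]), as BusyBox does for
# its applets; ${0##*/} strips the leading directory path:
case "${0##*/}" in
  hello) echo "hello applet" ;;
  bye)   echo "bye applet" ;;
  *)     echo "unknown applet" ;;
esac
EOF
chmod +x /tmp/boxdemo/mybox
ln -sf mybox /tmp/boxdemo/hello
ln -sf mybox /tmp/boxdemo/bye
/tmp/boxdemo/hello     # prints: hello applet
/tmp/boxdemo/bye       # prints: bye applet
```

The real BusyBox also accepts the applet name as its first argument (busybox ls, busybox mkdir, and so on), so it is usable even before any symlinks exist.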

This experiment demonstrates the need to remain cool and calm under pressure. The first instinct for some in such a disaster may be to reboot, though it is doubtful such a damaged system would boot at all.

While the use of rm -rf / is surely apocryphal, genuine disasters do occur, such as a corrupted dynamic linker, which leaves every dynamically linked executable unable to run.
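When it is the linker's well-known path or symlink that is broken rather than the loader binary itself, one escape hatch is to invoke the loader directly with the program as its argument. A sketch; the loader's location varies by distribution and architecture, so the globs below are a common-case assumption, not a universal path:

```shell
# Find whatever runtime loader this system has (glibc or musl;
# these glob patterns are common but not guaranteed):
loader=$(ls /lib64/ld-linux-*.so.* /lib/ld-linux-*.so.* \
            /lib/ld-musl-*.so.1 2>/dev/null | head -n 1)

# The loader is itself an executable: handing it a program to run
# sidesteps that program's own (possibly broken) interpreter path:
"$loader" /bin/echo "still alive"
```

This is the same mechanism the kernel uses implicitly for every dynamically linked program; invoking it by hand simply makes the step explicit.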

It is a testament to Linux and to the sharp minds of Linux users that in a seemingly impossible and catastrophic situation there can still be a means to get back to a usable system.


David M Williams


David has been computing since 1984, when he instantly gravitated to the family Commodore 64. He completed a Bachelor of Computer Science degree from 1990 to 1992, commencing full-time employment as a systems analyst at the end of that year. Within two years, he returned to his alma mater, the University of Newcastle, as a UNIX systems manager. This was a crucial time for UNIX at the University, with the advent of the World Wide Web and the decline of VMS. David moved on to a brief stint in consulting, before returning to the University as IT Manager in 1998. In 2001, he joined an international software company as Asia-Pacific troubleshooter, specialising in AIX, HP-UX, Solaris and database systems. Settling down in Newcastle, David then found niche roles delivering hard-core tech to the recruitment industry, and presently is the Chief Information Officer for a national resources company, where he particularly specialises in mergers and acquisitions and enterprise applications.
