
Life saving script


newbi3


I just accidentally deleted a script that I had been working on for a few days, and after shouting "f***" a couple of times I went to Google and found this script:

#!/usr/bin/env perl
use strict;
use warnings;

# Scan the raw partition block by block and print the byte offset and
# contents of every 4 KB block containing the search string.
open(my $dev, '<', '/dev/sda6') or die "Can't open: $!\n";
binmode($dev);

my $buf;
while (read($dev, $buf, 4096)) {
  print tell($dev), "\n", $buf, "\n"
    if $buf =~ /littleWindow/;
}
close($dev);

Replace /dev/sda6 with whatever partition the file was on and "littleWindow" with a string that was in your file. It will print everything to the screen and you can get the contents of your file back!
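If Perl isn't handy, the same raw-device scan can be sketched with GNU grep (assuming its `-a` and `-b` options). The commands below run against a scratch file so they're safe to try; for real recovery you'd point grep at the raw partition (e.g. /dev/sda6, as root) instead:

```shell
# -a treats binary input as text, -b prints the byte offset of each
# matching line. Demonstrated on a scratch file standing in for the
# raw device.
printf 'junk data\nlittleWindow: my lost script\nmore junk\n' > /tmp/image.bin
grep -a -b 'littleWindow' /tmp/image.bin
# → 10:littleWindow: my lost script
```

The printed offset can then be fed to dd to carve out the surrounding blocks.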

Credits to this site: http://www.perlmonks.org/?node_id=106709


Did you have to boot up from a live disk? For those who want to recover deleted files: shut down the machine right away!

A really nice tool I used a few weeks ago is TestDisk.

Just open the location of the file you want to recover and you will see the deleted entries highlighted in red.

Edited by i8igmac

I'm sure you could do that, yes. I have Linux installed on the machine that the script was on, and I found this script pretty quickly, so luckily my file didn't have time to get overwritten.


1. You should keep your source files in a VCS repository. I recommend Git.

2. We've only been saying it since the 1980s: BACK UP YOUR FILES. CrashPlan (as pimp'd by Darren) works quite well.

TestDisk is great, but you shouldn't rely on it. It can fail. Even when it works, you lose the paths/filenames since it can't recover inodes.


You're missing this bit: "have been working on for a few days"

While I'm working on something, I don't commit until it's in working order. It helps prevent people from sabotaging my daily coffee intake due to me breaking the build.

Version control, backups... neither helps prevent simple user error. And in that case this is a perfectly workable solution. I would put the machine in single-user mode first though.


Use Git, work in your own branch, you can even use a private branch that you don't push if you want. At least that way you have historic copies of the file, even if they're only on your machine. Your excuse is weak.
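A minimal sketch of that private-branch workflow (the branch name is hypothetical) — local commits cost almost nothing and each one is a restore point:

```shell
# Snapshot work-in-progress on a private branch that never gets
# pushed; every commit is a local restore point.
git checkout -b wip/recovery-script   # hypothetical branch name
git add -A
git commit -m 'WIP: snapshot before refactor'
# Later, tidy up the history before sharing:
# git rebase -i master
```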

Also... what exactly is your argument against backups? Automated backup software (like CrashPlan) will ABSOLUTELY help with simple user errors like this one. CrashPlan continuously monitors your filesystem and backs up changes, so you're protected if you accidentally delete a file that you were working on.


If you seriously expect a dev to commit his work about as often as he saves the file he's working on, you're insane.

When I save the data in a file, that takes X amount of time. When I have to wait for that data to copy to a 'safe' location, that adds Y amount of time. X is unavoidable. Y is overhead. Y in and of itself is insignificant. Y * the number of times you hit save is not. And all of that just to prevent you from erroneously removing the file you were working on. So yes, it can be done, but the cost, I feel, is prohibitive.

If someone's worked up about accidentally removing his work as he's doing it, maybe that someone should take a long, hard look at how they're doing their work, the extent to which they master their environment, and yes, if they should be doing this sort of thing in the first place.

My argument against backups (your words) is that they record a moment in time. What happens between then and armageddon is lost. So user error can still cost you. In that regard a backup will protect you more, but not completely. Say you accidentally copy file A over file B and only notice this a week or two later, because you thought you were done with it and syntactically the file is valid and correct. Yes, I'm sure there are ways to notice the problem and various tools can aid you. Say those weren't around or they don't apply to your scenario. Your backup strategy will make a copy of the files on your filesystem. It won't tell you the files contain what you WANT them to contain.

Keeping a spare copy of every permutation of a file on the system around for a week or more will require significant storage and cost a significant amount of CPU cycles. And seriously... how often do you accidentally remove/replace file contents? Last time you did, how long did it take you to recover? Is that really worth all this hassle and investment?

Edited by Cooper

If you seriously expect a dev to commit his work about as often as he saves the file he's working on, you're insane.

When I save the data in a file, that takes X amount of time. When I have to wait for that data to copy to a 'safe' location, that adds Y amount of time. X is unavoidable. Y is overhead. Y in and of itself is insignificant. Y * the number of times you hit save is not. And all of that just to prevent you from erroneously removing the file you were working on. So yes, it can be done, but the cost, I feel, is prohibitive.

I never said commit every single time you save, but you should still be committing very frequently. Since it's a local operation (and performance-wise much faster than copying the file) it's extremely cheap in Git, and it pays dividends in giving you the ability to track changes and regressions over time. Have you ever used git-bisect?
But sure, let's say that you're using CVS because you still live in the stone age. You can still avoid breaking the build by using a feature branch while you're working. Also, I would argue that if you're working on the same piece of code for "a few days" without having anything to commit, your change is probably too big or convoluted and should be re-thought or broken up anyway. I know this is a touchy subject because developers don't like having their "process" criticized, but going "a few days" between commits tends to correlate with bad developer habits, sloppy change history, and spaghetti code.
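For anyone who hasn't used git-bisect: it binary-searches your commit history for the change that introduced a regression, and `git bisect run` automates the whole search given any script that exits nonzero on the bug. A self-contained sketch (repo, commit names, and check script are all made up for illustration):

```shell
# Build a throwaway repo in which one commit introduces "BUG", then
# let bisect find that commit automatically.
cd "$(mktemp -d)"
git init -q && git config user.email t@e.co && git config user.name t
echo base > f && git add f && git commit -qm good && git tag good
echo more >> f && git commit -qam ok
echo BUG  >> f && git commit -qam broken
echo end  >> f && git commit -qam later

# Check script: exits nonzero when the bug is present.
printf '#!/bin/sh\n! grep -q BUG f\n' > /tmp/check.sh && chmod +x /tmp/check.sh

git bisect start HEAD good        # bad = HEAD, good = tag "good"
git bisect run /tmp/check.sh      # binary-searches the range
git log -1 --format=%s refs/bisect/bad   # → broken
git bisect reset
```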

If someone's worked up about accidentally removing his work as he's doing it, maybe that someone should take a long, hard look at how they're doing their work, the extent to which they master their environment, and yes, if they should be doing this sort of thing in the first place.

I'm not sure what you're getting at here. Are you saying that someone who accidentally deletes a file is not fit to be a programmer? People make mistakes. Shit happens. The only thing that would make someone "unfit" in my view would be if they refuse to learn from the experience.

My argument against backups (your words) is that they record a moment in time. What happens between then and armageddon is lost. So user error can still cost you. In that regard a backup will protect you more, but not completely. Say you accidentally copy file A over file B and only notice this a week or two later, because you thought you were done with it and syntactically the file is valid and correct. Yes, I'm sure there are ways to notice the problem and various tools can aid you. Say those weren't around or they don't apply to your scenario. Your backup strategy will make a copy of the files on your filesystem. It won't tell you the files contain what you WANT them to contain.

Yes, backup is not 100% idiot-proof and you can still potentially lose data. But are you really going to sit there and neg on one of the most obvious and simple ways to MITIGATE THE RISK of losing valuable data? Sure, user error is a thing, but throwing your hands up and saying "well, there's nothing to be done about it" when existing tools can actually protect you in 95% of cases is pretty ridiculous.

Keeping a spare copy of every permutation of a file on the system around for a week or more will require significant storage and cost a significant amount of CPU cycles. And seriously... how often do you accidentally remove/replace file contents? Last time you did, how long did it take you to recover? Is that really worth all this hassle and investment?

Wow, that's a really good point. Storing full copies of every permutation of a file would take a lot of space and be really expensive. I wonder why nobody else has ever thought of that before...

... because it's not like we ever developed advanced compression or differential backup technology.

How often do I lose data and need to recover it? Not that often. I follow a number of best practices and good habits to help minimize my risk of losing data in the first place. But even still, I've had accidents and needed to recover data before. More often, I've had other people come to me for help when they've lost important files to accidents or hardware failures. Data loss which could have been trivially prevented by the use of backup software.

The last time I had significant data loss was a multi-drive failure of my principal RAID-5 array. I am still in the process of re-downloading some of the content I lost, and despite the best efforts of tools like TestDisk, some of my data will never be recovered. At the time I did not use remote backups on that system because buying the disks and using online storage for over 3TB of data both seemed prohibitively expensive. But after rebuilding that system (with RAID-6 this time) I looked around and found that CrashPlan's unlimited storage plans were actually quite reasonably priced.

Compared to the value of my data and the time lost trying to recover it, CrashPlan was absolutely worth the cost. CrashPlan is free if you (or your friends) have the storage space to hold your own backups. You only pay if you want to use their cloud storage options, which are quite reasonably priced. As for the "hassle"... well, there's not much hassle. The software takes about five minutes to install and configure, and then you can just forget about it and let it do its own thing. It stays out of your way and runs at a low priority so it only consumes idle resources.

Your objections to this simple, common sense mitigation technique seem overblown compared to the reality.

Edited by Sitwon

I have a local SVN server and I usually back my work up there, but in this case I didn't, and while typing "rm [tab complete file name]" I accidentally deleted my script before ever backing it up. But yes, backups are very important.
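One cheap guard against exactly this kind of slip is to route deletions through a trash directory instead of rm. A hedged sketch (the function name and trash path are arbitrary, not any standard tool):

```shell
# Move files into ~/.trash instead of unlinking them, so a
# tab-completion accident stays recoverable.
trash() {
  mkdir -p "$HOME/.trash"
  mv -- "$@" "$HOME/.trash/"
}

touch myscript.pl
trash myscript.pl      # file now lives in ~/.trash/myscript.pl
```

Drop the function into your shell rc file; you still run the real rm deliberately when you mean it.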



Before we continue, let's start off by saying that for the problem posed in the topic, the solution provided is an entirely valid one.

Everything beyond that is more about the cause of the problem and solutions to that problem.

I never said commit every single time you save, but you should still be committing very frequently. Since it's a local operation (and performance-wise much faster than copying the file) it's extremely cheap in Git, and it pays dividends in giving you the ability to track changes and regressions over time. Have you ever used git-bisect?

But sure, let's say that you're using CVS because you still live in the stone age.

"Release early, release often" is the adage of open source development, and with the advent of source control tools such as Git that can handle it, it's turned into "commit frequently". I'll be the last person on the planet to argue against that.

In Git, doing a commit to your local tree may very well be a painless endeavor. But here's the kicker: Not everybody on the planet uses Git.

I'm sure Git is fantastic. The problem for me is that this particular tool was created during the dark ages I was going through (long story I won't bother you with), which I only recently popped out of. So no, I don't _know_ Git as well as I should.

At work we use Subversion. There was talk about moving to Git, I think last Christmas, but toolchain integration with Git was absolute crap when it came to either Netbeans or Eclipse... I forget. I don't use either - I dev in vi (which I'm sure answers a lot of questions for you).

Where I work there are products in our toolchain that work with Eclipse version a.b.c (can't remember) and ONLY a.b.c. You could easily argue that that part should get replaced, and you'd be right of course. It should. But it won't. Because it cost more to acquire than I stand to make in 5 years and currently there's nothing out there that can do the same thing.

In February of this year I finally put the nail in the coffin for one product of ours that was running OpenESB on Glassfish 2.1.1. The newer versions of Glassfish didn't work with OpenESB anymore, so some bright spark decided to continue with this version - after all, it's an internal bus and we're all friendly on the same machine, right? GF 2.1.1 was released somewhere in 2009. We kept on using it like this because over time more and more solutions to problems were built within this product, making it continuously harder to remove, even though we wanted to. The only reason I was finally given the budget to take it out was that Java 7 started complaining that this particular piece of software was using JVM features so obsolete that they would be removed in Java 8.

That's life in the real world for you: You make do with what you get. Want something different? Make the suggestion, sit out the shit storm and hope your arguments work out money-wise as that's the only thing that will work.

You can still avoid breaking the build by using a feature branch while you're working. Also, I would argue that if you're working on the same piece of code for "a few days" without having anything to commit, your change is probably too big or convoluted and should be re-thought or broken up anyway. I know this is a touchy subject because developers don't like having their "process" criticized, but going "a few days" between commits tends to correlate with bad developer habits, sloppy change history, and spaghetti code.

The bit I quoted was to emphasize that he was working on it. Not the "a few days" part, and nowhere in his text does he say he didn't commit in between - that's your conclusion.

I agree that if you go a few days without even so much as a single commit, you're doing something deliberately wrong and stand to pay the price for it, typically by not knowing what you were changing anymore by the time you commit the change. Bad habits, sure. Sloppy change history, definitely. Spaghetti code... not so much. That's got a LOT more to do with the quality of the developer than with his committing practices.

I'm not sure what you're getting at here. Are you saying that someone who accidentally deletes a file is not fit to be a programmer? People make mistakes. Shit happens. The only thing that would make someone "unfit" in my view would be if they refuse to learn from the experience.

When you make a habit of destroying your own work, you should work on that before doing more work. Nothing more, nothing less.

Compare it to having a shitty dev on your team. You could make him do a lot of work and hope that he eventually sees the light, or you give him a Coding 101 book and ask him not to come back before having read it.

Yes, backup is not 100% idiot-proof and you can still potentially lose data. But are you really going to sit there and neg on one of the most obvious and simple ways to MITIGATE THE RISK of losing valuable data? Sure, user error is a thing, but throwing your hands up and saying "well, there's nothing to be done about it" when existing tools can actually protect you in 95% of cases is pretty ridiculous.

Will you please read back and find a quote of mine where I say backups should not be made?

Thank you.

Wow, that's a really good point. Storing full copies of every permutation of a file would take a lot of space and be really expensive. I wonder why nobody else has ever thought of that before...

... because it's not like we ever developed advanced compression or differential backup technology.

Well, fuck you too. If you want to wait for your system to protect you against silly mistakes you shouldn't be making in the first place, go right ahead. I'll be the last person on the planet stopping you.

How often do I lose data and need to recover it? Not that often. I follow a number of best practices and good habits to help minimize my risk of losing data in the first place. But even still, I've had accidents and needed to recover data before. More often, I've had other people come to me for help when they've lost important files to accidents or hardware failures. Data loss which could have been trivially prevented by the use of backup software.

My way of working (WoW) is that I commit when the unit of work I'm working on is done. Most changes are about half a day's work including testing. The Subversion repo gets a daily backup.

I can't even remember the last time I accidentally removed something I was working on before committing. It must've been years by now. And I don't consider myself a unique little snowflake when it comes to this. When I use potentially file-destroying operations I pay close attention to what I'm doing. That might have something to do with me witnessing the fallout of a Windows user not following a script and doing "rm -rf /" on a Solaris box which at that time was being used to demo a product we'd developed (we aborted the command, waited for the demo to thankfully complete without incident and then restored from tape).

I've had my 2.25TB RAID-5 blow up in my face (WD consumer drives... never again). As with you, at the time there was no viable, affordable solution to backing all that up, save for building a copy. Instead I kept a list of externally acquired stuff that I felt I could easily find again. This meta-data was regenerated once a week, whereas the irreplaceable stuff was put on a RAID-1. The combination of the two was small enough to be regularly fully backed up to a few DVDs. I rebuilt the array using Seagate drives this time and recovered everything. I'm still using this process with my current file server, discussed elsewhere on this forum, which is JBOD except for the irreplaceable stuff, which is still on RAID-1 and backed up with the meta-data onto DVDs, now also stored at my parents' house in their fire-proof safe. I'm there typically once a week, so it's sufficiently current.

Your objections to this simple, common sense mitigation technique seem overblown compared to the reality.

Fair enough. Like I said, what works for you works for you. I prefer my WoW. And that's all there is to it.

