Because the .bkf files were corrupted, their headers did not contain the drive letter designation required to use the -p option with ntbkup. I also had problems using the -d option, which produced this prompt:
DIR Tree warning, 1st node not a VOLUME! Force '?:'
This prompt led me to believe that it wanted some information to replace the drive letter, so I tried entering a drive letter, and Y, but to no avail. However, as of this writing, I have discovered that it was working in the background. When I did this originally I was working with 20 GB files, expected some kind of response, got none, and aborted the operation. While writing this document, I used a 6 GB file and, as I wrote, the directory listing was produced. I also noticed that the hard drive was very busy after I pressed Enter.
The normal command to run ntbkup.exe is:
ntbkup sample.bkf -x -pc:\dump
But all this did was start dumping all the files into one directory, which the author states it will do. This caused a huge problem: not only was the directory structure important, but many of the files shared the same name, and only the directory structure kept them from being overwritten. Ntbkup is quite happy to overwrite files, so this would not do.
What I needed was to create the directory structure, change into one directory at a time, extract only the files from the backup that belonged in that folder, and then move on to the next folder. This required knowing which directory was coming and what its contents should be. Additionally, I didn't want to process the entire 20 GB backup file just to extract 255 files and then start over; that would have been very time-consuming and wasteful.
Enter the verbose mode
I ran ntbkup with this command:
ntbkup sample.bkf -v > sample.txt
This produced results that can be seen by following this link. The file ended up being 32 MB, but the link shows only a portion of it, covering the areas of interest I will discuss here.
I was interested in this section of the file:
This had the directory structure, the offset to the beginning of each directory's data, and the end of the previous directory's data. I indexed the file on the keyword DIRB, extracted the offset to the beginning of each directory along with the directory name, and used the next record to find the end of that directory's data. From this information I produced a batch file that would create each directory, change into it, and extract only the data from the DIRB offset to the last file of the directory (the line before the next DIRB).
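The indexing step above can be sketched in a short script. Since the full verbose listing isn't reproduced here, the line format below is an assumption for illustration: each DIRB record is taken to carry a hex offset and a directory path, and the two-offset argument form is also a guess. Adjust the regex and command template to match the real output.

```python
import re

# Assumed simplified form of a DIRB record in the verbose listing:
#   "00000400 DIRB \124"   (hex offset, keyword, directory path)
# The real ntbkup -v layout may differ; adjust the regex to match it.
DIRB_RE = re.compile(r"^([0-9a-fA-F]+)\s+DIRB\s+(\S+)")

def index_dirs(lines):
    """Collect (start_offset, directory_name) for every DIRB record."""
    records = []
    for line in lines:
        m = DIRB_RE.match(line)
        if m:
            records.append((int(m.group(1), 16), m.group(2)))
    return records

def make_batch(records, bkf="sample.bkf"):
    """Pair each directory's start offset with the next DIRB's offset
    and emit batch commands that extract only that span.
    The "-jstart,end" argument form is a guess, not ntbkup's documented syntax."""
    cmds = []
    for (start, name), (end, _) in zip(records, records[1:]):
        safe = name.lstrip("\\")
        cmds.append(f"mkdir {safe}")
        cmds.append(f"cd {safe}")
        cmds.append(f"ntbkup ..\\{bkf} -x -j{start:x},{end:x}")
        cmds.append("cd ..")
    return cmds
```

The last DIRB record has no successor, so its end offset would have to come from the end of the listing; that case is left out of the sketch.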
Here is a sample of the batch file:
As you can see, I made use of the -j option, which tells ntbkup to start extracting at the first offset, specified in hex (the same number base used in the verbose listing), and to exit when it reaches the end of the block specified by the second hexadecimal number.
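Since the original sample isn't reproduced above, here is a hypothetical reconstruction of what such a batch file could look like. The directory names and offsets are invented, and the exact form of the -j argument is an assumption:

```bat
rem Hypothetical reconstruction -- offsets and -j syntax are assumptions
mkdir 124
cd 124
ntbkup ..\sample.bkf -x -j4a3000,5f8000
cd ..
mkdir 125
cd 125
ntbkup ..\sample.bkf -x -j5f8000,72c000
cd ..
```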
The process of getting these numbers was very involved, and I intend to explain it as well. It is procedural and could be scripted, but my understanding of VBScript in MS Access is limited. I could probably write it, but it seems cumbersome that Microsoft didn't include built-in methods to easily access the databases and queries within the same VB database. If I were using a separate VB engine and accessing the databases from another program, I could understand having to define each connection and the name of each object all the way down to the field level, but since they are in the same program, I find this very annoying.
I had only one major stumbling block with the layout of the verbose extract from ntbkup. In the middle of a block of data, say between folder \124 and folder \125, the offset suddenly jumps backward. Sometimes this jump is huge, going back to the beginning of the file, which happily causes ntbkup to start extracting everything into folder 124 until the new end is reached. I don't know why it would make such an erratic shift, except that, because these files were erased and frequently overwritten by other backups, some overlaying may have taken place. These backward jumps are not frequent, but they needed to be caught to keep from dumping the whole backup file into one directory and then doing it again later in the restore process.
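Catching those backward jumps amounts to a monotonicity check over the indexed offsets. A minimal sketch, assuming the DIRB offsets have already been extracted as integers in listing order:

```python
def find_backward_jumps(offsets):
    """Return (index, previous, current) for each place where the
    offset sequence decreases instead of increasing."""
    jumps = []
    for i in range(1, len(offsets)):
        if offsets[i] < offsets[i - 1]:
            jumps.append((i, offsets[i - 1], offsets[i]))
    return jumps
```

Any flagged record can then be inspected by hand before its span is fed to -j, so one bad range doesn't re-extract the whole backup into a single folder.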
See this sample for a backward jump within one folder group.