Brighttree integration with shipping systems such as Fedex or UPS

Posted at 3:10:42 PM in Hardware (8)

Brighttree is a web-based DME/HME billing application. My customer is currently using the app to handle the shipment and billing of medical supplies to individuals covered by military medical plans.

My customer contacted Pitney Bowes to install SendSuite Shipping to handle their shipping needs. Unlike other Pitney Bowes products, this one is not stand-alone and is not easy to set up. A product specialist came out to set up the computer (which we needed to provide) and connect the hardware to do the shipping. The requirements for the computer were Windows XP (it won't run on Windows 7), 2 GB of RAM, a 40 GB hard drive, a network connection, and internet access. The internet access was needed to update the shipping costs and transmit info to the various carriers they support. The network was required to interface with the billing software and get the shipping addresses from the system. However, Brighttree is web based and has no direct connection to its database.

The product specialist called me to get the ODBC connection info for Brighttree, which doesn't exist. So we called Brighttree. Our solution was to run an ad hoc report to extract the customer database to a file. At first we tried exporting to Excel, which didn't work. For some reason, when we sent the Excel file via email, the recipient couldn't open it. They'd get a message stating that ... file.htm was not available. The location of that file was the originator's Temporary Internet Files folder. That didn't make any sense to me, but apparently the original web page was embedded in the exported Excel spreadsheet. I didn't want to use the Excel spreadsheet anyway: ZIP codes, or any number field with a leading zero, get truncated in Excel if no special formatting is applied, i.e. 09273 becomes 9273 in Excel.

The easiest file to create was the CSV file, which can also be opened with Excel. I'd recommend leaving it alone, though: Excel could remove the leading zeros on the ZIP codes if the file were opened and saved. We're still waiting on Pitney Bowes to give us the go-ahead to start using the CSV file.
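Since the CSV is what will feed the shipping software, it is easy to script a sanity check for ZIP codes that have lost a leading zero. This is only a sketch: the sample rows, the file name, and the assumption that the ZIP code is the 5th comma-separated field are all mine and would need adjusting to the actual ad hoc report layout.

```shell
# Hypothetical two-row sample of the exported customer CSV;
# the real report layout (and the ZIP column position) will differ.
printf 'Smith,John,123 Main St,NJ,09273\nJones,Ann,9 Oak Ave,NJ,9273\n' > customers.csv

# Flag any ZIP (5th field here) that came out shorter than 5 digits,
# i.e. lost its leading zero somewhere in the export chain.
awk -F',' 'length($5) > 0 && length($5) < 5 { printf "line %d: short zip %s\n", NR, $5 }' customers.csv
```

Run against the real export, this prints one line per suspect record, so a clean run is a quick way to confirm the file never passed through Excel.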

Written by Leonard Rogers on Monday, February 7, 2011 | Comments (0)

pure-ftpd on Fedora Linux

Posted at 9:38:17 AM in Hardware (8)

This site had some installation instructions for pure-ftpd. Note that line 3 has spaces between the hyphen and the words in pure-ftpd; do not include the spaces, or yum will not download the correct package. I've made some modifications to the instructions to give additional guidance. See:

Note also that pure-ftpd does not preserve the date stamps of the source files. This may be critical: if you restore those files, all the dates will be the date you FTP'd them to your storage location. I believe this will also cause problems with design packages like Dreamweaver.

1. install pure-ftpd using yum: yum install pure-ftpd

2. run: /usr/sbin/ /etc/pure-ftpd/pure-ftpd.conf (this will create the default conf file for use with the pure user database)

3. start service: service pure-ftpd start

4. create the pure user database: pure-pw mkdb

5. create users on the Fedora box: useradd <uname>, and create a password for that user: passwd <uname>

6. create a pure-ftpd user account: pure-pw useradd <uname> -u <uname> -g <uname> -d /home/<uname>

Running this command will prompt you for a password. I used the same one I set up in Linux; however, I think it can be different.

You can also check user info in the pure user database by typing: pure-pw list

Note: installing pure-ftpd does not configure iptables to allow access on port 21. You will have to do this manually. I don't have the notes for that here yet, but to test, I just turned off iptables: service iptables stop
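Rather than turning the firewall off, opening port 21 would look something like the following. This is a sketch assuming a stock Fedora iptables setup; note that passive-mode FTP also needs its data connections handled, here by loading the FTP connection-tracking helper, and your chain layout may differ.

```shell
# Allow inbound FTP control connections on port 21 (assumes the default INPUT chain)
iptables -I INPUT -p tcp --dport 21 -j ACCEPT

# Load the FTP connection-tracking helper so passive-mode data
# connections are followed and allowed automatically
modprobe ip_conntrack_ftp

# Persist the rules across reboots (writes /etc/sysconfig/iptables)
service iptables save
```

These commands need root, and `service iptables save` only applies when the iptables init script is managing the firewall.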


Written by Leonard Rogers on Sunday, February 6, 2011 | Comments (0)

Deploying Peachtree integration shipping for Fedex

Posted at 11:25:55 PM in Hardware (8)

In November, when we initially started this process, we knew there was an import/export feature built into both Peachtree and Fedex where a daily batch could be exchanged, but that requires too many steps. We wanted to implement integration to take advantage of several improvements. These were:

  • Email notification to customers when a package was shipped.
  • Improved address accuracy (pulling the address from the Sales Order number instead of from a previous entry)
  • Tracking number and cost of shipment entered on the Sales Order (a process that was too cumbersome without integration)

Fedex offered a solution they came up with that used the ODBC driver supplied by Peachtree to connect to the database. My client has two locations where this would need to be implemented: one here in California, which has been using Peachtree for over 10 years, and one in Virginia, which has been using Peachtree for only 2 years. The California office discovered, as they upgraded Peachtree every two years, that the system was getting slower and slower. They also discovered on one of the upgrades that they could no longer reasonably use the F3 find function in Peachtree: any query takes between 15 and 20 minutes to start showing data. We assumed this was because of the large amount of data we have, but I just discovered that this same function also performs poorly in the Virginia office.

The integration that Fedex offered was extremely slow. It took about 15 to 20 seconds to pull up one ship-to address for a shipment. This seemed acceptable, since it would take a lot more time to enter the address by hand and then put the shipment info back into Peachtree; however, we discovered that all of the workstations connecting to Peachtree would freeze for those 15 to 20 seconds, which was not acceptable. This same issue appeared in both the CA office and the VA office regardless of the amount of data.

The first solution suggested was Shipgear, but Fedex recommended another solution, Shiprush, which they could "give tech dollars to buy." I will not be able to discuss the Shipgear installation unless we run into problems with Shiprush. If you look at the Fedex web site, you can see that V-Technologies has not been shoved off the plate by Shiprush. I will document the Shiprush installation.


We were not able to use Shiprush. I'll explain the issue later. We discovered that the Fedex "tech dollars" could be used to purchase Shipgear; V-Technologies is listed on the contract, which opened the door for us to investigate the original suggestion.

At first glance, it appears that Shipgear is primarily designed for UPS. For example, if you go to the tech support page where you can submit a ticket, they only request the UPS meter number. I suppose if you are using Fedex, you put in the Fedex number and hope there is enough of a difference between the numbering systems that they can't confuse you with a UPS customer. When calling tech support, they also ask for your UPS meter number. But I found the software to be an excellent integration tool between Peachtree and Fedex FSM.

The workstation where Shipgear is to be installed also requires Fedex FSM, installed without FXI (an integration package provided by Fedex). The workstation does not require Peachtree to be installed (which works great if the shipper normally doesn't do anything in Peachtree). However, the workstation does need access to the Peachtree data files.

To process a shipment after all of the software is installed, the shipper simply clears the Fedex screen, and a Shipgear input field pops up over the FSM software for the shipper to enter the Sales Order number. Shipgear can also be configured to pull the address info from the customer record or from the invoice. Once the Sales Order number is entered, Shipgear searches Peachtree for the address info and adds it to FSM's ship screen. From there, the shipper fills out and prints labels just as they did before Shipgear was installed. Once the ship or print-label button is pressed, Shipgear enters the freight amount back into the Sales Order and updates the tracking info in either the notes for the invoice or the internal notes. The FSM screen is cleared and the Shipgear Sales Order entry field is displayed again.

A major benefit of keeping the FSM system is that all of the data from any previous installation can be carried over to the new system. Our system has several package sizes set up, so all the shipper has to do is select the package type and the dimensions are already entered. We were able to back up the data from the old installation, restore it to the new FSM system, and be almost 100% ready to ship. I did need to get Fedex to dial in, as there were some admin settings that still needed to be adjusted that did not come over in the restore. And once you leave the FSM system, you cannot get any support from Fedex.

Issues with Shiprush:

  • The biggest problem we had with Shiprush is its inability to process multiple packages on a single order. Every package needed to be entered separately, and between entries Shiprush would wipe out the address, so pulling the address from Peachtree was largely defeated if multiple packages needed to be shipped. This company ships several packages with every order, so this became a problem. In Fedex FSM, you can specify the dimensions and weight, click a button, and type in the number of packages that share the same info; FSM then produces a label for each box.
  • A second issue was that Shiprush uses web-based tracking numbers. These cannot be tracked through the regular tracking channels on the Fedex web site. In order to track these orders, you need to install InSight from Fedex, and then you do not have the ability to track by Sales Order number.
  • Shiprush also has no facility to add other tracking information to the Fedex label/shipment. It only puts in the Sales Order number. It's pretty common knowledge that the customer doesn't need our Sales Order number; they need their PO number. The PO number, once entered, was also wiped out in multiple-package shipments.

Issues with Shipgear:

  • Shipgear runs several programs on the primary PC, among them a web server, the program itself, and the database server. These programs demand a huge amount of memory; installing them with FSM consumes up to 800 MB of RAM, which caused us some issues when testing the addresses. When the data entry field pops up, clicking the button gives you options to pull a list of Sales Orders for a customer or for a period. These did not work. When I called Shipgear for my "one" free tech support call, they couldn't figure it out, and I lost my one free call. The software was still usable because lookups on the Sales Order number did work, though the tech support guy complained that our PC was slow and that was why it took as long as it did to pull up an address.
  • The system has been installed and running for over a week now, and we are trying to iron out the freight cost formulas. One of the benefits of using this integration software was to capture the freight costs. We use FSM to increase our cost of shipping by 10%, and Shipgear is to add a handling fee based on where the order ships and the overall Sales Order total. We have not been able to verify that the freight is calculated correctly, but I think that's attributable to the manual processes that were in place before.
  • Shipgear does not expose enough Peachtree fields to make freight calculation easy. For one, our web orders already have freight on them, and I don't want to overwrite it; if there is already an amount in the freight field, I want to handle the order differently, but I cannot access that field from Shipgear.

Benefits of Shipgear:

  • Shipgear has not caused any interference with any other workstation during the shipping process.
  • Orders can be tracked through normal Fedex channels.
  • No cutting and pasting is required to get data from Shipgear into Peachtree.
  • The web server running on the main PC allows users in the office to connect via their web browser to check on the status of shipping. However, I do not know if delivery confirmation can be checked on that web server.


Written by Leonard Rogers on Monday, January 31, 2011 | Comments (0)

win 7 64 bit HP laserjet 1020 install

Posted at 1:13:12 PM in Hardware (8)

What a pain. If you go to the HP web site, enter the HP LaserJet 1020 as your printer, and pick Windows 7 64-bit as the OS, it gives you drivers that it claims are compatible with Windows 7 64-bit. Not so.

The drivers are actually for Vista 64. The 7-Zip self-extracting file complains that it won't run on a Windows 7 64-bit machine and says to contact the vendor, so you end up having to use WinRAR to extract the files. Then when you run setup.exe, you get a similar message saying it's not compatible with Windows 7 64-bit. Very frustrating, since HP offered the driver as a solution for Windows 7 64-bit. And yes, I submitted feedback on their web site.

Finally, I used some instructions on HP's forums that helped a lot. See:

Be sure to check the video links for a better explanation; the opening instructions make a lot of assumptions. I did my install entirely from Devices and Printers, and there was no "install driver from specified location" option, so I used the troubleshooting option, which worked to reinstall the drivers. On my system it had been working; I don't know if the customer moved the USB cable to another port or why it suddenly disconnected itself from the system. I had a printer driver that indicated it was working but wasn't, and an HP 1020 unidentified item on the Devices and Printers screen.

I removed the HP 1020 printer, went into Print Server Properties, and removed the driver as well. Then I right-clicked on the unidentified icon and selected Troubleshoot. The troubleshooter found the driver, installed it again, and it worked.



Written by Leonard Rogers on Saturday, January 29, 2011 | Comments (1)

First Apple iPad Install

Posted at 10:19:07 PM in Hardware (8)

The new Apple iPad is the first Apple appliance that I had considered buying and using for myself. I use ebooks a lot; I like reading, and my cell phone is just too small. It's an added plus to have email and web browsing available and an abundance of apps available for download.

I've had to set up the email on several iPads in the past couple of months, and I was pretty happy with the interface and general feel. It had the feel of an oversized iPhone: an easy-to-use and easy-to-navigate touch screen, with most of the menus exactly the same as the iPhone's. I recommended the iPad to one of my friends instead of one of the other ereaders because of the added features. He was concerned that he'd have to buy a data service such as AT&T's to use the iPad, which I cleared up: the iPad has WiFi capability, so if you have wireless internet you can use the iPad's email and web apps, and you don't need the internet at all once you've downloaded your ebooks. He purchased the iPad and offered me the chance to set up my first iPad from the ground up. It's still a good appliance for him, but I have some reservations about its utility for me.

I had an issue with the lack of a Java-enabled browser. As far as I know, the browser they use on the iPad (Safari) can normally execute Java apps, so it appears they intentionally gimped the browser. The first thing I wanted to check after connecting to the internet was the internet speed at . Rather than bring up the web page, the browser shut down and pushed me over to the App Store where I could download an app. At the time, I couldn't download the app because I didn't know my Apple ID. It was free, but I didn't want the bloat for a one-time test, and I doubt my friend would ever have used the app. I tried several more times and tried different ways to get to the site, but ended with the same results. It was driving me bananas. So I used another web site that does speed tests and found that the Java app wouldn't install. Then I tried to install Java from , and that didn't install either; the error message I got was "Operating System unsupported."

I do a lot of remote work in my office, so I tried my GoToMyPC account, and that wouldn't work either. This alone would prevent me from considering the iPad, but it's not an issue for other users. We finally reset the Apple ID, which had been set up at the store; my friend couldn't remember the password. The Apple ID is required to download music from iTunes and to download apps.

What really ended it for me was when we wanted to download a free app and had to enter credit card info. It wouldn't let us continue until we stored a valid credit card, "store" being the operative word. Every indication was that our credit card would be stored for future purchases. This would not have bothered me had there been a charge, but the item was free. I'd rather put in the credit card info every time I buy something, or even go with the iTunes structure of putting a certain amount of cash on account and paying from that account until it's empty, then adding more cash later. But this seems to store the credit card info, and when you do buy something that costs money (which sounds dumb, but in the Apple store you can "buy" something that's free, so...), they just use that card unless it doesn't clear.

Anyway, I didn't get to try buying something that actually had a price to see if it would submit the stored credit card info without requiring anything more than saying yes. However, I immediately had nightmares of children playing with the iPad, running amok buying everything that piqued their curiosity, and me having to figure out how to get that money back. If that is the case, I'd wrap up the appliance and return it, asking them to cancel my Apple ID, if that can be done.

I like the iPad. It's still a nice appliance, but gimping the internet functionality and storing credit card info for items that are free from their App Store... that's just too much. I think the iPad is going to mature like the iPhone did, and perhaps in a couple of years they'll have an appliance I can use. Until then, I'm going to buy a laptop. It's cheaper anyway.


Written by Leonard Rogers on Monday, January 17, 2011 | Comments (0)

Server Crash NT4.0 and Restored

Posted at 12:43:59 AM in Hardware (8)

What a great way to start the holidays. Just as everyone was wrapping up to leave, we discovered that the server drive had crashed. All the diagnostics pointed to the drive as the issue.

The server is a Dell PowerEdge 2450 using the integrated RAID controller. The indication to the users was that files couldn't be read and programs aborted. However, on the server console the errors were write-behind cache failures: $Mft couldn't be written to, some data may have been lost, and the same error was displayed for several specific folders and files on the drive. The OS on this server is NT 4.0.

The OS is installed on a striped array of two 9.1 GB drives, giving a total of 18 GB. (RAID 0 is not a configuration that should be used for an OS drive.) The array was partitioned with 2 GB for the OS, 14 GB for Exchange server files, which were no longer being used, and a Dell utility partition. Thankfully, there was nothing wrong with the drives in that set. The data drive was a single partition mounted in the RAID as a volume with 146 GB of storage. All the drives were U160 SCSI-2 hot-swappable drives.

The backups are performed by BackupPC, which has been in operation for about 5 to 6 years. It has performed flawlessly, but I've never had to restore a whole drive from it before. I used Acronis v9 Workstation to make a bare-metal image of the OS drive and all its partitions. I also tried to back up the defective drive, just in case it was NT causing the problems, but Acronis couldn't back it up either.

Once all the data was backed up, I pulled the integrated RAID controller plug off the motherboard and took a look at the drives in Acronis again. All the drives were uninitialized. I restored the bare-metal backup to drive 0, which was a 9.1 GB HD. Acronis restored the OS partition as it was and shrank the partition used for Exchange without any problems. I was able to boot back into NT 4.0 without any errors, but I still didn't have a data drive.

The new drive I purchased wasn't recognized by NT, but the SCSI controller recognized it. When I did a data verification, it "red screened" right away, indicating that the media wasn't any good. I tried a drive that we had on hand and that was not marked as bad, and found the same problem when I did the media test. I was left with only the original HD that was bad to begin with. When I did the media test in the SCSI controller interface, it reported only 3 bad spots on the drive. NT also recognized the drive, so I went ahead and formatted it and started the restore. This drive will have to be replaced, but it appears to be usable for now.

The drive was purchased from  I ordered it late on 12/22/2010 (Wednesday) and was told that it wouldn't arrive until Monday, even with the overnight delivery I requested. However, it showed up on 12/23/2010 at 11:00 am, which I thought was pretty good service. I have submitted an RMA and will follow up on their service for that item. I found them on, but called anyway because I needed the item delivered to a location that was not registered with the credit card. They said it wouldn't be a problem as long as the ship-to location was a business.

It took 2 hours to get a bare-metal backup. Then I pulled the plug on the controller and restored the OS. I spent the next 6 hours trying to get the system to take the drive back without restoring a backup, and couldn't do it. Then 2 hours formatting the replacement disk and 2 hours getting the restore of the data drive going. The automatic backups had started for all the PCs in the office, which caused a lot of problems getting the restore going.

BackupPC never shows the xfer PID for restores the way it does for backups. I kept checking the status, and since no xfer PID was showing, I thought the restore wasn't running. When I checked the server, rsync was eating a lot of CPU, which is usually an indication that a backup is running, so I checked the drive and it was filling up. The restore operation took over 7 hours and restored 55 GB of data.

I should have restored from the current incremental backup, as that would have brought everything up to date in one pass. Instead, I did the full restore and then applied the incremental backup, and the incremental backup is taking just as long to restore.

I was really pleased that Acronis backed up the RAID, allowed me to restore it to a SCSI drive, and the system booted. I have tried this on HP servers, and Acronis can't recognize the RAID on them. They have a bare-metal implementation for HP servers, but it requires installing Acronis in the OS. That becomes a problem when restoring the system, because you need to install the OS and then install Acronis in order to restore the "bare metal" system, which isn't really bare metal.

I approach every restore with a lot of trepidation. It's bad enough that the data is lost, but if the restore doesn't work, then the problems really begin. I have restored a system that had no bare-metal backup; all they had was a day-old SQL backup. I had to install everything and all the users, prepare the SQL database correctly, and then restore the data. It was 36 hours of work, but on Monday morning the system was back online, and I was a mess. I never want to do that again.

Written by Leonard Rogers on Friday, December 24, 2010 | Comments (0)

Solid State Drives

Posted at 9:13:13 PM in Hardware (8)

Investigating the deployment of solid state drives (SSDs) to determine the benefits and costs.

Written by Leonard Rogers on Thursday, December 23, 2010 | Comments (0)

Hard Drive Failure

Posted at 5:17:47 AM in Hardware (8)

Raid Drive Failure:

The company had set up a junk RAID server to handle bulk image storage for a document imaging service. The document imaging program was Docuware, which uses SQL as a database and allows the images to be stored on any attached storage device. However, we were told that certain options of the software we had purchased would not work unless the software was installed with the images on the same server as the SQL installation. The option we were not able to use was the ability to store the images on CDs so that the CDs could be shipped to another location and viewed there.

Apparently, the RAID already had a faulty drive in its 3-disk configuration. This created a potentially bad situation. We had identified the wrong drive as the problem, but didn't think it was an issue, as we were backing up the data to an external removable drive.

The backup software we used was NTBackup, with a backup script that ran in the scheduler. It backed up only the changed or added files on weeknights, and on Fridays it backed up the entire drive. The backup file size was approximately 69 GB, with over 800 TIFF and JPEG images and a huge directory structure.

A description of the file structure will be helpful, as the recovery process depended a lot on knowing how this structure worked and what the file types were. Docuware creates the directories and the file names using a math algorithm, which allows it to determine where a document can be found just from the number of the document. Each directory can hold 255 images, and a 3-digit folder name increments up to 255. When the folder count reaches 255, a new upper folder is created, sub folders are created under that folder, and the increments start over at 000. So a document for royalties might be in the folder royalties.000/000/00000001.001. The extension on the document increments to show the number of pages in the document, and the file name continues to increment in order to limit the number of files contained in one directory. Because Windows file types are determined by extension, this system doesn't allow the document types to be indicated. These files are TIFF files, and in the header of each TIFF file is data about the document and any attachments that may be required. I mentioned JPEG images before; these are stored in the same number format but end in JPG. These files are linked to their TIFF files by the data in the headers, and those TIFF files contain no image info.
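The directory math above can be sketched roughly as follows. This is only my reading of the scheme as described (255 documents per folder, 255 subfolders per upper folder), not Docuware's actual algorithm; the cabinet name "royalties" and the document number are made up for illustration.

```shell
# Hypothetical sketch: derive a document's path from its number,
# assuming 255 docs per subfolder and 255 subfolders per upper folder.
doc=70000                      # made-up document number
sub=$(( doc / 255 % 255 ))     # 3-digit subfolder name
top=$(( doc / 255 / 255 ))     # upper folder suffix
printf 'royalties.%03d/%03d/%08d.001\n' "$top" "$sub" "$doc"
```

The point of a scheme like this is that no database lookup is needed to locate an image; the path is a pure function of the document number.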

The reason the file structure is important is that, to restore the data, it is imperative that the directory structure remain intact. A simple undelete program might recover tons of images, but most recovery software examines the contents of the file and creates a file name of its own, with no directory structure provided, so simply having the images is useless.

A second drive in the RAID failed, causing the entire RAID to become useless. The original sporadic drive that was failing was thought to be in the operating system array, and the thought at that time was that we'd obtain an image of the OS, and when it failed entirely we could restore that image to a single drive. But it turns out that the original failed drive was in the data array.

Since the OS continued to work, the backup scheduler continued to work, which exacerbated the issue. The drive failed on Friday, and no one contacted IT about it until Monday, which meant that the Saturday full backup ran. NTBackup was configured to overwrite the existing file with the new backup. This resulted in all the data being lost and the main backup being lost as well.

The RAID was disassembled and images were made of the drives. An attempt was then made to re-create the RAID in software, but problems such as not knowing the striping algorithm and the drive header information prevented an easy rebuild. Two of the drives were accessible when assembled in the reconstruction environment, but no directory structure could be recovered.

We then located a data recovery company who would attempt to recover the data: if they were able to recover it, we'd pay; otherwise no money would be involved. The amount was negligible anyway, so we chose to send the drives off for recovery and turn our attention to the backup drive.

The company we used was This company offers to recover data from RAIDs for $800 USD. You really have to examine the web site to determine the actual cost; because these were SCSI drives, there was a 150% markup. There is also an additional fee for getting the drives back, and a supposed additional fee for any extra information, such as a file listing of the recovered files (it is very important to take advantage of this). Still, the price to recover the data was way below other organizations, which wanted the money up front and charged 10 to 100 times that amount.

They had a location in California, which I thought was handy, but it turns out the company is actually in Canada and all the other locations are UPS drop-off spots. I really confused the shipping process by showing up at the UPS drop-off store. The worst part was that I couldn't get a tracking number, as all the drives being sent to Canada were aggregated into a larger package and shipped in bulk; I called later and got the tracking number of the bulk shipment. The company was very prompt at getting back to me for additional information, and even though I was put off at having to answer the same questions over and over, there was almost daily communication, which I was very impressed with. The only problem was that when they said they had the data recovered, I asked for a partial directory listing, which they were happy to send me for free. That was a major folly. We paid the $1300 and had the data sent to me, and then found out that the partial listing they sent covered the only 2 GB of data for which they had gotten a structure listing. In addition, all of the files were cross-hashed with bogus data, as they had the striping completely wrong. The remaining 100 GB of data was all in one directory with made-up names, which is exactly what I would have gotten with my own recovery program. They offered to re-extract the data, but I was having pretty good luck with the backup drive, so I declined; I didn't want to pay to get the data a second time only to find it still not correct. In addition, they didn't have an external drive to save the data to, so I'd either have to buy a drive or send them one.

During the rebuilding phase, I had purchased Active UNDELETE 7 Enterprise, which claimed the ability to rebuild and recover from RAIDs. The recovery process was very flaky, and I had frequent problems with the program crashing. I thought perhaps I should build a bootable CD, as that is offered in the software; however, the CD version is seriously gimped and did not offer any ability to rebuild the RAID, so I had to install the software on the server. After much work, I realized I didn't have enough information to rebuild the drive, and there was no assistance in the program to help determine striping or parity. I eventually abandoned the software for rebuilding the RAID.

After I sent off the drives, I thought I might be able to use Active UNDELETE 7 Enterprise to extract the lost BKF files on the backup drive. However, that also became a problem, as Active UNDELETE 7 Enterprise does not have a file signature for BKF, and though it has an option for adding one, apparently no one is adding other detection thumbprints, so the money I paid for it was wasted.

I later found Handy Recovery from SoftLogica. This had a thumbprint for BKF files and found several clusters on the drive that were intact enough to give me files 20 and 30 GB in size. I used the evaluation version to extract one or two large files (you can only extract one file a day during the evaluation, but the large files really made this worth it). After I had extracted the files, I couldn't find any software that would rebuild files from the corrupted BKF files, until I ran across NTBKUP.exe on This page gave me the info I needed to recover the data. The designer of this package bypasses a lot of the overhead of NTBackup and allows the contents to be read even if the headers are missing. It would not build the directory structure without the drive letter being in the file, and I didn't have the drive letter in all of the clusters, but it worked for me.

I was able to extract the directory structures and the location of each file by running NTBKUP in verbose mode and redirecting the output to a text file, which I later manipulated to create the directories, then change into each directory and run the extract for the files inside that group.
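The directory-creation step can be scripted once the verbose output has been massaged into a plain list of paths. A minimal sketch, assuming listing.txt already contains one recovered file path per line (the real NTBKUP verbose output has a different format and needs manipulation to get to this point; the sample paths below are made up to match the Docuware scheme described earlier):

```shell
# Hypothetical sample listing, one recovered file path per line
printf 'royalties.000/000/00000001.001\nroyalties.000/001/00000300.001\n' > listing.txt

# Recreate the directory tree so extracted files can be dropped into place
while IFS= read -r path; do
  mkdir -p "$(dirname "$path")"    # create each parent directory as needed
done < listing.txt
```

With the tree in place, the per-directory extract runs can then be driven from the same listing.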

There are some anomalies I can't explain in the output of NTBKUP, but the recovery covered over 95% of the data and the file structures. (see details)

Lessons learned:

1. Don't rely on only one backup device. Currently, I am rotating two external backup devices, and checking them for consistency and errors.

2. Pay attention to failed drives. Of course, my resources are limited to what the owners will pay for, and it always bothers me when my recommendations are ignored and I later have to present them with an issue that could have been avoided.

3. Obtain the entire evidence of recovery rather than a portion. Of course, even if I had had a complete listing of the directories, I couldn't be sure that the files were complete. The recovery company couldn't inspect the files either: with the extensions being numbers, they couldn't tell the files were TIFF files, though I had explained it. I might as well have been speaking a foreign language, as what I was telling them was unfamiliar to them.

Written by Leonard Rogers on Tuesday, December 21, 2010 | Comments (0)