copying data to new server

As you may know, I built a new file server, so the logical next step was to copy over the data from the old server. I started with a simple rsync but soon realized that copying 214GB over the old server’s meager 100Mbps NIC was going to be painful. 100Mbps tops out around 12.5MB/s, roughly 40GB per hour, so even at line rate the copy should have taken five or six hours. Not sure why, but it was going much slower than even that: I started the copy around lunch and by 5pm it had only done 30GB.
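
For the record, the network copy was nothing fancy, roughly the following run on the old server (hostname and paths are illustrative):

# rsync -aP /vol/ newserver:/vol/

-a preserves permissions, ownership, and timestamps; -P shows progress and keeps partial files so an interrupted run can resume.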

What can I do? Oh, I can take one of the drives out of the old server and put it into the new one. Wait, it’s a RAID drive. Hrm. But it’s a RAID1, so I can use just one of the drives as a RAID device in degraded mode. This will surely work.

So I put the drive in the new machine. The next problem was how to bring the RAID drive up as a new /dev/md2 device so I could mount the LVM partition from it and do the copy. WORDS OF CAUTION: if you do not know how to use mdadm, read up on it BEFORE you attempt to mess around with it 🙂

I proceeded to try out mdadm --create /dev/md2 --raid-devices=1 /dev/sde1 --force. This is NOT what you want to do if you want to keep the data on that drive 😦 --create writes fresh metadata over what was there, so no amount of lvm commands would help afterwards; I had wiped it out. Ok, now what?
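
What I should have run first is the non-destructive examine, which prints the RAID superblock on the device and would have shown me the drive already belonged to an array:

# mdadm --examine /dev/sde1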

I put the drive back into the old server; since it was a RAID1, I could rebuild it from the remaining half. First I added the drive back to the RAID array:

# mdadm --manage /dev/md1 --add /dev/sdb1
# cat /proc/mdstat
md1 : active raid1 sda1[0] sdb1[1]
      24418688 blocks [2/1] [U_]
      [=>...................]  recovery =  6.4% 
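
While it rebuilds, the progress is easy to keep an eye on with something like:

# watch -n 5 cat /proc/mdstat

or mdadm --detail /dev/md1 for a fuller report on the array state.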

I let that rebuild overnight. The next morning I moved the drive back to the new server for yet another attempt at copying over the data. This time I was armed with more information.

First let’s see which drive it is:

# fdisk -l

...
Disk /dev/sde: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00075dc6

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *           1       30401   244196001   fd  Linux raid autodetect
...

Let’s scan the drive:

# mdadm --examine --scan /dev/sde1
ARRAY /dev/md2 UUID=061bae16:75f0a757:29aadef0:45edc6c0

I added that line to the bottom of /etc/mdadm.conf, spelling out the devices (the mirror’s other half is still in the old server, hence the missing):

ARRAY /dev/md2 UUID=061bae16:75f0a757:29aadef0:45edc6c0 devices=/dev/sde1,missing

Then I started the array (-A -s is short for --assemble --scan, which assembles everything listed in mdadm.conf):

# mdadm -A -s

# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [raid1] 
md2 : active raid1 sde1[1]
      244195904 blocks [2/1] [_U]
      
md0 : active raid5 sdd1[3] sdb1[0] sdc1[1]
      1953518592 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 2/8 pages [8KB], 65536KB chunk

unused devices: <none>
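
In hindsight, I believe the mdadm.conf edit may not even have been necessary; mdadm can assemble a degraded array directly if you name the member yourself, something like this (I didn’t test this route):

# mdadm --assemble --run /dev/md2 /dev/sde1

--run tells mdadm to start the array even though it has fewer members than expected.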

Whew, done. Wait, no I’m not. I still need to mount the LVM volumes. Thankfully I didn’t use the default volume group name of VolGroup00 (the default is now vg-machinename) on either of my machines: I used vol1 on the old machine and vg1/vg2 on the new one, so there were no name conflicts to deal with. A quick series of commands later:

# vgchange -a y
# mkdir /mnt/oldvol
# mount /dev/vol1/lvvol /mnt/oldvol
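
One note in case it helps someone: if the volume group doesn’t show up at this point, LVM may simply need to rescan for physical volumes and volume groups first:

# pvscan
# vgscan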

Let’s check it:

# ls /mnt/oldvol/
backups  lost+found  music
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg1-lvroot
                       70G  2.9G   64G   5% /
tmpfs                 877M     0  877M   0% /dev/shm
/dev/sda1             485M   28M  432M   7% /boot
/dev/mapper/vg2-lvvol
                      1.8T   48G  1.7T   3% /vol
/dev/mapper/vol1-lvvol
                      226G  214G  641M 100% /mnt/oldvol

YAY! Now we can start copying data over the SATA bus, rated at 3Gbps, instead of over the NIC at 100Mbps.
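
The local copy itself is again just rsync, this time disk to disk between the two mounts above:

# rsync -aP /mnt/oldvol/ /vol/

The trailing slash on the source means “copy the contents of /mnt/oldvol”, not the directory itself.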

Remember, kids: if you don’t know how to use mdadm, TREAD CAREFULLY!

new server

I have a file server at home with two 250GB hard drives running in RAID1. It is meant for storing my backups and acts as a DLNA server for my PlayStation 3.

Unfortunately, I fell into the same trap most folks think Bill Gates did with the apocryphal “640K ought to be enough for anybody”: I told myself that 250GB should be enough for anyone.

Apparently backup data, music, and pictures take up a lot of room, as you can see from my /vol partition.

$ df -h
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/vol1-lvvol
              ext3    226G  214G  641M 100% /vol

I figured it was time for an upgrade. I replaced the motherboard, CPU, memory, and of course the drives. The old machine’s specs were as follows:

  • Pentium III 933MHz
  • 1 GB memory
  • two 40GB IDE hard drives RAID1 hosting /
  • two 250GB Seagate 7200.10 SATA drives RAID1 hosting /vol
  • 100Mbps NIC

The new machine has TERABYTE drives 🙂

  • AMD Sempron 140 2.7GHz
  • 2GB memory
  • one 80GB Western Digital SATA drive hosting /
  • three 1TB Samsung Spinpoint SATA drives RAID5 hosting /vol
  • gigabit NIC

It was fun to see a T after the number for the /vol partition.

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg1-lvroot
                       70G  2.8G   64G   5% /
tmpfs                 878M  128K  878M   1% /dev/shm
/dev/sda1             485M   28M  432M   7% /boot
/dev/mapper/vg2-lvvol
                      1.8T  196M  1.7T   1% /vol

Finally, the OS install experience. I wanted to use CentOS 5.5 for my new server. I started with the CentOS netinstall CD, got to the point of writing to the partitions, and the installer just died. I then went with the CentOS LiveCD, and that didn’t even boot for me. Stumped, and not wanting to wait for CentOS to download stage2.img again (it had taken 15 minutes), I decided to try the Fedora 13 netinstall CD. WIN!

The Fedora 13 installation went flawlessly, boot afterwards was extremely quick, and the whole thing was very pleasant. Kudos to the Fedora team for an excellent installation experience.

[photo: new server]

[photo: drive cage]