Happy 9th Birthday Adan!

Today Adan celebrated his 9th birthday. We hung out this morning building his new Clone Wars LEGO set, then had Sonic for lunch. In the late afternoon we had a chance to go roller skating (my first time on roller skates), then had dinner @ Carrabbas. It was definitely a good day (at least for me) 🙂

Fedora 13 *installed*

I finally got Fedora 13 installed after having some issues with my software RAID partition last night.

I have 3 250GB drives. Two of them were software-RAIDed in a RAID 1 configuration holding /home. The third is where the OS lives. I ignored the error message anaconda presented me (described in the post below), since I could still use /dev/sda, which is where I wanted to install anyway.

I chose Fresh Install from the menu, then Basic Storage Devices. I proceeded to choose the only hard drive that showed up and said I’d create my own custom partition layout. This part probably wasn’t necessary, but I wanted to make sure the partitions that existed were the ones on /dev/sda 🙂

Once the installation was done, I switched to runlevel 3 and logged in as root. Here’s what I did to recover my software RAID partitions and mount them as /home.

# mdadm --assemble --scan
# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdc1[1] sdb1[0]
      244195904 blocks [2/2] [UU]
      
unused devices: <none>
# mdadm --examine --scan >> /etc/mdadm.conf
# tail -n 1 /etc/mdadm.conf
ARRAY /dev/md0 UUID=93ea08fa:1a7ae881:f59ceb98:8b2b169f
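As a sanity check, the UUID in that ARRAY line should match what the array itself reports via mdadm --detail /dev/md0 (needs root, so not shown here). Pulling the UUID out of the line is a one-liner; a sketch, run against the line captured above rather than the live config:

```shell
# the ARRAY line that mdadm --examine --scan appended (quoted from above)
line='ARRAY /dev/md0 UUID=93ea08fa:1a7ae881:f59ceb98:8b2b169f'

# strip everything up to and including "UUID=" to isolate the UUID
uuid="${line##*UUID=}"
echo "$uuid"
```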

So far so good. But df -h didn’t show my LVM partition from the RAID device.
lvscan showed it as inactive:

# lvscan
  ACTIVE            '/dev/vg0/lvswap' [3.00 GiB] inherit
  ACTIVE            '/dev/vg0/lvroot' [214.69 GiB] inherit
  inactive          '/dev/vg1/lvhome' [232.88 GiB] inherit

OK, simple: I just need to activate it.

# vgchange -ay
  2 logical volume(s) in volume group "vg0" now active
  1 logical volume(s) in volume group "vg1" now active
# lvscan
  ACTIVE            '/dev/vg0/lvswap' [3.00 GiB] inherit
  ACTIVE            '/dev/vg0/lvroot' [214.69 GiB] inherit
  ACTIVE            '/dev/vg1/lvhome' [232.88 GiB] inherit
# ls /dev/mapper/
control  vg0-lvroot  vg0-lvswap  vg1-lvhome

Last thing to do is get /etc/fstab updated:

/dev/mapper/vg1-lvhome  /home                   ext4    defaults        1 1
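A small sketch of the same edit done programmatically — here against a scratch file rather than the real /etc/fstab, with an awk check that the entry has the six fields fstab expects:

```shell
# append the /home entry to a scratch copy instead of /etc/fstab directly
fstab="$(mktemp)"
echo '/dev/mapper/vg1-lvhome  /home  ext4  defaults  1 1' >> "$fstab"

# verify the new line parses into the six whitespace-separated fstab fields
awk 'NF == 6 { print "ok:", $1, "->", $2 }' "$fstab"

rm -f "$fstab"
```

Against the real file you’d back it up first (cp /etc/fstab /etc/fstab.bak) and run mount -a afterwards to confirm it parses.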

And for good measure, blow away the old user home dir from the install, and mount
the homedir:

# rm -rf /home/jmrodri
# mount /home
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg0-lvroot
                      212G  5.8G  195G   3% /
tmpfs                1005M     0 1005M   0% /dev/shm
/dev/sda2             190M   50M  131M  28% /boot
/dev/mapper/vg1-lvhome
                      230G   91G  128G  42% /home

Sweet, back in business. Let’s reboot and make sure everything’s good.
Everything was good upon reboot, except that logging in as jmrodri failed because it couldn’t find my home dir. I checked the permissions and they were fine. What could it be? Ponder. Run a series of commands. Google. Ponder some more. Google. SELinux! The new /home had the wrong SELinux context.

# restorecon -R /home

There, now we’re good.
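For next time, the mismatch is easy to spot before it bites: ls -Zd /home shows the actual context, and restorecon -Rnv /home (dry run) shows what policy thinks it should be. A sketch of the comparison — the context strings below are illustrative stand-ins, not values captured from my box:

```shell
# illustrative values only: what a freshly created /home might carry vs.
# what targeted policy expects (check for real with ls -Zd and restorecon -Rnv)
actual='system_u:object_r:default_t:s0'
expected='system_u:object_r:home_root_t:s0'

if [ "$actual" != "$expected" ]; then
  echo 'context mismatch: run restorecon -R /home'
fi
```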

Fedora 13 install a no go

Tonight I tried to install Fedora 13 over my old Fedora 11. The plan was to use /dev/sda1 as / and keep /home on my software RAID 1 partition (/dev/sdb1 and /dev/sdc1). But during the install I got an error message:

                 Warning

Disks sdb, sdc contain BIOS RAID metadata, but are not
part of any recognized BIOS RAID sets. Ignoring disks sdb,
sdc.

So I went through all of the CTRL-ALT-F{1-6} consoles to gather information.

CTRL-ALT-F1

Running anaconda 13.42, the Fedora system installer - please wait.
00:20:36 Starting graphical installation.
ERROR: sil: RAID type 253 not supported
ERROR: adding /dev/sdc to RAID set "sil_aiaicadebade"
ERROR: sil: RAID type 253 not supported
ERROR: adding /dev/sdb to RAID set "sil_aiaicadebade"
ERROR: no RAID set found

CTRL-ALT-F2

tail storage.log shows:

DEBUG storage: registered device format class LVMPhysicalVolume as lvmpv
DEBUG storage: registered device format class MDRaidMember as mdmember
DEBUG storage: registered device format class MultipathMember as multipath_member
DEBUG storage: registered device format class PPCPRePBoot as prepboot
DEBUG storage: registered device format class SwapSpace as swap
INFO storage: devices to scan for multipath: ['sda', 'sdb', 'sdc']
INFO storage: adding sda to singlepath_disks
INFO storage: adding sdb to singlepath_disks
INFO storage: adding sdc to singlepath_disks
INFO storage: devices post multipath scan: (['sda', 'sdb', 'sdc'], [], [])

fdisk -l shows:

Disk /dev/sda: 250.1 GB, 250... bytes
...
Device Boot    Start    End    Blocks  Id  System
/dev/sda1          1   1958             7  HPFS/NTFS
/dev/sda2       1959   1984            83  Linux
/dev/sda3       1984  30401            8e  Linux LVM

Disk /dev/sdb: 250.1 GB, 250... bytes
...
Device Boot    Start    End    Blocks  Id  System
/dev/sdb1          1  30401            fd  Linux raid autodetect

Disk /dev/sdc: 250.1 GB, 250... bytes
...
Device Boot    Start    End    Blocks  Id  System
/dev/sdc1          1  30401            fd  Linux raid autodetect

CTRL-ALT-F3
The last 5 lines show:

INFO storage: devices to scan for multipath: ['sda', 'sdb', 'sdc']
INFO storage: adding sda to singlepath_disks
INFO storage: adding sdb to singlepath_disks
INFO storage: adding sdc to singlepath_disks
INFO storage: devices post multipath scan: (['sda', 'sdb', 'sdc'], [], [])

CTRL-ALT-F4

DEBUG kernel:SELinux: ....
NOTICE kernel:type=1403 audit(1285633217.220;2): policy loaded auid=4294967295 ses=4294967295
INFO kernel:md: raid0 personality registered for level 0
INFO kernel:md: raid1 personality registered for level 1
INFO kernel:async_tx: api initialized (async)
INFO kernel:xor: automatically using best checksumming function: generic_sse
...
WARN kernel:raid6: int64x1  2117 MB/s
WARN kernel:raid6: int64x2  2378 MB/s
WARN kernel:raid6: int64x4  1835 MB/s
WARN kernel:raid6: int64x8  1371 MB/s
WARN kernel:raid6: sse2x1   2429 MB/s
WARN kernel:raid6: sse2x2   3335 MB/s
WARN kernel:raid6: sse2x4   3812 MB/s
INFO kernel:md: raid6 personality registered for level 6
INFO kernel:md: raid5 personality registered for level 5
INFO kernel:md: raid4 personality registered for level 4
INFO kernel:md: raid10 personality registered for level 10
INFO kernel:md: linear personality registered for level -1

Anyone have any thoughts? This setup works fine in Fedora 11. How can I get past this error so I can mount /home on my software RAID partitions?