Openfiler Will Not Map iSCSI Luns to iSCSI Targets (Part 2)

In my previous post with this title, I went through installing a script to rebuild the openfiler volumes.xml file on every reboot. This works great for iSCSI, but yesterday I added an NFS volume. On reboot, it was marked as iSCSI. Turns out that the script only works for iSCSI, even though it’s supposed to work for all the supported openfiler volume types.

First, a quick review of the problem.

  • My mdadm software raid array will not auto-assemble when called by the default lines in /etc/rc.sysinit.
  • Because of this, LVM does not activate the volumes.
  • Openfiler deletes the volumes.xml entries once it starts, since the volumes are gone.
  • When I manually start /dev/md0 with “mdadm --assemble --scan” and restart the openfiler service, my volumes do not reappear.
Here’s how I got everything working:
  • call mdadm a second time in /etc/rc.sysinit to get the array to assemble before LVM starts.
  • remove /etc/rc3.d/S85Openfiler so that openfiler doesn’t automatically start on reboot.
  • add a crontab entry to back up volumes.xml every minute.
  • edit /etc/rc.local to restore the backup volumes.xml and start the openfiler service.
The result? Everything works as advertised. Here’s how you do it.

The Process

1) Edits to /etc/rc.sysinit

Find the lines below in your /etc/rc.sysinit file, and add the lines that say “added by JP”.

# Start any MD RAID arrays that haven't been started yet
[ -r /proc/mdstat -a -r /dev/md/md-device-map ] && /sbin/mdadm -IRs

if [ -x /sbin/lvm ]; then
        export LVM_SUPPRESS_LOCKING_FAILURE_MESSAGES=1
        action $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a y --ignorelockingfailure --ignoremonitoring
        unset LVM_SUPPRESS_LOCKING_FAILURE_MESSAGES
fi

#This section added by JP 08/19/11 to fix software raid not auto-assembling.
/sbin/mdadm --assemble --scan
#If you notice that a specific volume group won't activate properly add this line too:
#You can diagnose this with 'lvscan' then 'vgscan'
#/sbin/vgchange -ay VGName

if [ -f /etc/crypttab ]; then
    init_crypto 0
fi

if [ -f /fastboot ] || strstr "$cmdline" fastboot ; then
        fastboot=yes
fi

2) rc3.d Changes

  1. rm /etc/rc3.d/S85Openfiler
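
If you'd rather keep the symlink around, renaming it with the SysV "kill" prefix should work just as well and is easy to undo (I went with plain rm, but here's the idea):

    # reversible alternative: a K-prefixed script is never started at this runlevel
    mv /etc/rc3.d/S85Openfiler /etc/rc3.d/K85Openfiler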

3) Crontab Additions

  1. crontab -e
  2. add the following line:
    * * * * * cp -f /opt/openfiler/etc/volumes.xml /root/volumes.xml.DONOTDELETE.bak
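
A slightly more defensive variant of that line (optional; this assumes volumes.xml holds each volume as a <volume .../> element) only refreshes the backup when the file still contains at least one volume entry, so a freshly wiped volumes.xml can never clobber a good backup:

    # only back up when the file still lists at least one volume
    * * * * * grep -q '<volume' /opt/openfiler/etc/volumes.xml && cp -f /opt/openfiler/etc/volumes.xml /root/volumes.xml.DONOTDELETE.bak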

4) /etc/rc.local Additions

Here is my /etc/rc.local:

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

#reactivate lvm volumes
###if your LV's don't come online you might need the lines below:
#lvchange -Ay /dev/VGName/LVName
#lvchange -Ay /dev/VGName/LVName
#lvchange -Ay /dev/VGName/LVName
#restore volumes in the Openfiler GUI
cp /root/volumes.xml.DONOTDELETE.bak /opt/openfiler/etc/volumes.xml

#restart iscsi-target service because it doesn't work right for some reason at this point
service iscsi-target restart

#restart openfiler service with all volumes
service openfiler start
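
After a reboot you can sanity-check each link in the chain by hand; something like this (adjust device and path names to match yours):

cat /proc/mdstat          # did /dev/md0 assemble?
lvscan                    # are the LVs ACTIVE?
diff /root/volumes.xml.DONOTDELETE.bak /opt/openfiler/etc/volumes.xml   # was the backup restored?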

After all this, my openfiler box is now working properly with NFS :). Good luck!


Openfiler Will Not Map iSCSI Luns to iSCSI Targets

NOTE: I found a much better way to do this.

See my new post: Openfiler Will Not Map iSCSI Luns to iSCSI Targets (Part 2). Use the directions below at your own risk! The solution below doesn’t fix the problem for NFS, btrfs, ext2/3/4, or xfs and can possibly lead to data loss.

Last night I was putting the finishing touches on my new Openfiler 2.99 box. One strange thing: although I could see my volumes in the ‘Volume Management’ page, clicking ‘Map’ in the iSCSI target’s Lun Mapping page would just refresh the page without any changes.

In my case, running lvscan showed that the volumes were ‘inactive’. Changing them to ‘active’ by running ‘lvchange -ay /dev/vol/volname’ allowed me to map again! Sadly, rebooting changed them back to ‘inactive’, probably since they’re on a software raid volume that isn’t automatically assembling on boot.

So, back to editing rc.local.

Solution

  1. run lvscan to find your inactive volumes.
  2. vi /etc/rc.local and add the following lines (change to your /dev/vg/volume names though):
    service openfiler stop
    #reactivate lvm volumes
    lvchange -ay /dev/jetstor/axisdisk
    lvchange -ay /dev/jetstor/dpmbackups
    lvchange -ay /dev/jetstor/desktopimages
    
    #reimport openfiler volumes
    /root/remake_vol_info2
    
    service openfiler start
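
For reference, lvscan output looks roughly like this before and after the change (volume names here are mine; yours will differ):

    # before:
    #   inactive          '/dev/jetstor/axisdisk' [...] inherit
    # after 'lvchange -ay /dev/jetstor/axisdisk':
    #   ACTIVE            '/dev/jetstor/axisdisk' [...] inherit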
    

One important note! You might need the remake_vol_info2 script from my previous post here: Openfiler Software Raid Volumes Disappear on Reboot. Because of the software raid problem, my final /etc/rc.local looks like this:

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

#stop openfiler service to make changes to storage subsystem
service openfiler stop

#recreate software raid volumes on JetStor
mdadm -A -s

#recreate volumes in the Openfiler GUI
/root/remake_vol_info2

#reactivate lvm volumes
lvchange -ay /dev/jetstor/axisdisk
lvchange -ay /dev/jetstor/dpmbackups
lvchange -ay /dev/jetstor/desktopimages

#restart openfiler service with all volumes
service openfiler start
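
If the volumes still come up inactive after this, two quick checks right after boot usually show where the chain broke (just my debugging habit):

cat /proc/mdstat              # did the array assemble?
lvscan | grep -i inactive     # lists anything LVM left inactive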

Using MegaCli to Monitor Openfiler (rev2)

There were a few problems with the last post on monitoring LSI on Openfiler. First of all, it wouldn’t send any useful data in the email alerts! For example, it would report that the online disks were ‘online’, but it wouldn’t show any of the offline disks. I was able to find some scripts and modify them to be more useful. Most of this code is from a project called megaraidsas-status, which hasn’t been updated in a while and didn’t work out of the box. I found the project here: HWRaid – LSI MegaRAID SAS.

Configuration Steps

  1. create a file /root/lsi-raidinfo with the following contents:
    #!/usr/bin/python
    
    # megaclisas-status 0.6
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with Pulse 2; if not, write to the Free Software
    # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
    # MA 02110-1301, USA.
    #
    # Copyright (C) 2007-2009 Adam Cecile (Le_Vert)
    
    ## modified by johnpuskar@gmail.com 08/14/11
    # fixed for LSI 9285-8e on Openfiler
    
    import os
    import re
    import sys
    
    if len(sys.argv) > 2:
        print 'Usage: megaraid-status [-d]'
        sys.exit(1)
    
    # if argument -d, only print disk info
    printarray = True
    printcontroller = True
    if len(sys.argv) > 1:
        if sys.argv[1] == '-d':
            printarray = False
            printcontroller = False
        else:
            print 'Usage: megaraid-status [-d]'
            sys.exit(1)
    
    # Get command output
    def getOutput(cmd):
        output = os.popen(cmd)
        lines = []
        for line in output:
            if not re.match(r'^$',line.strip()):
                lines.append(line.strip())
        return lines
    
    def returnControllerNumber(output):
        for line in output:
            if re.match(r'^Controller Count.*$',line.strip()):
                return int(line.split(':')[1].strip().strip('.'))

    def returnControllerModel(output):
        for line in output:
            if re.match(r'^Product Name.*$',line.strip()):
                return line.split(':')[1].strip()

    def returnArrayNumber(output):
        i = 0
        for line in output:
            if re.match(r'^Virtual Disk.*$',line.strip()):
                i += 1
        return i
    
    def returnArrayInfo(output,controllerid,arrayid):
        id = 'c'+str(controllerid)+'u'+str(arrayid)
        # print 'DEBUG: id = '+str(id)
        operationlinennumber = False
        linenumber = 0
        for line in output:
            if re.match(r'^RAID Level.*$',line.strip()):
                type = 'RAID'+line.strip().split(':')[1].split(',')[0].split('-')[1].strip()
                # print 'debug: type = '+str(type)
            if re.match(r'^Size.*$',line.strip()):
                # Size reported in MB
                if re.match(r'^.*MB$',line.strip().split(':')[1]):
                    size = line.strip().split(':')[1].strip('MB').strip()
                    size = str(int(round((float(size) / 1000))))+'G'
                # Size reported in TB
                elif re.match(r'^.*TB$',line.strip().split(':')[1]):
                    size = line.strip().split(':')[1].strip('TB').strip()
                    size = str(int(round((float(size) * 1000))))+'G'
                # Size reported in GB (default)
                else:
                    size = line.strip().split(':')[1].strip('GB').strip()
                    size = str(int(round((float(size)))))+'G'
            if re.match(r'^State.*$',line.strip()):
                state = line.strip().split(':')[1].strip()
            if re.match(r'^Ongoing Progresses.*$',line.strip()):
                operationlinennumber = linenumber
            linenumber += 1
        # check once, after the loop, whether an operation (rebuild etc.) is running
        if operationlinennumber:
            inprogress = output[operationlinennumber+1]
        else:
            inprogress = 'None'
        return [id,type,size,state,inprogress]
    
    def returnDiskInfo(output,controllerid,currentarrayid):
        arrayid = False
        oldarrayid = False
        olddiskid = False
        table = []
        state = 'Offline'
        model = 'Unknown'
        enclnum = 'Unknown'
        slotnum = 'Unknown'
        enclsl = 'Unknown'
    
        firstDisk = True
        for line in output:
            if re.match(r'Firmware state: .*$',line.strip()):
                state = line.split(':')[1].strip()
            if re.match(r'Slot Number: .*$',line.strip()):
                slotnum = line.split(':')[1].strip()
            if re.match(r'Inquiry Data: .*$',line.strip()):
                model = line.split(':')[1].strip()
                model = re.sub(' +', ' ', model)
            if re.match(r'Enclosure Device ID: [0-9]+$',line.strip()):
                enclnum = line.split(':')[1].strip()
                if firstDisk == True:
                    firstDisk = False
                else:
                    enclsl = str(enclnum)+':'+str(slotnum)
                    table.append([str(enclsl), model, state])
        # Last disk of last array
        enclsl = str(enclnum)+':'+str(slotnum)
        table.append([str(enclsl), model, state])
        arraytable = []
        for disk in table:
            arraytable.append(disk)
        return arraytable
    
    cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -adpCount -NoLog'
    output = getOutput(cmd)
    controllernumber = returnControllerNumber(output)
    
    bad = False
    
    # List available controller
    if printcontroller:
        print '-- Controllers --'
        print '-- ID | Model'
        controllerid = 0
        while controllerid < controllernumber:
            cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -a'+str(controllerid)+' -NoLog'
            output = getOutput(cmd)
            controllermodel = returnControllerModel(output)
            print 'c'+str(controllerid)+' | '+controllermodel
            controllerid += 1
        print ''
    
    if printarray:
        controllerid = 0
        print '-- Volumes --'
        print '-- ID | Type | Size | Status | InProgress'
        # print 'controller number'+str(controllernumber)
        while controllerid < controllernumber:
            arrayid = 0
            cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -lall -a'+str(controllerid)+' -NoLog'
            output = getOutput(cmd)
            arraynumber = returnArrayNumber(output)
            # print 'array number'+str(arraynumber)
            while arrayid < arraynumber:
                cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -l'+str(arrayid)+' -a'+str(controllerid)+' -NoLog'
                # print 'DEBUG: running '+str(cmd)
                output = getOutput(cmd)
                # print 'DEBUG: output '+str(output)
                arrayinfo = returnArrayInfo(output,controllerid,arrayid)
                print 'volinfo: '+arrayinfo[0]+' | '+arrayinfo[1]+' | '+arrayinfo[2]+' | '+arrayinfo[3]+' | '+arrayinfo[4]
                if not arrayinfo[3] == 'Optimal':
                    bad = True
                arrayid += 1
            controllerid += 1
        print ''
    
    print '-- Disks --'
    print '-- Encl:Slot | Model | Status'
    
    controllerid = 0
    while controllerid < controllernumber:
        arrayid = 0
        cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -lall -a'+str(controllerid)+' -NoLog'
        output = getOutput(cmd)
        arraynumber = returnArrayNumber(output)
        while arrayid < arraynumber:
            # grab disk info for this arrayid
            #cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -LdPdInfo -a'+str(controllerid)+' -NoLog'
            cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -PDList -a'+str(controllerid)+' -NoLog'
            #print 'debug: running '+str(cmd)
            output = getOutput(cmd)
            arraydisk = returnDiskInfo(output,controllerid,arrayid)
            for array in arraydisk:
                print 'diskinfo: '+array[0]+' | '+array[1]+' | '+array[2]
            arrayid += 1
        controllerid += 1
    
    if bad:
        print '\nThere is at least one disk/array in a NOT OPTIMAL state.'
        sys.exit(1)
    
  2. chmod 700 /root/lsi-raidinfo
  3. create a file /root/lsi-checkraid with the following contents:
    #!/usr/bin/python
    
    # created by johnpuskar@gmail.com on 08/14/11
    # rev 01
    
    import os
    import re
    import sys
    
    if len(sys.argv) > 1:
      print 'Usage: accepts stdin from lsi-raidinfo'
      sys.exit(1)
    
    blnBadDisk = False
    infile = sys.stdin
    for line in infile:
      #print 'DEBUG!! checking line:'+str(line)
      if re.match(r'diskinfo: .*$',line.strip()):
        if re.match(r'^((?!Online, Spun Up|Online, Spun down|Hotspare, Spun Up|Hotspare, Spun down).)*$',line.strip()):
          blnBadDisk = True
          #print 'DEBUG!! bad disk found!'
      if re.match(r'volinfo: ',line.strip()):
        if re.match(r'^((?!Optimal).)*$',line.strip()):
          #print 'DEBUG!! bad vol found!'
          blnBadDisk = True
    
    if blnBadDisk == True:
      print 'RAID ERROR'
    else:
      print 'RAID CLEAN'
    
  4. chmod 700 /root/lsi-checkraid
  5. create a file /root/lsi-emailalerts with the following contents
    #!/bin/sh
    
    #get raid status info
    /root/lsi-raidinfo > /tmp/lsi-raidinfo.txt
    cat /tmp/lsi-raidinfo.txt | /root/lsi-checkraid > /tmp/lsi-checkraid.txt
    
    #check raid status info
    if grep -qE "RAID ERROR" /tmp/lsi-checkraid.txt
    then
    /opt/MegaRAID/MegaCli/MegaCli64 -PdList -aALL > /tmp/lsi-megaCLIdump.txt
    cat /tmp/lsi-raidinfo.txt | mailx -s "Warning: HOSTNAME failed disk or degraded array" MAILTOADDR -r MAILFROMADDR -a /tmp/lsi-megaCLIdump.txt
    fi
    
    rm -f /tmp/lsi-megaCLIdump.txt
    rm -f /tmp/lsi-raidinfo.txt
    rm -f /tmp/lsi-checkraid.txt
    exit 0
    
  6. modify the mailx line in /root/lsi-emailalerts with the correct from/to/subject
  7. chmod 700 /root/lsi-emailalerts
  8. run crontab -e and add the following line:
    */5 * * * * /root/lsi-emailalerts
Now, when there’s a volume that isn’t ‘Optimal’, or a disk that isn’t ‘Hotspare’ or ‘Online, Spun Up’, you’ll get an email with useful info!
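
You can exercise the whole pipeline by hand before trusting cron with it (these are the scripts created above):

    /root/lsi-raidinfo                          # dump controllers, volumes, and disks
    /root/lsi-raidinfo | /root/lsi-checkraid    # prints RAID CLEAN or RAID ERROR
    /root/lsi-emailalerts                       # full run; only emails on RAID ERROR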

Example Email

subject: “Warning: HOSTNAME failed disk or degraded array”

attachment: lsi-megaCLIdump.txt

message:

-- Controllers --
-- ID | Model
c0 | LSI MegaRAID SAS 9285-8e

-- Volumes --
-- ID | Type | Size | Status | InProgress
volinfo: c0u0 | RAID6 | 57298G | Optimal | None

-- Disks --
-- Encl:Slot | Model | Status
diskinfo: 32:1 | SEAGATE ST33000650SS 0002Z2902LAT | Online, Spun down
diskinfo: 32:2 | SEAGATE ST33000650SS 0002Z2905B0E | Online, Spun down
diskinfo: 32:3 | SEAGATE ST33000650SS 0002Z2905ANS | Online, Spun down
diskinfo: 32:4 | SEAGATE ST33000650SS 0002Z2903DM0 | Online, Spun down
diskinfo: 32:5 | SEAGATE ST33000650SS 0002Z2905HE4 | Online, Spun down
diskinfo: 32:6 | SEAGATE ST33000650SS 0002Z2905B3F | Online, Spun down
diskinfo: 32:7 | SEAGATE ST33000650SS 0002Z2903DVH | Online, Spun down
diskinfo: 32:8 | SEAGATE ST33000650SS 0002Z2905B1W | Online, Spun down
diskinfo: 32:9 | SEAGATE ST33000650SS 0002Z29040GF | Online, Spun down
diskinfo: 32:10 | SEAGATE ST33000650SS 0002Z29032B5 | Online, Spun down
diskinfo: 32:11 | SEAGATE ST33000650SS 0002Z2905C23 | Online, Spun down
diskinfo: 32:12 | SEAGATE ST33000650SS 0002Z2904RMH | Online, Spun down
diskinfo: 32:13 | SEAGATE ST33000650SS 0002Z29035VM | Online, Spun down
diskinfo: 32:14 | SEAGATE ST33000650SS 0002Z2905H0C | Online, Spun down
diskinfo: 32:15 | SEAGATE ST33000650SS 0002Z29031SY | Online, Spun down
diskinfo: 32:16 | SEAGATE ST33000650SS 0002Z29031ZZ | Online, Spun down
diskinfo: 32:17 | SEAGATE ST33000650SS 0002Z2905AVN | Online, Spun down
diskinfo: 32:18 | SEAGATE ST33000650SS 0002Z2905DW9 | Online, Spun down
diskinfo: 32:19 | SEAGATE ST33000650SS 0002Z2905B2E | Online, Spun down
diskinfo: 32:20 | SEAGATE ST33000650SS 0002Z2903DP9 | Online, Spun down
diskinfo: 32:21 | SEAGATE ST33000650SS 0002Z2903YTQ | Online, Spun down
diskinfo: 32:22 | SEAGATE ST33000650SS 0002Z2906NEL | Online, Spun down
diskinfo: 32:23 | SEAGATE ST33000650SS 0002Z2906NMY | Online, Spun down
diskinfo: 32:24 | SEAGATE ST33000650SS 0002Z29035RL | Hotspare, Spun Up

Using MegaCLI to Monitor Openfiler

UPDATE — There’s a better way to do this. See my latest post here: Using MegaCLI to Monitor Openfiler (rev2).

There are quite a few posts around the ’net on Openfiler and MegaCLI. Why another one? I wanted to customize my scripts a bit differently, and none of the existing ones worked 100% for me. Most of the code is scavenged, but attributed where possible.

Installing MegaCLI

  1. First, find and download the latest MegaCLI supported by your controller on the LSI website.
  2. Extract the RPMs on your workstation and scp them to /root on your openfiler box (on Windows, WinSCP works).
  3. SSH to the openfiler box
  4. /root/rpm2cpio MegaCli-8.00.11-1.i386.rpm  | cpio -idmv
  5. /root/rpm2cpio Lib_Utils-1.00-05.noarch.rpm   | cpio -idmv
  6. Test with /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aAll
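
Optional, but the binary path is long enough that I like a symlink (pick whatever name and location suit you):

    ln -s /opt/MegaRAID/MegaCli/MegaCli64 /usr/local/sbin/megacli
    megacli -PDList -aAll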

Configuring MegaCLI and Cron

  1. vi /opt/MegaRAID/MegaCli/analysis.awk
  2. Add the following to this new file:
    # This is a little AWK program that interprets MegaCLI output
    #using two spaces at the beginning of each line for outlook -- see http://stackoverflow.com/questions/247546/outlook-autocleaning-my-line-breaks-and-screwing-up-my-email-format
    #ref: http://timjacobs.blogspot.com/2008/05/installing-lsi-logic-raid-monitoring.html
    /Device Id/ { counter += 1; device[counter] = $3 }
    /Firmware state/ { state_drive[counter] = $3 }
    /Inquiry/ { name_drive[counter] = $3 " " $4 " " $5 " " $6 }
    END {
    for (i=1; i<=counter; i++) printf "  Device ID: %s, Status: %s, Name: %s\n", device[i], state_drive[i], name_drive[i]
    }
  3. vi /root/raidstatus
  4. Add the following to this new file. NOTE: make sure to change SYSTEMNAME and the target/sender addresses.
    #!/bin/sh
    #ref: http://timjacobs.blogspot.com/2008/05/installing-lsi-logic-raid-monitoring.html
    /opt/MegaRAID/MegaCli/MegaCli64 -PdList -aALL > /tmp/SYSTEMNAME-megaCLIdump.txt
    cat /tmp/SYSTEMNAME-megaCLIdump.txt | awk -f /opt/MegaRAID/MegaCli/analysis.awk > /tmp/megarc.raidstatus
    
    #ref: https://forums.openfiler.com/viewtopic.php?id=4711 (fishguy; post 6)
    cd /opt/MegaRAID/MegaCli
    ./MegaCli64 -AdpAllInfo -aALL | grep "Degraded" > degraded.txt
    ./MegaCli64 -AdpAllInfo -aALL | grep "Failed Disks" > failed.txt
    a=$(cat degraded.txt)
    b=$(cat failed.txt)
    echo $a $b | grep "1" > dead.txt
    if [ $? -eq 0 ];
    then
    cat /tmp/megarc.raidstatus | mailx -s "Warning: SYSTEMNAME.chem failed disk or degraded array" alerts@email.com -r sender@email.com -a /tmp/SYSTEMNAME-megaCLIdump.txt
    fi
    
    rm -f /opt/MegaRAID/MegaCli/degraded.txt
    rm -f /opt/MegaRAID/MegaCli/failed.txt
    rm -f /opt/MegaRAID/MegaCli/dead.txt
    rm -f /tmp/SYSTEMNAME-megaCLIdump.txt
    rm -f /tmp/megarc.raidstatus
    exit 0
    
    
  5. chmod 700 /root/raidstatus
  6. Next, open the Openfiler GUI, click the “System” tab, then click “Notifications” in the right navigation panel.
  7. Enter your notifications (email) settings into this configuration page and click “save”. My emails from the command line wouldn’t send until I completed this step.
  8. Next run “crontab -e” and add the following line:
  9. */5 * * * * /root/raidstatus

Once complete, cron should run this script every 5 minutes and email you if any raid members are offline.
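
To test without waiting on cron, run the script directly with tracing; note that it deletes its temp files at the end, so comment out the rm lines while you're debugging:

    sh -x /root/raidstatus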


Building the LSI 9285-8e Driver for Openfiler 2.99

Update – The Chemistry Linux Wizard strikes again! Mr. Fabian asked me to post his latest version of the notes instead, which cleans up many of the steps. Enjoy!

This is a post of the notes our linux guy (Wizard Fabian) took while building the LSI MegaRaid 9285-8e driver for Openfiler 2.99. Once again, this is not my code or process; use it at your own risk. But it will surely help if you’re struggling with this card and Openfiler.

# Reference:
# http://forums.openfiler.com/viewtopic.php?id=6232

# Install kernel development tools (gcc):
conary update conary
conary update gcc

# Change to working directory
cd /usr/local/src

# Get RAID controller driver package
# Reference:
# http://www.lsi.com/products/storagecomponents/Pages/MegaRAIDSAS9285-8e.aspx

# Download driver package
wget http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/5.30_RHEL_Linux_Drivers.zip

# Unzip the package
mkdir temp
cd temp
unzip ../5.30_RHEL_Linux_Drivers.zip

# Note: It contains the module source file:
# megaraid_sas-v00.00.05.30-src.tgz

# Copy module source package to working directory
cp megaraid_sas-v00.00.05.30-src.tgz /usr/local/src
cd /usr/local/src
tar -zxvf megaraid_sas-v00.00.05.30-src.tgz

# Compile the module for the active kernel
cd megaraid_sas-v00.00.05.30-src
make -C /lib/modules/`uname -r`/build M=$PWD modules

# Note: Here's the new compiled module:
# /usr/local/src/megaraid_sas-v00.00.05.30-src/megaraid_sas.ko

# Note: Here's the old/current module:
# /lib/modules/2.6.32-71.18.1.el6-0.20.smp.gcc4.1.x86_64/kernel/drivers/scsi/megaraid/megaraid_sas.ko

# Remove/backup the old module:
rmmod megaraid_sas
cd /lib/modules/2.6.32-71.18.1.el6-0.20.smp.gcc4.1.x86_64/kernel/drivers/scsi/megaraid
mv megaraid_sas.ko megaraid_sas.ko.orig

# Install the new module:
cp /usr/local/src/megaraid_sas-v00.00.05.30-src/megaraid_sas.ko megaraid_sas.ko
chmod 644 megaraid_sas.ko
modprobe megaraid_sas

# Make sure the module is loaded:
lsmod | grep megaraid

# Can I see my disk array?
cat /proc/scsi/scsi
fdisk -l

# To make permanent, update initrd
cd /boot
mv initrd-2.6.32-71.18.1.el6-0.20.smp.gcc4.1.x86_64.img initrd-2.6.32-71.18.1.el6-0.20.smp.gcc4.1.x86_64.img.orig
mkinitrd initrd-$(uname -r).img $(uname -r)

# Reboot, and see if changes stick
shutdown -r now
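
# Optional sanity check after the reboot (my addition, not from Fabian's notes):
# confirm the loaded megaraid_sas is the freshly built version
modinfo megaraid_sas | grep -i version
dmesg | grep -i megaraid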

Openfiler Only Uses 95% of Physical Disk

I’m working through bugs in Openfiler 2.99.1. The latest? When I tried creating a new volume on the most recently added block device, it would only use 95% of the free space. My device was /dev/sdc, so I ran the following commands:


parted /dev/sdc
print
rm 1
mklabel gpt
mkpart primary 0 -0
set 1 lvm on
q
pvcreate /dev/sdc1

Once complete, Openfiler allowed me to create a volume group using the full disk!
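
Fair warning: the rm/mklabel steps above wipe the partition table, which is fine on a fresh device but not on one holding data. To confirm LVM now sees the whole device, a quick check (using my /dev/sdc1 from above):

pvs /dev/sdc1          # PSize should now match the disk's full capacity
parted /dev/sdc print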


Help! I deleted my ifcfg files!

I was trying to fix an ethernet problem on my openfiler installation and, per a forum suggestion, deleted my /etc/sysconfig/network-scripts/ifcfg* files without a backup (I’m new to linux and learning :).

The ifcfg files are all of a standard format and can be recreated manually according to the following structure:

DEVICE=eth0
MTU=1500
USERCTL=no
ONBOOT=yes
BOOTPROTO=static
IPADDR=xxx.xxx.xxx.xxx
NETMASK=xxx.xxx.xxx.xxx

Once created, use the following to bring the interface online:
ifup eth0
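
If you have more than one NIC, each one gets its own file (ifcfg-eth1, and so on). You can also bounce the whole network stack instead of a single interface:

service network restart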

Reference: CentOS ifcfg-eth0 config file deleted. Utility to recreate it?