Openfiler Will Not Map iSCSI Luns to iSCSI Targets (Part 2)

In my previous post with this title, I went through installing a script to rebuild the Openfiler volumes.xml file on every reboot. This works great for iSCSI, but yesterday I added an NFS volume. On reboot, it was marked as iSCSI. It turns out the script only works for iSCSI, even though it's supposed to handle all of the supported Openfiler volume types.

First, a quick review of the problem.

  • My mdadm software RAID array will not auto-assemble when called by the default lines in /etc/rc.sysinit.
  • Because of this, LVM does not activate the volume.
  • Openfiler deletes the volumes.xml entry once it starts, since the volume is gone.
  • Even when I manually start /dev/md0 with "mdadm --assemble --scan" and restart the openfiler service, my volumes do not reappear.

Here's how I got everything working:

  • Call mdadm a second time in /etc/rc.sysinit so the array assembles before LVM starts.
  • Remove /etc/rc3.d/S85Openfiler so that openfiler doesn't automatically start on reboot.
  • Add a crontab entry to back up volumes.xml every minute.
  • Edit /etc/rc.local to restore the backed-up volumes.xml and start the openfiler service.
The result? Everything works as advertised. Here’s how you do it.

The Process

1) Edits to /etc/rc.sysinit

Find the lines below in your /etc/rc.sysinit file, and add the lines that say “added by JP”.

# Start any MD RAID arrays that haven't been started yet
[ -r /proc/mdstat -a -r /dev/md/md-device-map ] && /sbin/mdadm -IRs

if [ -x /sbin/lvm ]; then
        action $"Setting up Logical Volume Management:" /sbin/lvm vgchange -a y --ignorelockingfailure --ignoremonitoring

#This section added by JP 08/19/11 to fix software raid not auto-assembling.
/sbin/mdadm --assemble --scan
#If you notice that a specific volume group won't activate properly add this line too:
#You can diagnose this with 'lvscan' then 'vgscan'
#/sbin/vgchange -ay VGName

if [ -f /etc/crypttab ]; then
    init_crypto 0

if [ -f /fastboot ] || strstr "$cmdline" fastboot ; then
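
Once the edit is in place and the box rebooted, the sanity check boils down to grepping /proc/mdstat for an active array. A minimal sketch (md0 and the sample capture below are made-up examples, not output from my box):

```shell
# check_md_active: succeed if the named md device shows as active in
# mdstat-style input (a sketch; adjust the device name for your array).
check_md_active() {
    grep -q "^$1 : active" "$2"
}

# Demo against a canned /proc/mdstat capture:
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      976630336 blocks [2/2] [UU]
EOF
check_md_active md0 /tmp/mdstat.sample && echo "md0 assembled"
```

On the real box you'd point it at /proc/mdstat itself, then run lvscan to confirm the logical volumes came up ACTIVE.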

2) rc3.d Changes

  1. rm /etc/rc3.d/S85Openfiler

3) Crontab Additions

  1. crontab -e
  2. add the following line:
    * * * * * cp -f /opt/openfiler/etc/volumes.xml /root/volumes.xml.DONOTDELETE.bak
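
Since the whole point of the backup is surviving the moment Openfiler wipes volumes.xml, one refinement worth considering (a sketch, not what the cron line above does) is skipping the copy when the source file is empty, so a freshly wiped file can never clobber a good backup:

```shell
# backup_volumes: copy src over dst only when src is non-empty.
backup_volumes() {
    src="$1"; dst="$2"
    [ -s "$src" ] && cp -f "$src" "$dst"
    return 0
}

# Demo with temp files standing in for the real Openfiler paths:
echo '<volumes/>' > /tmp/volumes.xml
backup_volumes /tmp/volumes.xml /tmp/volumes.xml.bak
: > /tmp/volumes.xml                 # simulate Openfiler truncating the file
backup_volumes /tmp/volumes.xml /tmp/volumes.xml.bak
cat /tmp/volumes.xml.bak             # the good copy survives
```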

4) /etc/rc.local Additions

Here is my /etc/rc.local:

# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

#reactivate lvm volumes
###if your LV's don't come online you might need the lines below:
#lvchange -Ay /dev/VGName/LVName
#lvchange -Ay /dev/VGName/LVName
#lvchange -Ay /dev/VGName/LVName
#restore volumes in the Openfiler GUI
cp /root/volumes.xml.DONOTDELETE.bak /opt/openfiler/etc/volumes.xml

#restart iscsi-target service because it doesn't work right for some reason at this point
service iscsi-target restart

#restart openfiler service with all volumes
service openfiler start

After all this, my Openfiler box is now working properly with NFS :). Good luck!


Openfiler Will Not Map iSCSI Luns to iSCSI Targets

NOTE: I found a much better way to do this.

See my new post: Openfiler Will Not Map iSCSI Luns to iSCSI Targets (Part 2). Use the directions below at your own risk! The solution below doesn't fix the problem for NFS, btrfs, ext2/3/4, or xfs, and can possibly lead to data loss.

Last night I was putting the finishing touches on my new Openfiler 2.99 box. One strange thing: although I could see my volumes on the 'Volume Management' page, clicking 'Map' on the iSCSI target's Lun Mapping page would just refresh the page without any changes.

In my case, running lvscan showed that the volumes were ‘inactive’. Changing them to ‘active’ by running ‘lvchange -ay /dev/vol/volname’ allowed me to map again! Sadly, rebooting changed them back to ‘inactive’, probably since they’re on a software raid volume that isn’t automatically assembling on boot.

So, back to editing rc.local.


  1. run lvscan to find your inactive volumes.
  2. vi /etc/rc.local and add the following lines (change to your /dev/vg/volume names though):
    service openfiler stop
    #reactivate lvm volumes
    lvchange -ay /dev/jetstor/axisdisk
    lvchange -ay /dev/jetstor/dpmbackups
    lvchange -ay /dev/jetstor/desktopimages
    #reimport openfiler volumes
    service openfiler start
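
Rather than hard-coding every logical volume, the lvscan output can be parsed so anything inactive gets reactivated automatically. A sketch (the sample lines mimic lvscan's format; the device paths are just examples):

```shell
# list_inactive: print the LV path from each 'inactive' lvscan line on stdin.
list_inactive() {
    sed -n "s/^ *inactive *'\(\/dev\/[^']*\)'.*/\1/p"
}

# Demo on canned lvscan output:
list_inactive <<'EOF'
  inactive          '/dev/jetstor/axisdisk' [500.00 GB] inherit
  ACTIVE            '/dev/jetstor/dpmbackups' [2.00 TB] inherit
EOF
# prints: /dev/jetstor/axisdisk
# in rc.local you would then do:
#   lvscan | list_inactive | while read lv; do lvchange -ay "$lv"; done
```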

One important note! You might need the remake_vol_info2 script from my previous post here: Openfiler Software Raid Volumes Disappear on Reboot. Because of the software raid problem, my final /etc/rc.local looks like this:

# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

#stop openfiler service to make changes to storage subsystem
service openfiler stop

#recreate software raid volumes on JetStor
mdadm -A -s

#recreate volumes in the Openfiler GUI

#reactivate lvm volumes
lvchange -ay /dev/jetstor/axisdisk
lvchange -ay /dev/jetstor/dpmbackups
lvchange -ay /dev/jetstor/desktopimages

#restart openfiler service with all volumes
service openfiler start

Using MegaCli to Monitor Openfiler (rev2)

There were a few problems with the last post on monitoring LSI on Openfiler. First of all, it wouldn't send any useful data in the email alerts! For example, it would report that the online disks were 'online', but it wouldn't show any of the offline disks. I was able to find some scripts and modify them to be more useful. Most of this code is from a project called megaraidsas-status, which hasn't been updated in a while and didn't work out of the box. I found the project here: HWRaid – LSI MegaRAID SAS.

Configuration Steps

  1. create a file /root/lsi-raidinfo with the following contents:
    #!/usr/bin/env python
    # megaclisas-status 0.6
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # GNU General Public License for more details.
    # You should have received a copy of the GNU General Public License
    # along with Pulse 2; if not, write to the Free Software
    # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
    # MA 02110-1301, USA.
    # Copyright (C) 2007-2009 Adam Cecile (Le_Vert)
    ## modified by 08/14/11
    # fixed for LSI 9285-8e on Openfiler
    import os
    import re
    import sys
    if len(sys.argv) > 2:
        print 'Usage: megaraid-status [-d]'
        sys.exit(1)
    # if argument -d, only print disk info
    printarray = True
    printcontroller = True
    if len(sys.argv) > 1:
        if sys.argv[1] == '-d':
            printarray = False
            printcontroller = False
        else:
            print 'Usage: megaraid-status [-d]'
            sys.exit(1)
    # Get command output
    def getOutput(cmd):
        output = os.popen(cmd)
        lines = []
        for line in output:
            if not re.match(r'^$',line.strip()):
                lines.append(line.strip())
        return lines
    def returnControllerNumber(output):
        for line in output:
            if re.match(r'^Controller Count.*$',line.strip()):
    	    return int(line.split(':')[1].strip().strip('.'))
    def returnControllerModel(output):
        for line in output:
            if re.match(r'^Product Name.*$',line.strip()):
    	    return line.split(':')[1].strip()
    def returnArrayNumber(output):
        i = 0
        for line in output:
    	if re.match(r'^Virtual Disk.*$',line.strip()):
                i += 1
        return i
    def returnArrayInfo(output,controllerid,arrayid):
        id = 'c'+str(controllerid)+'u'+str(arrayid)
        # print 'DEBUG: id = '+str(id)
        operationlinennumber = False
        linenumber = 0
        for line in output:
            if re.match(r'^RAID Level.*$',line.strip()):
                type = 'RAID'+line.strip().split(':')[1].split(',')[0].split('-')[1].strip()
                # print 'debug: type = '+str(type)
            if re.match(r'^Size.*$',line.strip()):
                # Size reported in MB
                if re.match(r'^.*MB$',line.strip().split(':')[1]):
                    size = line.strip().split(':')[1].strip('MB').strip()
                    size = str(int(round((float(size) / 1000))))+'G'
                # Size reported in TB
                elif re.match(r'^.*TB$',line.strip().split(':')[1]):
                    size = line.strip().split(':')[1].strip('TB').strip()
                    size = str(int(round((float(size) * 1000))))+'G'
                # Size reported in GB (default)
                else:
                    size = line.strip().split(':')[1].strip('GB').strip()
                    size = str(int(round((float(size)))))+'G'
            if re.match(r'^State.*$',line.strip()):
                state = line.strip().split(':')[1].strip()
            if re.match(r'^Ongoing Progresses.*$',line.strip()):
                operationlinennumber = linenumber
            linenumber += 1
        if operationlinennumber:
            inprogress = output[operationlinennumber+1]
        else:
            inprogress = 'None'
        return [id,type,size,state,inprogress]
    def returnDiskInfo(output,controllerid,currentarrayid):
        table = []
        state = 'Offline'
        model = 'Unknown'
        enclnum = 'Unknown'
        slotnum = 'Unknown'
        firstDisk = True
        for line in output:
            if re.match(r'Enclosure Device ID: [0-9]+$',line.strip()):
                # a new disk record starts here; flush the previous one
                if firstDisk == True:
                    firstDisk = False
                else:
                    enclsl = str(enclnum)+':'+str(slotnum)
                    table.append([str(enclsl), model, state])
                enclnum = line.split(':')[1].strip()
            if re.match(r'Slot Number: .*$',line.strip()):
                slotnum = line.split(':')[1].strip()
            if re.match(r'Firmware state: .*$',line.strip()):
                state = line.split(':')[1].strip()
            if re.match(r'Inquiry Data: .*$',line.strip()):
                model = line.split(':')[1].strip()
                model = re.sub(' +', ' ', model)
        # last disk of the last array
        enclsl = str(enclnum)+':'+str(slotnum)
        table.append([str(enclsl), model, state])
        return table
    cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -adpCount -NoLog'
    output = getOutput(cmd)
    controllernumber = returnControllerNumber(output)
    bad = False
    # List available controller
    if printcontroller:
        print '-- Controllers --'
        print '-- ID | Model'
        controllerid = 0
        while controllerid < controllernumber:
            cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -a'+str(controllerid)+' -NoLog'
            output = getOutput(cmd)
            controllermodel = returnControllerModel(output)
            print 'c'+str(controllerid)+' | '+controllermodel
            controllerid += 1
        print ''
    if printarray:
        controllerid = 0
        print '-- Volumes --'
        print '-- ID | Type | Size | Status | InProgress'
        # print 'controller number'+str(controllernumber)
        while controllerid < controllernumber:
            arrayid = 0
            cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -lall -a'+str(controllerid)+' -NoLog'
            output = getOutput(cmd)
            arraynumber = returnArrayNumber(output)
            while arrayid < arraynumber:
                cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -l'+str(arrayid)+' -a'+str(controllerid)+' -NoLog'
                output = getOutput(cmd)
                arrayinfo = returnArrayInfo(output,controllerid,arrayid)
                print 'volinfo: '+arrayinfo[0]+' | '+arrayinfo[1]+' | '+arrayinfo[2]+' | '+arrayinfo[3]+' | '+arrayinfo[4]
                if not arrayinfo[3] == 'Optimal':
                    bad = True
                arrayid += 1
            controllerid += 1
        print ''
    print '-- Disks --'
    print '-- Encl:Slot | Model | Status'
    controllerid = 0
    while controllerid < controllernumber:
        cmd = '/opt/MegaRAID/MegaCli/MegaCli64 -PDList -a'+str(controllerid)+' -NoLog'
        output = getOutput(cmd)
        arraydisk = returnDiskInfo(output,controllerid,0)
        for array in arraydisk:
            print 'diskinfo: '+array[0]+' | '+array[1]+' | '+array[2]
        controllerid += 1
    if bad:
        print '\nThere is at least one disk/array in a NOT OPTIMAL state.'
  2. chmod 700 /root/lsi-raidinfo
  3. create a file /root/lsi-checkraid with the following contents:
    #!/usr/bin/env python
    # created by on 08/14/11
    # rev 01
    import os
    import re
    import sys
    if len(sys.argv) > 1:
      print 'Usage: accepts stdin from lsi-raidinfo'
      sys.exit(1)
    blnBadDisk = False
    infile = sys.stdin
    for line in infile:
      #print 'DEBUG!! checking line:'+str(line)
      if re.match(r'diskinfo: .*$',line.strip()):
        if re.match(r'^((?!Online, Spun Up|Online, Spun down|Hotspare, Spun Up|Hotspare, Spun down).)*$',line.strip()):
          blnBadDisk = True
          #print 'DEBUG!! bad disk found!'
      if re.match(r'volinfo: ',line.strip()):
        if re.match(r'^((?!Optimal).)*$',line.strip()):
          #print 'DEBUG!! bad vol found!'
          blnBadDisk = True
    if blnBadDisk == True:
      print 'RAID ERROR'
    else:
      print 'RAID CLEAN'
  4. chmod 700 /root/lsi-checkraid
  5. create a file /root/lsi-emailalerts with the following contents
    #!/bin/sh
    #get raid status info
    /root/lsi-raidinfo > /tmp/lsi-raidinfo.txt
    cat /tmp/lsi-raidinfo.txt | /root/lsi-checkraid > /tmp/lsi-checkraid.txt
    #check raid status info
    if grep -qE "RAID ERROR" /tmp/lsi-checkraid.txt; then
        /opt/MegaRAID/MegaCli/MegaCli64 -PdList -aALL > /tmp/lsi-megaCLIdump.txt
        cat /tmp/lsi-raidinfo.txt | mailx -s "Warning: HOSTNAME failed disk or degraded array" MAILTOADDR -r MAILFROMADDR -a /tmp/lsi-megaCLIdump.txt
        rm -f /tmp/lsi-megaCLIdump.txt
    fi
    rm -f /tmp/lsi-raidinfo.txt
    rm -f /tmp/lsi-checkraid.txt
    exit 0
  6. modify the mailx line in /root/lsi-emailalerts with the correct from/to/subject
  7. chmod 700 /root/lsi-emailalerts
  8. run crontab -e and add the following line:
    */5 * * * * /root/lsi-emailalerts
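
Before trusting the cron job, it's worth confirming the pass/fail logic by hand. The volume check in lsi-checkraid reduces to "flag any volinfo line that isn't Optimal"; here is a self-contained shell sketch of that same test run on canned lines:

```shell
# check_vol: print RAID ERROR if a volinfo line is not Optimal, else RAID CLEAN.
check_vol() {
    echo "$1" | grep '^volinfo:' | grep -qv 'Optimal' \
        && echo 'RAID ERROR' || echo 'RAID CLEAN'
}

check_vol 'volinfo: c0u0 | RAID6 | 57298G | Optimal | None'    # RAID CLEAN
check_vol 'volinfo: c0u0 | RAID6 | 57298G | Degraded | None'   # RAID ERROR
```

The same idea applied to real output: run `/root/lsi-raidinfo | /root/lsi-checkraid` at a shell and confirm you see RAID CLEAN on a healthy box.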
Now, when a volume isn't 'Optimal', or a disk isn't 'Hotspare' or 'Online, Spun Up', you'll get an email with useful info!

Example Email

subject: “Warning: HOSTNAME failed disk or degraded array”

attachment: lsi-megaCLIdump.txt


— Controllers —
— ID | Model
c0 | LSI MegaRAID SAS 9285-8e

— Volumes —
— ID | Type | Size | Status | InProgress
volinfo: c0u0 | RAID6 | 57298G | Optimal | None

— Disks —
— Encl:Slot | Model | Status
diskinfo: 32:1 | SEAGATE ST33000650SS 0002Z2902LAT | Online, Spun down
diskinfo: 32:2 | SEAGATE ST33000650SS 0002Z2905B0E | Online, Spun down
diskinfo: 32:3 | SEAGATE ST33000650SS 0002Z2905ANS | Online, Spun down
diskinfo: 32:4 | SEAGATE ST33000650SS 0002Z2903DM0 | Online, Spun down
diskinfo: 32:5 | SEAGATE ST33000650SS 0002Z2905HE4 | Online, Spun down
diskinfo: 32:6 | SEAGATE ST33000650SS 0002Z2905B3F | Online, Spun down
diskinfo: 32:7 | SEAGATE ST33000650SS 0002Z2903DVH | Online, Spun down
diskinfo: 32:8 | SEAGATE ST33000650SS 0002Z2905B1W | Online, Spun down
diskinfo: 32:9 | SEAGATE ST33000650SS 0002Z29040GF | Online, Spun down
diskinfo: 32:10 | SEAGATE ST33000650SS 0002Z29032B5 | Online, Spun down
diskinfo: 32:11 | SEAGATE ST33000650SS 0002Z2905C23 | Online, Spun down
diskinfo: 32:12 | SEAGATE ST33000650SS 0002Z2904RMH | Online, Spun down
diskinfo: 32:13 | SEAGATE ST33000650SS 0002Z29035VM | Online, Spun down
diskinfo: 32:14 | SEAGATE ST33000650SS 0002Z2905H0C | Online, Spun down
diskinfo: 32:15 | SEAGATE ST33000650SS 0002Z29031SY | Online, Spun down
diskinfo: 32:16 | SEAGATE ST33000650SS 0002Z29031ZZ | Online, Spun down
diskinfo: 32:17 | SEAGATE ST33000650SS 0002Z2905AVN | Online, Spun down
diskinfo: 32:18 | SEAGATE ST33000650SS 0002Z2905DW9 | Online, Spun down
diskinfo: 32:19 | SEAGATE ST33000650SS 0002Z2905B2E | Online, Spun down
diskinfo: 32:20 | SEAGATE ST33000650SS 0002Z2903DP9 | Online, Spun down
diskinfo: 32:21 | SEAGATE ST33000650SS 0002Z2903YTQ | Online, Spun down
diskinfo: 32:22 | SEAGATE ST33000650SS 0002Z2906NEL | Online, Spun down
diskinfo: 32:23 | SEAGATE ST33000650SS 0002Z2906NMY | Online, Spun down
diskinfo: 32:24 | SEAGATE ST33000650SS 0002Z29035RL | Hotspare, Spun Up

Using MegaCLI to Monitor Openfiler

UPDATE — There’s a better way to do this. See my latest post here: Using MegaCLI to Monitor Openfiler (rev2).

There are quite a few posts around the 'net on Openfiler and MegaCLI. Why another one? I wanted to customize my scripts a bit differently, and none of them worked 100% for me. Most of the code is scavenged, but attributed where possible.

Installing MegaCLI

  1. First, find and download the latest MegaCLI supported by your controller on the LSI website.
  2. Extract the RPMs on your workstation and scp them to /root on your Openfiler box (on Windows, WinSCP works).
  3. SSH to the openfiler box
  4. /root/rpm2cpio MegaCli-8.00.11-1.i386.rpm  | cpio -idmv
  5. /root/rpm2cpio Lib_Utils-1.00-05.noarch.rpm   | cpio -idmv
  6. Test with /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL

Configuring MegaCLI and Cron

  1. vi /opt/MegaRAID/MegaCli/analysis.awk
  2. Add the following to this new file:
    # This is a little AWK program that interprets MegaCLI output
    #using two spaces at the beginning of each line for outlook -- see
    /Device Id/ { counter += 1; device[counter] = $3 }
    /Firmware state/ { state_drive[counter] = $3 }
    /Inquiry/ { name_drive[counter] = $3 " " $4 " " $5 " " $6 }
    END {
        for (i = 1; i <= counter; i += 1)
            printf "  Device %s: %s %s\n", device[i], name_drive[i], state_drive[i]
    }
  3. vi /root/raidstatus
  4. Add the following to this new file. NOTE: make sure to change SYSTEMNAME, and the target/sender addresses.
    #!/bin/bash
    /opt/MegaRAID/MegaCli/MegaCli64 -PdList -aALL > /tmp/SYSTEMNAME-megaCLIdump.txt
    cat /tmp/SYSTEMNAME-megaCLIdump.txt | awk -f /opt/MegaRAID/MegaCli/analysis.awk > /tmp/megarc.raidstatus
    #ref: (fishguy; post 6)
    cd /opt/MegaRAID/MegaCli
    ./MegaCli64 -AdpAllInfo -aALL | grep "Degraded" > degraded.txt
    ./MegaCli64 -AdpAllInfo -aALL | grep "Failed Disks" > failed.txt
    a=`cat degraded.txt`
    b=`cat failed.txt`
    echo $a $b | grep "1" > dead.txt
    if [[ $? -eq 0 ]]; then
        cat /tmp/megarc.raidstatus | mailx -s "Warning: SYSTEMNAME.chem failed disk or degraded array" MAILTOADDR -r MAILFROMADDR -a /tmp/SYSTEMNAME-megaCLIdump.txt
    fi
    rm -f /opt/MegaRAID/MegaCli/degraded.txt
    rm -f /opt/MegaRAID/MegaCli/failed.txt
    rm -f /opt/MegaRAID/MegaCli/dead.txt
    rm -f /tmp/SYSTEMNAME-megaCLIdump.txt
    rm -f /tmp/megarc.raidstatus
    exit 0
  5. chmod 700 /root/raidstatus
  6. Next, open the Openfiler GUI, click the “System” tab, then click “Notifications” in the right navigation panel.
  7. Enter your notifications (email) settings into this configuration page and click “save”. My emails from the command line wouldn’t send until I completed this step.
  8. Next run "crontab -e" and add the following line:
  9. */5 * * * * /root/raidstatus

Once complete, cron should run this script every 5 minutes and email you if any RAID members are offline.
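
The awk filter itself can be dry-run on canned MegaCli output before pointing it at real hardware. A condensed sketch of the same field extraction (the sample record is made up, and the name/state fields are shortened for the demo):

```shell
# A fake -PdList record to exercise the awk field positions:
cat > /tmp/pdlist.sample <<'EOF'
Device Id: 4
Inquiry Data: SEAGATE ST33000650SS
Firmware state: Online, Spun Up
EOF
awk '/Device Id/ { counter += 1; device[counter] = $3 }
     /Firmware state/ { state_drive[counter] = $3 " " $4 " " $5 }
     /Inquiry/ { name_drive[counter] = $3 " " $4 }
     END { for (i = 1; i <= counter; i += 1)
               printf "  Device %s: %s %s\n", device[i], name_drive[i], state_drive[i] }' \
    /tmp/pdlist.sample
# prints:   Device 4: SEAGATE ST33000650SS Online, Spun Up
```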


Building the LSI 9285-8e Driver for Openfiler 2.99

Update – The Chemistry Linux Wizard strikes again! Mr. Fabian asked me to post his latest version of the notes instead, which cleans up many of the steps. Enjoy!

This is a post of the notes our Linux guy (Wizard Fabian) took while building the LSI MegaRaid 9285-8e driver for Openfiler 2.99. Once again, this is not my code or process, and use it at your own risk, but it will surely help if you're struggling with this card and Openfiler.

# Reference:

# Install kernel development tools (gcc):
conary update conary
conary update gcc

# Change to working directory
cd /usr/local/src

# Get RAID controller driver package
# Reference:

# Download driver package

# Unzip the package
mkdir temp
cd temp
unzip ../

# Note: It contains module source file

# Copy module source package to working directory
cp megaraid_sas-v00.00.05.30-src.tgz /usr/local/src
cd /usr/local/src
tar -zxvf megaraid_sas-v00.00.05.30-src.tgz

# Compile the module for the active kernel
cd megaraid_sas-v00.00.05.30-src
make -C /lib/modules/`uname -r`/build M=$PWD modules

# Note: Here's the new compiled module:

# Note: Here's the old/current module:

# Remove/backup the old module:
rmmod megaraid_sas
cd /lib/modules/2.6.32-71.18.1.el6-0.20.smp.gcc4.1.x86_64/kernel/drivers/scsi/megaraid
mv megaraid_sas.ko megaraid_sas.ko.orig

# Install the new module:
cp /usr/local/src/megaraid_sas-v00.00.05.30-src/megaraid_sas.ko megaraid_sas.ko
chmod 644 megaraid_sas.ko
modprobe megaraid_sas

# Make sure the module is loaded:
lsmod | grep megaraid

# Can I see my disk array?
cat /proc/scsi/scsi
fdisk -l

# To make permanent, update initrd
cd /boot
mv initrd-2.6.32-71.18.1.el6-0.20.smp.gcc4.1.x86_64.img initrd-2.6.32-71.18.1.el6-0.20.smp.gcc4.1.x86_64.img.orig
mkinitrd initrd-$(uname -r).img $(uname -r)

# Reboot, and see if changes stick
shutdown -r now
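
After the reboot, it's worth checking that the running driver is the one you built. `modinfo -F version megaraid_sas` prints just the version string; a sketch of the comparison (the version value comes from the tarball name above):

```shell
# check_driver: compare the reported driver version against the one compiled.
check_driver() {
    if [ "$1" = "$2" ]; then
        echo "driver OK: $1"
    else
        echo "MISMATCH: running $1, built $2"
    fi
}

# On the real box:  check_driver "$(modinfo -F version megaraid_sas)" 00.00.05.30
check_driver 00.00.05.30 00.00.05.30
# prints: driver OK: 00.00.05.30
```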

Openfiler Only Uses 95% of Physical Disk

I'm working through bugs in Openfiler 2.99.1. The latest? When I tried creating a new volume on the latest block device added, Openfiler only used 95% of the free space. My device was /dev/sdc, so I ran the following commands:

parted /dev/sdc
rm 1
mklabel gpt
mkpart primary 0 -0
set 1 lvm on
pvcreate /dev/sdc1

Once complete, it allowed me to create a volume group!



Openfiler Software Raid Volumes Disappear on Reboot

My software RAID volumes disappear on every reboot of my Openfiler box. For me running lvdisplay, pvdisplay, or pvscan had no results. However, fdisk -l showed my volumes. Openfiler configures software raid volumes with a package named mdadm, so I started there.

The following command will scan your system for RAID members that haven't been assembled:
mdadm --examine --scan

It should output a UUID number for any arrays it finds. To assemble your arrays for pvscan use the following:
mdadm -A -s
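
A common companion step (a sketch; the UUID shown is a made-up example) is persisting those scan results to /etc/mdadm.conf, since `mdadm -A -s` assembles whatever arrays that file lists:

```shell
# On the real box:  mdadm --examine --scan >> /etc/mdadm.conf
# The appended lines look like this (demo written to a scratch file):
echo 'ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371' > /tmp/mdadm.conf.demo
grep -c '^ARRAY' /tmp/mdadm.conf.demo
```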

On every reboot I still lose the volumes and get the error, “mdadm: cannot re-read metadata from /dev/.tmp-block-8:33 – aborting”. To resolve this (warning, this is a hack!):

  1. Copy the following code and save it as /root/remake_vol_info2
    #!/bin/sh
    # code taken from: post#28
    # strip the /mnt lines from fstab as we will be rebuilding them
    grep /mnt /etc/fstab -v > _fstab
    # create the new volumes.xml file
    echo -e "<?xml version=\"1.0\" ?>\n<volumes>" > _volumes.xml
    # find all logical volumes and loop
    for i in `lvdisplay | grep "LV Name" | sed 's/[^\/]*//'`; do
        fstype=`vol_id $i -t 2> /dev/null`
        mntpoint=`echo $i | sed 's/\/dev\//\/mnt\//'`/
        vgname=`echo $i | cut -d '/' -f3`
        volid=`echo $i | cut -d '/' -f4`
        if [ "$fstype" == "" ]; then
            # assume iscsi since filesystem is unknown
            fstype="iscsi"
        fi
        # extra per-filesystem mount options can be added to $args here
        args=""
        if [ $fstype != "iscsi" ]; then
            echo "$i $mntpoint $fstype defaults,usrquota,grpquota$args 0 0" >> _fstab
            # check this entry format against a known-good volumes.xml first
            echo "  <volume id=\"$volid\" name=\"$volid\" mountpoint=\"$mntpoint\" vg=\"$vgname\" fstype=\"$fstype\" />" >> _volumes.xml
            echo "Mounting $mntpoint"
            mkdir -p $mntpoint > /dev/null 2> /dev/null
            umount $mntpoint 2> /dev/null
            mount $mntpoint
        else
            echo "$i - assuming iSCSI"
            echo "  <volume id=\"$volid\" name=\"$volid\" mountpoint=\"$mntpoint\" vg=\"$vgname\" fstype=\"iscsi\" />" >> _volumes.xml
        fi
    done
    echo "</volumes>" >> _volumes.xml
    mv -f _fstab /etc/fstab
    mv -f _volumes.xml /opt/openfiler/etc/volumes.xml
    chown openfiler.openfiler /opt/openfiler/etc/volumes.xml
  2. chmod 700 /root/remake_vol_info2
  3. vi /etc/rc.local
  4. add the following:
    mdadm -A -s
    service openfiler restart
  5. Save, reboot, and hope for the best!