This section describes known problems and limitations of the Linux Scripting Toolkit and provides workarounds for them.
Operating system installation halts after reboot when using LSI SAS RAID controller
Some combinations of LSI SAS
RAID controllers and operating systems might experience a system halt
after rebooting during an operating system installation. The affected
operating systems are:
- SLES 10
- SLES 11
- RHEL 5
- VMware 4
when used in combination with one of these RAID controllers:
- LSI-SAS-1078-IR
- LSI-SAS-(1064,1068)
- ServeRAID-BR10i
- ServeRAID-BR10ie
This problem occurs when the server has a drive that is
not part of a RAID array and is not configured as a hot-spare. The
problem is caused by the ordering of Linux mptsas
devices.
The following example depicts the problem. A system
has four drives with two configured in a RAID 1 array, one configured
as a hot-spare, and one outside the array. The BIOS sees the drive
outside the array, /dev/sda, as HDD1. The RAID
array, /dev/sdb, is treated as HDD0. The operating
system installation puts the boot files on /dev/sda,
the drive outside the array, but after the reboot, the installation
looks to HDD0 for the boot files.
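If you need to confirm how Linux has ordered the mptsas devices before choosing a workaround, you can inspect the device list from a rescue or preinstallation shell. This is a minimal sketch using standard Linux tools; the output and device names vary by system:
cat /proc/scsi/scsi
ls -l /sys/block/sd*/device
The first command lists the detected SCSI devices in kernel order; the second shows which SCSI target each /dev/sd device maps to.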
To work around this problem,
use one of these options:
- Do not configure RAID.
- Change the RAID configuration so that all drives are included
in a RAID array.
- Remove the drive outside the RAID array from the controller.
- Modify the boot order of the system to point to the drive outside
the array instead of the array.
UpdateXpress System Pack Installer returns errors when supported hardware is not present
Deployment tasks that include installation of UpdateXpress System Packs (UXSPs) return errors if the hardware supported by the UXSPs is not present in the target system. These errors can be safely ignored.
Missing files in USB key network deployment
When using a USB key that was previously used for local deployments as the boot device for a network Linux Scripting Toolkit deployment, you might receive errors due to missing files.
To perform network installations
with a key that has been used for local installations, manually remove
the sgdeploy directory from the key before creating
the boot media with Linux Scripting Toolkit.
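For example, if the key is mounted at /media/usbkey on a Linux workstation (the mount point is an assumption; substitute the actual mount point on your system), the directory can be removed with:
rm -rf /media/usbkey/sgdeploy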
Unattended Linux installation requests network device
When performing unattended Linux operating system installations, the process might pause to ask which network device to use if there are multiple devices available. To avoid this problem, you can add a kernel parameter to specify the desired network device during the workflow creation process.
In the OS install section of the workflow, a field is provided for optional kernel parameters. The kernel parameter varies by operating system:
- For Red Hat Linux and VMware: ksdevice=eth, where eth is the network device to use, for example eth0 or eth1.
- For SUSE Linux: netdevice=eth, where eth is the network device to use, for example eth0 or eth1.
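For example, to use the first onboard network device, the optional kernel parameters field would contain one of the following values (the device name eth0 is illustrative):
ksdevice=eth0 for Red Hat Linux and VMware
netdevice=eth0 for SUSE Linux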
Unattended file not found during installation of SLES on uEFI systems
When using Linux Scripting Toolkit to install SLES
on a uEFI based system, the installation task might be unable to find
the answer file, causing the installation to attempt to continue in
manual mode.
To resolve this issue, perform these steps:
- Edit the workflow for your installation.
- In the OS install section of the workflow,
add brokenmodules=usb_storage to the optional kernel
parameters.
- Save the workflow.
- Create bootable media from the workflow, and perform the installation.
- After the installation is complete, edit the file /etc/modules.d/blacklist.
It is recommended that you make a copy of this file before editing
it.
- Remove the line blacklist usb_storage.
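The last two steps can be performed from a shell after the installation completes. This is a minimal sketch that assumes the file path given above; it makes the recommended backup copy and then removes the blacklist entry:
cp /etc/modules.d/blacklist /etc/modules.d/blacklist.bak
sed -i '/^blacklist usb_storage/d' /etc/modules.d/blacklist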
This limitation affects the following systems:
- System x3400 M2, types 7836 and 7837
- System x3500 M2, type 7839
- System x3550 M2, types 7946 and 4198
- System x3650 M2, types 7947 and 4199
- System x iDataPlex dx360 M2, types 7321, 7323, and 6380
- BladeCenter HS22,
types 7870 and 1936
ServeRAID BR10i adapter not supported on iDataPlex dx360
M2 with 12 Bay Storage Chassis (Machine type 7321)
The ServeRAID
BR10i adapter is not supported on the iDataPlex dx360
M2 with 12 Bay Storage Chassis, machine type 7321.
RAID configuration fails for LSI SATA RAID
When performing RAID configuration to configure an LSI 1064/1064e SATA controller, you might receive error code 7 or 11. This error occurs when the cfggen utility is unable to remove or create a configuration on SATA drives larger than 250 GB.
To avoid this problem, remove any logical volumes, including RAID arrays, on the adapters by using the Ctrl + C menu during system POST before using Linux Scripting Toolkit.
Incorrect association of OS unattended files for SLES x64
During the OS Install step in the workflow creation process, the operating system repositories for SLES 10 x64 and SLES 11 x64 are associated with the 32-bit versions of the unattended files by default. This can either cause the installation to fail or cause the operating system to be installed without the correct packages.
To avoid this
potential problem, you must manually associate the correct operating
system unattended files with the operating system repositories when
creating a workflow to install
SLES 10 x64 or
SLES 11 x64. The
correct file associations are shown below.
Operating System | Unattended File Name
SUSE Linux Enterprise Server 10 x64 | sles10x64.xml
SUSE Linux Enterprise Server 10 x64 with Xen | sles10x64_xen.xml
SUSE Linux Enterprise Server 11 SP1/SP2/SP3/SP4 x64 | sles11x64.xml
SUSE Linux Enterprise Server 11 SP1/SP2/SP3/SP4 x64 with Xen | sles11x64_xen.xml
Default Fibre Configurations not supported on Emulex Fibre HBAs
The Target WWNN, Target WWPN, and LUN number Toolkit variables for the Fibre HBA must be set to configure the Primary, Alternate 1, Alternate 2, and Alternate 3 boot device settings. The default settings do not work on Emulex Fibre HBA adapters.
All values are case sensitive. Ensure that the configured values are identical to the values reported by the adapter, including case.
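For example (the WWPN value is hypothetical): if the adapter reports a target WWPN of 10:00:00:00:C9:6B:4A:2E, the configured value must be entered as 10:00:00:00:C9:6B:4A:2E; a value such as 10:00:00:00:c9:6b:4a:2e differs only in case and therefore does not match.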
ASU configuration fails for Load Defaults
When performing ASU configuration to load the system defaults, you might receive an error code of 45. This error is caused when the ASU utility is unable to load defaults for the ISCSI.InitiatorName setting.
This limitation affects the following systems:
- System x3200 M3, types 7327 and 7328
- System x3250 M3, types 4251, 4252, and 4261
- System x3400 M2, types 7836 and 7837
- System x3500 M2, type 7839
- System x3550 M2, types 7946 and 4198
- System x3650 M2, types 7947 and 4199
- System x iDataPlex dx360 M2, types 7321, 7323, and 6380
- BladeCenter HS22,
types 7870 and 1936
To avoid this problem, create a new
asu.ini file
with the following contents:
loaddefault uEFI
loaddefault SYSTEM_PROD_DATA
loaddefault BootOrder
loaddefault IMM
VMware ESX 4 installation requires a minimum of 4 GB of memory
When performing an installation of VMware ESX 4, ensure that the target system has a minimum of 4 GB of memory.
VMware ESX requires that NUMA system memory be balanced
VMware installations might fail to load the VMkernel when Non-Uniform Memory Access (NUMA) is enabled and each processor does not have memory in its adjoining memory banks.
VMware ESX Server 4.1 installation hangs at "Starting vmkernel initialization"
When installing VMware ESX Server 4.1 on a system with a MAX5 memory expansion module, the installation might hang on this screen. This issue can occur on the following systems:
- BladeCenter HX5,
type 7872
- System x3690 X5, types 7148, 7149
- System x3850 X5, type 7145
To avoid this problem, add the kernel parameter allowInterleavedNUMAnodes=TRUE during the Workflow Creation and OS installation task phases.
This deployment requires a new kickstart file. Create the new file by following these steps:
- Create a new OS installation task based on the esx4.ks kickstart
file.
- Modify the new task to add the necessary kernel parameter:
- Modify the line:
bootloader --location=mbr
to be:
bootloader --location=mbr --append="allowInterleavedNUMAnodes=TRUE"
- In the OS installation section of the workflow, a field is provided
for optional kernel parameters. Add the following value to this field:
allowInterleavedNUMAnodes=TRUE
uEFI operating system installations do not boot from hard drive
During native uEFI operating
system installations, the target system might fail to boot from the
hard drive after Linux Scripting Toolkit processes
are complete. This can occur if the target system does not automatically
boot the .efi file (bootx64.efi for Red Hat Enterprise Linux 6 or elilo.efi for SUSE Linux Enterprise Server 11 SP1/SP2/SP3/SP4) from the
drive.
The solution to this problem is dependent upon the operating
system. Consult the operating system information for instructions
about adding a new boot option entry for the .efi file.
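One alternative, if the target system can be booted into a Linux rescue environment, is to add the entry with the efibootmgr utility. This is a sketch only and is not part of the documented procedure; the disk, partition number, and loader path are assumptions that must be adjusted for your partition layout and operating system:
efibootmgr -c -d /dev/sda -p 1 -L "OS_Install" -l '\EFI\BOOT\bootx64.efi'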
For example, to correct this problem on most Lenovo systems, you can create a new boot entry for the .efi file and continue the installation using that option. Follow these steps to create a new boot entry for the .efi file:
- Power on the system, and press F1 to enter setup.
- Select Boot Manager.
- Select Add Boot Option.
- Select the boot entry that includes the string *.efi.
- Enter the description as OS_Install, and
select Commit Changes.
Follow these steps to continue the installation:
- Power on the system, and press F1 to enter
setup.
- Select Boot Manager.
- Select Boot from File.
- Select the GUID Partition Table (GPT) System Partition with the name OS_Install.
- Select EFI.
- Select Boot.
- Select the .efi file.
Note: If the installation completes and the system does not boot to the operating system, go to the Start Options section of the setup menu and select the boot entry for the operating system.