automated installations (10)

It's been a while since I wrote about the automated installations project and I feel it's time for an update :)

*female startrek voice* Last time on StarTrek Enterprise^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H Automated installations

I had made kulnet-postfix, kulnet-ntpdate and kulnet-openntpd packages. I was wondering why the postinstallation script hung when trying to restart the daemon. I still dunno why exactly, but I found that not loading the debconf functions in the postinst script fixes the problem.
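
For the record, such a postinst boils down to something like this (a sketch of the shape, not the actual package; my guess is that the restarted daemon inherits debconf's open file descriptors and that is what makes it hang):

#!/bin/sh
set -e
# Sourcing the debconf functions here is what made the restart hang,
# so the workaround is simply to leave this line out:
#. /usr/share/debconf/confmodule
/etc/init.d/postfix restart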

Since then, I have also made kulnet-root-account, kulnet-snmp, kulnet-ssh, kulnet-syslog-ng, kulnet-tsm and a brand new kulnet-watchdog package. Most of these packages either configure the underlying software correctly, hook it into the kulnet-watchdog, or both. The kulnet-watchdog package contains a new software watchdog written from scratch in Perl. The kulnet-tsm package is used to take daily backups to TSM (Tivoli Storage Manager) from IBM. For that one, I also had to create Debian packages from the RPMs that IBM provides.
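
Converting the RPMs themselves is the easy part, by the way; alien does it in one go (run it as root or under fakeroot; the filename here is made up, the real TSM RPMs have their own naming):

alien --to-deb --scripts TIVsm-client.i386.rpm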

I've currently run short on packages to add to the default installation, so I'm faced with the core of the automated installation again. Specifically, I want to be able to install Debian on a RAID 1 set. While the Sarge installer supports installing on RAID 1 manually, it does not let you do it from a preseeded config file (using partman-auto). The next generation debian-installer is supposed to be able to do this (in fact, I read a mail about exactly that on the mailing list just today).
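
Going by what I read, preseeding it in the new installer should look something along these lines (untested, and the syntax may well still change before Etch):

d-i partman-auto/method string raid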

Too bad I don't have the luxury to wait for Etch to become stable...

The most logical place to start is the Debian installer itself. Since it can already create and assemble RAID 1 sets manually, the code is there. Now I just need to figure out how to automate it.

The udeb package is called partman-md_20_all.udeb. Unpacking it shows that there is NO binary in it. Yay \o/ !
This means I can probably do this entire thing with whacko bash scripting.
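
A udeb is just a stripped-down Debian package, so the usual tools work on it:

dpkg-deb -x partman-md_20_all.udeb partman-md
dpkg-deb -e partman-md_20_all.udeb partman-md/DEBIAN
find partman-md -type f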

Sigh, again my enthusiasm was premature :(

There is an mdadm-udeb package that contains the binary needed for RAID 1.
However, I think I need to look at the very start of the partman suite first. That's why I'm checking out the partman udeb itself.
I've learned that it starts by executing all files in /lib/partman/init.d/. After that it loops until the user exits from choose_partition, and then executes all files in /lib/partman/commit.d.
Finally, all files in /lib/partman/finish.d are executed. That's where partman ends.
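
In other words, the whole flow boils down to something like this (my paraphrase, not the actual partman script):

# run every init script in order
for s in /lib/partman/init.d/*; do
    $s || exit $?
done
# main loop: keep presenting the partitioning menu
while true; do
    ask_user /lib/partman/choose_partition || break
done
# user is done: write out the changes, then clean up
for s in /lib/partman/commit.d/*; do $s; done
for s in /lib/partman/finish.d/*; do $s; done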

Onto the init.d of partman!

/lib/partman/init.d contains the following files which seem to do the following things:

01unsupported

This script seems to check whether the underlying system is known and supported. If not, an error is shown.

10umount_target

This script checks whether /target is mounted or not. If it's mounted, it is unmounted.

30parted

This script starts the parted server. [Note: in definitions.sh, DEVICES=/var/lib/partman/devices] Besides that, it also moves around a couple of directories, but I don't know where they come from or what they are supposed to represent. I guess each directory represents an existing partition.

35dump

This script dumps information about each partition into a logfile, probably for debugging purposes.

69no_media

This one checks whether any partitionable media exist and errors out if there aren't any.

70update_partitions

This script reads all known partitions and invokes the scripts in /lib/partman/update.d/ for each one of them (I've sketched what I think that dispatch looks like right after this list).

71filesystems_detected

This script does nothing but create an empty file called /var/lib/partman/filesystems_detected. Probably just a way to tell later scripts to be careful about existing partitions.

95backup

This script makes a backup of /var/lib/partman/devices, probably to have a pristine copy around before changing things that can get screwed up.
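
My bet is that the dispatch in 70update_partitions boils down to a helper like this (a reconstruction, assuming each update.d script is handed the device directory and a partition id as arguments):

update_partition () {
    local u
    for u in /lib/partman/update.d/*; do
        # every update.d script gets the same two arguments
        [ -x "$u" ] && "$u" "$1" "$2"
    done
}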



All of this seems to be just preparation.
The directory /var/lib/partman/devices seems to be a parallel status directory that keeps track of the partitions. I wouldn't be surprised if changes are made to this directory first, until the user is satisfied, and then all the changes are processed at the end.

The scripts called by 70update_partitions in /lib/partman/update.d are the following:


20bootable

This script checks whether a partition's bootable flag is set. It also updates the status dir.

20detected_filesystem

This one checks whether a partition already contains a filesystem and marks that in the status dir.

59default_visuals

This script fetches a human-readable description of the filesystem on the given partition and stores it in the status dir.

80visual

This script calls all scripts in /lib/partman/valid_visuals.d and parses their output. Each line of that output starts with one of the keywords "number", "type", "size", "name", "filesystem", "bootable", "method", "parted_fs" or "mountpoint". For each of these keywords a script /lib/partman/visual.d/[name] exists, which is called if that keyword appeared in the output. The output of THOSE scripts is then stored in a file called "view".
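
Pieced together, 80visual seems to amount to something like this (a sketch; $dev and $id are paths in the status directory, and I'm guessing at where exactly the "view" file ends up):

# collect the keyword lines from all valid_visuals.d scripts...
for s in /lib/partman/valid_visuals.d/*; do
    [ -x "$s" ] && "$s" "$dev" "$id"
done |
cut -f1 |
# ...and let the matching visual.d script render each column
while read keyword; do
    /lib/partman/visual.d/$keyword "$dev" "$id"
done >"$dev/$id/view"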



We just keep getting deeper and deeper into nested script calls...
I'll take the red pill and see how deep the rabbit hole goes </matrix>

/lib/partman/valid_visuals.d seems to contain these scripts:

05number

prints "number\tPartition number\n"

10type

prints "type\tType - primary or logical\n" if parted returns yes for USES_EXTENDED

15size

prints "size\tSize\n"

20bootable

prints "bootable\tThe bootable flag\n"

25method

prints "method\tUsage method: F - format K - keep and use existing data\n"

30parted_fs

prints "parted_fs\tThe file system as known by parted\n"

35filesystem

prints "filesystem\tFile system\n"

40name

prints "name\tName\n" if the partition uses named partitions

45mountpoint

prints "mountpoint\tMount point\n"



Most of these lines are printed unconditionally; only "type" and "name" depend on a test.
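
So a valid_visuals.d script is about as small as a script can get; 05number is presumably little more than this (the real one probably fetches the description through debconf so it can be translated):

#!/bin/sh
printf "number\tPartition number\n"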

The scripts in /lib/partman/visual.d are: bootable, filesystem, method, mountpoint, name, number, parted_fs, size and type.

I'm not gonna sum up what these scripts print. Suffice it to say that they look at the terminal type and print something visually pleasing. This output is probably used when showing the contents of a partition in partman.

In case you forgot, we're still at the end of /lib/partman/init.d...

At this point, the partman script loops over "ask_user /lib/partman/choose_partition" and asks to confirm changes when the user is done.

The ask_user function located in definitions.sh looks like this:


ask_user () {
    local IFS dir template priority default choices plugin name option
    dir="$1"; shift
    template=$(cat $dir/question)
    priority=$(cat $dir/priority)
    if [ -f $dir/default_choice ]; then
        default=$(cat $dir/default_choice)
    else
        default=""
    fi
    # Build the menu: every subdirectory of $dir is a plugin whose
    # choices script prints the options it offers, one per line.
    choices=$(
        for plugin in $dir/*; do
            [ -d $plugin ] || continue
            name=$(basename $plugin)
            IFS="$NL"
            for option in $($plugin/choices "$@"); do
                # prefix each option with the plugin name it came from
                printf "%s__________%s\n" $name "$option"
            done
            restore_ifs
        done
    )
    code=0
    debconf_select $priority $template "$choices" "$default" || code=$?
    if [ $code -ge 100 ]; then return 255; fi
    echo "$RET" >$dir/default_choice
    # split the answer back apart and run the chosen plugin's do_option
    $dir/${RET%__________*}/do_option ${RET#*__________} "$@" || return $?
    return 0
}


This is the first reference to plugins I've found so far.
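
If I read this right, a "plugin" is nothing more than a subdirectory containing a choices script and a do_option script, so a question directory would look roughly like this (the plugin name here is made up):

/lib/partman/choose_partition/
    question          # debconf template used for the menu
    priority
    55some_plugin/
        choices       # prints the menu entries this plugin offers
        do_option     # runs when one of those entries is picked

The "__________" string is just glue so ask_user can remember which plugin each menu entry came from.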