automated installations (10)

It's been a while since I wrote about the automated installations project and I feel it's time for an update :)

*female startrek voice* Last time on StarTrek Enterprise^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H Automated installations

I had made kulnet-postfix, kulnet-ntpdate and kulnet-openntpd packages. I was wondering why the post-installation script hung when trying to restart the daemon. I still don't know the root cause, but I found that not loading the debconf functions in the postinst script fixes the problem.

Since then, I have also made a kulnet-root-account, kulnet-snmp, kulnet-ssh, kulnet-syslog-ng, kulnet-tsm and a brand new kulnet-watchdog package. Most of these packages either just configure the underlying software correctly, or add it to the kulnet-watchdog, or both. The kulnet-watchdog package contains a new software watchdog written from scratch in Perl. The kulnet-tsm package is used to take daily backups to TSM (Tivoli Storage Manager) from IBM. I also had to create Debian packages from the RPMs that IBM provides.

I've currently run short on packages I can add to the default installation, so I'm faced with the core of the automated installation again. Specifically, I want to be able to install Debian on a RAID 1 set. While the Sarge installer supports installing on RAID 1, it does not allow you to do it with a preseeded config file (using partman-auto). The next-generation debian-installer is supposed to be able to do this (in fact, I read a mail on the mailing list about exactly that just today).
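For comparison, this is roughly what a partman-auto preseed fragment for a plain single-disk install looks like. The key names are from memory of the d-i preseeding docs, and the device path and recipe are made up for illustration, so treat this as a sketch rather than a working file:

```
d-i partman-auto/disk string /dev/discs/disc0/disc
d-i partman-auto/expert_recipe string boot-root ::      \
        500 10000 1000000000 ext3                       \
                method{ format } format{ }              \
                use_filesystem{ } filesystem{ ext3 }    \
                mountpoint{ / } .                       \
        64 512 300% linux-swap                          \
                method{ swap } format{ } .
d-i partman/confirm_write_new_label boolean true
d-i partman/choose_partition select Finish partitioning and write changes to disk
d-i partman/confirm boolean true
```

Nothing in this format lets you describe an md/RAID device, which is exactly the gap I'm running into.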

Too bad I don't have the luxury to wait for Etch to become stable...

The most logical place to start is in the debian installer itself. Since it has the capability to create and assemble RAID 1 sets manually, the code is already there. Now I just need to figure out how to automate it.

The udeb package is called partman-md_20_all.udeb. Unpacking it shows that there is NO binary in it. Yay \o/ !
This means I can probably do this entire thing with whacko bash scripting.

Sigh, again my enthusiasm was premature :(

There is an mdadm-udeb package that contains the binary to do RAID 1.
However, I think I need to look at the very start of the partman suite. That's why I'm checking out the partman udeb itself.
I've learned that it starts by executing all files in /lib/partman/init.d/. After that, it loops until the user exits from choose_partition, and then executes all files in /lib/partman/commit.d.
Finally, all files in /lib/partman/finish.d are executed. That's where partman ends.
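The flow I just described can be sketched as a shell script. The directory tree below is a temp-dir stand-in I build on the spot (the real one lives in the partman udeb), and the choose_partition loop is elided:

```shell
#!/bin/sh
# Sketch of partman's top-level flow as I understand it: run every script
# in init.d, loop on choose_partition until the user finishes, then run
# commit.d and finish.d.
set -e

PARTMAN=$(mktemp -d)
mkdir -p "$PARTMAN/init.d" "$PARTMAN/commit.d" "$PARTMAN/finish.d"

# Dummy numbered scripts, in the style of 30parted or 70update_partitions.
printf '#!/bin/sh\necho init\n'   > "$PARTMAN/init.d/10demo"
printf '#!/bin/sh\necho commit\n' > "$PARTMAN/commit.d/10demo"
printf '#!/bin/sh\necho finish\n' > "$PARTMAN/finish.d/10demo"
chmod +x "$PARTMAN"/*/10demo

run_phase () {
    # Execute every script in a phase directory, in glob order
    # (the numeric prefixes make that the intended order).
    for script in "$1"/*; do
        [ -x "$script" ] && "$script"
    done
}

RESULT=$(
    run_phase "$PARTMAN/init.d"
    # The real partman loops here on "ask_user choose_partition" until
    # the user selects "Finish"; that loop is elided in this sketch.
    run_phase "$PARTMAN/commit.d"
    run_phase "$PARTMAN/finish.d"
)
echo "$RESULT"

rm -rf "$PARTMAN"
```

The numeric prefixes on the filenames are what give the phases their ordering, which is why the glob loop is enough.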

Onto the init.d of partman!

/lib/partman/init.d contains the following files which seem to do the following things:


This script seems to check whether the underlying system is known and supported. If not, an error is shown.


This script checks whether /target is mounted or not. If it's mounted, it is unmounted.


This script starts the parted server. [Note: DEVICES=/var/lib/partman/devices] Besides that, it also moves around a couple of directories, but I don't know where they came from or what they are supposed to represent. I guess each directory represents an existing partition.


This script dumps information of each partition into a logfile. Probably for debugging purposes.


This one checks whether partitionable media exist and errors out if none do.


This script reads all known partitions and invokes the scripts in /lib/partman/update.d/ for each one of them.


This script does nothing but create an empty file called /var/lib/partman/filesystems_detected. Probably just a way to tell later scripts to be careful about existing partitions.


This script makes a backup of /var/lib/partman/devices. Probably to have a copy from before changing things that could get screwed up.

All of this seems to be just preparation.
The directory /var/lib/partman/devices seems to be a parallel status directory to keep track of the partitions. I wouldn't be surprised if changes are made to this directory first, until the user is satisfied, and then all the changes are processed at the end.
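The per-partition update hook mentioned above (run every update.d script once for each known partition) can be sketched like this. All paths are temp-dir stand-ins, and the dummy update.d script just records which partition it was handed, the way the real ones update bootable flags and filesystem info:

```shell
#!/bin/sh
# Sketch of what 70update_partitions appears to do: walk the partition
# status directory and run every script in /lib/partman/update.d for
# each partition found.
set -e

LIB=$(mktemp -d)    # stands in for /lib/partman
DEV=$(mktemp -d)    # stands in for /var/lib/partman/devices

mkdir -p "$LIB/update.d" "$DEV/disc0/part1" "$DEV/disc0/part2"

# A dummy update.d script that records which partition it was given.
printf '#!/bin/sh\necho "saw ${1##*/}"\n' > "$LIB/update.d/10demo"
chmod +x "$LIB/update.d/10demo"

UPDATES=$(
    for part in "$DEV"/disc0/*; do
        for script in "$LIB"/update.d/*; do
            [ -x "$script" ] && "$script" "$part"
        done
    done
)
echo "$UPDATES"

rm -rf "$LIB" "$DEV"
```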

The scripts called by 70update_partitions in /lib/partman/update.d are the following:


This script checks whether a partition's bootable flag is set. It also updates the status dir.


This one checks for existing partitions and marks them in the status dir.


This script fetches a human-readable description of the filesystem on the given partition and stores it in the status dir.


This script calls all scripts in /lib/partman/valid_visuals.d and parses their output. Each output line starts with one of the words "number", "type", "size", "name", "filesystem", "bootable", "method" or "mountpoint". For each of these words a script exists, /lib/partman/visual.d/[name], which is called if that word appears in the output of the previous scripts. The output of THESE scripts is then stored in a file called "view".

We just keep getting deeper and deeper into nested script calls...
I'll take the red pill and see how deep the rabbit hole goes </matrix>

/lib/partman/valid_visuals.d seems to contain these scripts:


prints "number\tPartition number\n"


prints "type\tType - primary or logical\n" if parted returns yes for USES_EXTENDED


prints "size\tSize\n"


prints "bootable\tThe bootable flag\n"


prints "method\tUsage method: F - format K - keep and use existing data\n"


prints "parted_fs\tThe file system as known by parted\n"


prints "filesystem\tFile system\n"


prints "name\tName\n" if the partition uses named partitions


prints "mountpoint\tMount point\n"

Most of these lines are always printed; only "type" and "name" depend on a test.
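The two-stage dispatch (valid_visuals.d prints "word<TAB>description", then visual.d/<word> is run and the result collected into "view") can be sketched like this, again with temp-dir stand-ins and made-up script contents:

```shell
#!/bin/sh
# Sketch of the valid_visuals.d -> visual.d dispatch: take the first
# (tab-separated) field of each valid_visuals.d output line and run the
# visual.d script of the same name, collecting the output in "view".
set -e

LIB=$(mktemp -d)    # stands in for /lib/partman
mkdir -p "$LIB/valid_visuals.d" "$LIB/visual.d"

# One valid_visuals.d script and its matching visual.d script.
printf '#!/bin/sh\nprintf "number\\tPartition number\\n"\n' \
    > "$LIB/valid_visuals.d/10number"
printf '#!/bin/sh\necho "  #1"\n' > "$LIB/visual.d/number"
chmod +x "$LIB"/valid_visuals.d/* "$LIB"/visual.d/*

# Run every valid_visuals.d script, keep the word before the tab, and
# dispatch to visual.d/<word> if it exists.
for v in "$LIB"/valid_visuals.d/*; do
    "$v"
done | cut -f1 | while read -r word; do
    [ -x "$LIB/visual.d/$word" ] && "$LIB/visual.d/$word"
done > "$LIB/view"

VIEW=$(cat "$LIB/view")
echo "$VIEW"
rm -rf "$LIB"
```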

The scripts in /lib/partman/visual.d are:

I'm not gonna sum up what these scripts print. Suffice it to say that they look at the terminal type and print something visually pleasing. This output is probably used when showing the contents of a partition in partman.

In case you forgot, we're still at the end of /lib/partman/init.d...

At this point, the partman script loops over "ask_user /lib/partman/choose_partition" and asks to confirm changes when the user is done.

The ask_user function looks like this:

ask_user () {
    local IFS dir template priority default choices plugin name option code
    dir="$1"; shift
    template=$(cat $dir/question)
    priority=$(cat $dir/priority)
    if [ -f $dir/default_choice ]; then
        default=$(cat $dir/default_choice)
    else
        default=''
    fi
    # Build the choices list: one "pluginname__________option" line per
    # option offered by each plugin subdirectory.
    choices=$(
        for plugin in $dir/*; do
            [ -d $plugin ] || continue
            name=$(basename $plugin)
            for option in $($plugin/choices "$@"); do
                printf "%s__________%s\n" $name "$option"
            done
        done
    )
    code=0
    debconf_select $priority $template "$choices" "$default" || code=$?
    if [ $code -ge 100 ]; then return 255; fi
    echo "$RET" >$dir/default_choice
    $dir/${RET%__________*}/do_option ${RET#*__________} "$@" || return $?
    return 0
}

This is the first reference to plugins I've found so far.
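The "__________" marker is the glue that lets a plugin name and one of its options survive a single debconf selection together; splitting them back apart is plain parameter expansion. A tiny demo (the plugin name and option value here are made up for illustration):

```shell
#!/bin/sh
# How ask_user splits debconf's answer back into plugin + option.
RET="partition_tree__________/dev/discs/disc0/disc"

plugin=${RET%__________*}     # everything before the marker
option=${RET#*__________}     # everything after it

echo "$plugin / $option"
```

This is also why the marker is such an unlikely string: it must never occur inside a real plugin name or option value.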