Configuration rollback, location management, and cluster support
Copyright (C) 2000 Helix Code, Inc.
Written by Bradford Hovinen <hovinen@helixcode.com>

I. Basic architecture

A. Components

1. Helix Configuration Manager
The GUI shell, here referred to as the Helix Configuration Manager
(HCM), acts as a launching point for the capplets and Helix Setup
Tools, and as a control center for managing rollback, location
management, and clustering. It launches the other components and knows
how to find archived configuration data for rollback. When a rollback
or a change of location is required, it invokes the appropriate
backends or capplets with the --set option and feeds them the
necessary XML.

2. Capplets
Capplets handle user configuration; they combine the front and back
ends into one process. They will eventually run as Bonobo controls
implementing the Bonobo::Capplet interface, but for now they are run
as regular processes. They all support the --get and --set command
line options (which will be methods in the Bonobo::Capplet
interface). --get returns an XML description of the capplet's state
for archival purposes and --set takes an XML description of the
capplet's state and applies those settings to the desktop.
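
As a concrete illustration of the protocol, the sketch below (in
Python purely for brevity; the real callers are C programs) captures a
capplet's state with --get and later restores it with --set. The
capplet name is a placeholder, and the assumption that --set reads the
XML on stdin follows from the way do-changes feeds backends, described
below.

    import subprocess

    CAPPLET = "background-properties-capplet"   # placeholder capplet name

    # --get: the capplet writes an XML description of its state to stdout.
    snapshot = subprocess.run([CAPPLET, "--get"],
                              capture_output=True, check=True).stdout

    # ... later, perhaps after the user has changed things ...

    # --set: feed the archived XML back to the capplet to restore that state.
    subprocess.run([CAPPLET, "--set"], input=snapshot, check=True)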

3. Helix Setup Tools (HSTs)
These programs handle system-wide configuration and must run as root
in order to apply changes. They may also run as a regular user in
`read-only' mode. They have separate frontends and backends, the
former typically written in C and the latter in Perl. This split
allows in-place modification of existing configuration files without
the need to create our own separate, incompatible way of configuring
the system. The backends support the --get and --set arguments,
analogous to the capplet options described above.
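
The backend's side of that protocol is small enough to sketch here.
The real backends are Perl scripts; the outline below uses Python only
to stay consistent with the other sketches in this document, and the
read_system_config/apply_system_config helpers are purely illustrative
stand-ins for the real parsing and editing of system files.

    import sys

    def read_system_config():
        """Illustrative stand-in: parse the existing system files and
        return an XML description of their current state."""
        return "<network/>\n"

    def apply_system_config(xml_text):
        """Illustrative stand-in: edit the existing system files in
        place so that they reflect the given XML description."""
        pass

    if __name__ == "__main__":
        if "--get" in sys.argv:
            # Emit an XML snapshot of the current system state on stdout.
            sys.stdout.write(read_system_config())
        elif "--set" in sys.argv:
            # Read an XML snapshot on stdin and apply it to the system.
            apply_system_config(sys.stdin.read())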

4. Root manager
The root manager process runs as root and is launched through
gnome-su. It accepts on stdin a set of programs to launch, one per
line, with command line arguments. HCM uses it to launch Helix Setup
Tools so that they run as root, without needing to ask the user for a
password each time a tool is run. The root manager is run exactly once
through console-helper the first time a tool that must be run as root
is invoked. On subsequent occasions the command to start the tool is
passed to the root manager through stdin.
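
The interaction with the root manager amounts to writing command lines
to a long-lived process. A minimal sketch, assuming a hypothetical
root-manager binary and tool name (the spec does not fix either), and
glossing over how gnome-su/console-helper actually starts the process
as root:

    import subprocess

    # In the real system this process is started as root via
    # gnome-su/console-helper; here it is simply spawned directly.
    manager = subprocess.Popen(["root-manager"],        # placeholder name
                               stdin=subprocess.PIPE, text=True)

    def run_as_root(command_line):
        """Hand the root manager one program to launch, one command
        line per line of its stdin."""
        manager.stdin.write(command_line + "\n")
        manager.stdin.flush()

    # Later tools reuse the same root manager, so the user is not asked
    # for a password again.
    run_as_root("network-setup-tool --set")     # placeholder tool name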

5. The script do-changes
do-changes is responsible for archiving changes made to the system's
configuration and passing them on to the backend, if appropriate. It
accepts a stream of XML on stdin and stores this XML in the
configuration archive directory. If a backend is specified on the
command line, it also spawns the backend process with the --set option
and feeds the XML to it through stdin.
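
In outline, do-changes behaves like the following sketch. The archive
path and the config.xml file name are assumptions of the sketch; the
spec only says that the XML is stored in the configuration archive
directory and, if a backend is named, fed to it with --set.

    import subprocess
    import sys
    from pathlib import Path

    def do_changes(revision_dir, backend=None):
        """Archive the XML arriving on stdin; optionally forward it to
        a backend's --set option, feeding it the same XML on stdin."""
        xml_text = sys.stdin.read()

        target = Path(revision_dir)         # e.g. ~/.gnome/config/Home/7
        target.mkdir(parents=True, exist_ok=True)
        (target / "config.xml").write_text(xml_text)  # name illustrative

        if backend is not None:
            subprocess.run([backend, "--set"], input=xml_text,
                           text=True, check=True)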

II. Configuration process
When a user makes changes to either his own configuration or that of
the system, those changes must be archived so that the system may be
rolled back in the future. In the case of capplets, the capplet
currently dumps an XML snapshot of its current state to the script
do-changes when the user clicks `Ok'. do-changes then archives the
state in ~/.gnome/config/<location>/<revision> where <location> is the
name of the active location (cf. section IV) and <revision> is
incremented after each change.
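
The layout is easy to make concrete. The sketch below picks the next
revision directory under ~/.gnome/config/<location>; the purely
numeric directory names and the zero-based counting are assumptions of
the sketch, since the spec only says that <revision> is incremented
after each change.

    from pathlib import Path

    def next_revision_dir(location):
        """Return a fresh directory in which to archive the next snapshot."""
        base = Path.home() / ".gnome" / "config" / location
        base.mkdir(parents=True, exist_ok=True)
        existing = [int(p.name) for p in base.iterdir() if p.name.isdigit()]
        revision = max(existing, default=-1) + 1
        target = base / str(revision)
        target.mkdir()
        return target

    # e.g. next_revision_dir("Home") yields .../Home/0, then .../Home/1, ...
    # ("Home" is a placeholder location name.)
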
When the capplets are converted into Bonobo controls, the situation
will be slightly different. HCM will be the recipient of the `Ok'
signal, so it will invoke the OKClicked() method of the
Bonobo::Capplet interface on the appropriate capplet. It will also
invoke the GetXML() method of the same interface in order to retrieve
an XML snapshot of the state and store that snapshot with
do-changes. Hence, much of the action moves from the capplet to HCM.
In the case of Helix Setup Tools, the frontend passes the XML through
the do-changes script to the backend whenever the `Ok' button is
clicked. It passes to do-changes the argument --backend <backend name>
so that do-changes will also invoke the indicated backend and pass the
XML to it.

III. Rollback process
From within the HCM, the user may elect to roll back either his
personal configuration or that of the system to a particular
date. HCM looks for a revision directory in the current location
profile with the most recent modification date that is not more recent
than the date specified by the user. HCM also has a list of what
capplets (or HSTs) constitute a complete snapshot of the system's
configuration. In order to perform a complete rollback, it backtracks
through the revision directories, picking up XML snapshots of capplets
until it has a complete set, and then applies each snapshot through
the --set option. In the case of HSTs, the HCM knows how to invoke the
backend
and does so as necessary.
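
The backtracking step can be expressed compactly. The sketch below
assumes the numbered revision directories from the previous sketch,
that each revision directory holds one XML file per capplet or backend
named after it, and that required is the HCM's list of components
making up a complete snapshot; none of these details are fixed by the
spec.

    from pathlib import Path

    def collect_snapshot(location, cutoff_time, required):
        """Walk the revisions newest-first, keeping the latest snapshot
        of each required component whose revision is not newer than
        cutoff_time (a Unix timestamp), until the set is complete."""
        base = Path.home() / ".gnome" / "config" / location
        revisions = sorted((p for p in base.iterdir() if p.name.isdigit()),
                           key=lambda p: int(p.name), reverse=True)
        snapshot = {}
        for revision in revisions:
            if revision.stat().st_mtime > cutoff_time:
                continue                # newer than the requested date
            for xml_file in revision.glob("*.xml"):
                component = xml_file.stem   # e.g. "background-properties"
                if component in required and component not in snapshot:
                    snapshot[component] = xml_file.read_text()
            if set(snapshot) == set(required):
                break                   # complete set collected
        return snapshot  # each entry is then fed to its component's --set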

IV. Location management
The system may have one or more profiles, each describing a different
system configuration. For example, a user may own a laptop and wish to
hook that laptop up to different networks at different times. Each
network may be located in a different time zone, have different
network configuration parameters (e.g., DHCP vs. static IPs), and use
different default printers. When the user hooks his laptop up to a
particular
network, it would be advantageous to switch to that network's
configuration with a minimum of hassle.
As mentioned above, configuration data is stored in separate
directories corresponding to the name of a given location. HCM has the
ability to apply a set of configuration files from a given location in
a manner similar to the rollback procedure described above. When the
user selects an alternative configuration, it simply goes through the
revision history for that location, pulls a complete set of
configuration files, and applies them. The procedure is similar for
both capplets and HSTs.
In addition, locations may be expressed hierarchically. For example, a
user might specify a location called `Boston' that describes language,
time zone, currency, and other locale data, and another location called
`Boston Office' that includes network parameters. `Boston Office'
inherits its locale data from `Boston', overriding the latter's
network configuration.
To implement this, each location directory contains some metadata that
describes what configuration data is valid for it and which other
location it inherits from. There are one or more root
configurations that contain a complete set of data. When applying a
new location, HCM looks first at that location's directory, pulling a
complete set of all the configuration data defined by that location,
and then goes to the next level up in the location hierarchy and does
the same thing. It also keeps track of the common subtree root between
the old and new locations so that only the configuration items that
actually change are collected.
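
One way to picture that walk, assuming each location record carries a
reference to its parent and the set of configuration items it defines;
the Location class and its fields are illustrative, not part of the
spec:

    class Location:
        """Illustrative data model: a location names its parent and the
        configuration items (e.g. "timezone", "network") it defines."""
        def __init__(self, name, parent=None, items=()):
            self.name, self.parent, self.items = name, parent, set(items)

    def ancestors(location):
        """Yield the location and each of its ancestors up to the root."""
        while location is not None:
            yield location
            location = location.parent

    def items_to_apply(new, old):
        """Collect the items defined between the new location and its
        common subtree root with the old location; levels above that
        root are shared and therefore already in effect."""
        common = ({loc.name for loc in ancestors(old)}
                  & {loc.name for loc in ancestors(new)})
        collected = {}
        for loc in ancestors(new):
            if loc.name in common:
                break                   # reached the common subtree root
            for item in loc.items:
                collected.setdefault(item, loc.name)  # nearest wins
        return collected

    # e.g. boston = Location("Boston", items={"timezone", "language"})
    #      office = Location("Boston Office", parent=boston,
    #                        items={"network"})
    #      items_to_apply(office, boston) -> {"network": "Boston Office"}
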
From a user's perspective, the HCM will present a tree showing the
existing locations, and users may create a new location derived from
an existing one. When the user elects to configure a particular
location, the HCM shell grays out the icons for configuration items
that are not set in that location. If the user changes one of those
items, it becomes specific to that location and its icon is recolored
accordingly.

V. Clustering
A single server may archive the configuration for a large number of
individual workstations, providing configuration data to each of the
clients on demand. An administrator can then push configuration
updates out to each machine with the press of a button, rather than
having to go to each machine and update it manually.
To enable this, each client machine will run a daemon that accepts
configuration data pushed out by the server. Some sort of public key
signing will be implemented to ensure that this is done securely. On
the server end, a series of host groups is maintained, each one
containing a set of hosts. These form the top two levels of a
configuration hierarchy not unlike the one described above. Each host
may override certain of the configuration values that are set for the
cluster as a whole. The cluster may also have multiple `locations',
e.g. for
configuring a computer lab for computer science during one class and
for math during another. Locations may be selected down to the
granularity of a single host, or for the entire cluster at
once. Cluster-wide configurations occur between the cluster and host
level in the configuration hierarchy.

VI. Issues
1. We need a way to get an XML state without actually applying
changes, so that the user can configure a location without switching
to it.
2. Can we make the HST frontends Bonobo controls, and can we have them
run as the regular user rather than as root? This would ensure that
certain user interface preferences, such as themes, are kept
consistent for a given user between capplets and HSTs. The way to
implement this is to add a RunBackend() method to the HCM interface,
returning a BonoboObject that refers to the backend; the backend
implements the Bonobo::HSTBackend interface, which is similar to the
Bonobo::Capplet interface mentioned above and defines the GetXML and
SetXML methods. The object should also implement
another, HST-specific interface to facilitate setting specific
configuration variables, so that live update may be implemented. The
root manager must then be extended to support some sort of secure
forwarding, allowing the user to access that particular CORBA object.
3. If we make the HSTs into Bonobo controls, can we give them the same
Bonobo::Capplet interface that the capplets implement? This would make
everything a bit simpler from the HCM's perspective, since it would no
longer need to know the difference between capplets and HSTs; it would
only need to implement the RunBackend() method for the benefit of the
HSTs.