This guide includes software build and configuration instructions for setting up a Rappture Render Server.

Description of the servers:

  • nanovis - OpenGL-based volume rendering for uniform 3D grids.
  • pymolproxy - PyMOL-based molecule rendering, used for Rappture structure inputs/outputs.
  • vtkvis - VTK-based rendering of Rappture meshes, fields, and drawing outputs.
  • vmdshow - VMD-based molecule rendering.
  • geovis - OSGEarth-based rendering of 2D and 3D maps/globes.

Hardware Requirements

At least one graphics card supporting OpenGL 2.1 or greater plus the following extensions:

ARB_vertex_program/ARB_fragment_program or NV_vertex_program3/NV_fragment_program2

An NVIDIA Quadro or GeForce card is recommended. Our current render servers use NVIDIA GeForce GTX 770 cards with 4 GB of memory.
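To verify that a card meets the OpenGL version requirement, you can check the version string reported by glxinfo (from the mesa-utils package on Debian). The sketch below parses a sample line so that it is self-contained; on a live system, pipe glxinfo's real output instead:

```shell
# Parse the OpenGL version out of a (sample) glxinfo line; on a real
# server run: glxinfo | grep 'OpenGL version string'
sample='OpenGL version string: 2.1.2 NVIDIA 346.72'
gl_version=$(printf '%s\n' "$sample" | sed 's/.*string: \([0-9.]*\).*/\1/')
echo "$gl_version"
```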

OS Setup

A base Debian 7 install is recommended, with development packages for building C/C++ code. Red Hat Enterprise Linux 6.5 has also been tested, but third-party repositories may be necessary for some dependencies.

You will need an X server installed, with some custom configuration. See the configuration section below.

You should create a user to run the servers; we call this user 'rappture' in the following instructions. It is recommended that you perform the build below as that user.

To use the admin scripts to start the nanoscale server, create the directory /opt/hubzero/rappture, make the rappture user its owner, and create a symbolic link /opt/hubzero/rappture/render pointing to the install directory created by the build and install steps below.
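The expected layout can be sketched as follows. This demonstration uses a scratch directory and a hypothetical dated install-directory name; on a real server the base would be /opt/hubzero/rappture, owned by the rappture user:

```shell
# Scratch-directory demonstration of the layout nanoscale's admin
# scripts expect; 'render-20170201' is a hypothetical dated install dir.
base=$(mktemp -d)                       # stands in for /opt/hubzero/rappture
install_dir="$base/render-20170201"
mkdir -p "$install_dir"
ln -sfn "$install_dir" "$base/render"   # 'render' -> dated install dir
readlink "$base/render"
# On a real server, also: sudo chown -R rappture /opt/hubzero/rappture
```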


Dependencies

For the packages listed, headers and shared libraries are required. These can be obtained from your distribution's development packages or from the rappture-runtime repository in the render-trunk branch. Eventually, source tarballs will be made available in place of the runtime repository.

For all servers:

  • Tcl 8.4
    • Included in rappture-runtime


For nanoscale:

  • No additional dependencies.


For pymolproxy:

  • Python >= 2.6
  • GLEW >= 1.10
    • Included in rappture-runtime
  • pymol + patches
    • pymol needs to be built from the source in rappture-runtime


For nanovis:

  • GLEW >= 1.10
    • Included in rappture-runtime
  • NVIDIA Cg runtime
    • For Debian, install 'nvidia-cg-toolkit'
  • Rappture >= 1.2
  • VTK 6.2.0
    • Included in rappture-runtime


For vtkvis:

  • VTK 6.2.0
    • Included in rappture-runtime


For vmdshow:

  • VMD: see INSTALL.txt in the vmdshow source tree


For geovis:

  • fonts-liberation package
  • GDAL >= 1.9.0
  • GEOS >= 3.3.3
  • Proj >= 4.7.0
  • OpenSceneGraph >= 3.4.0
    • Included in rappture-runtime
  • OSGEarth >= 2.7
    • Included in rappture-runtime

Building the Dependencies

Currently, the best way to build the dependencies is to use the render-trunk branch of the rappture-runtime repository. Even though the branch is called 'render-trunk', it is currently used to build both the trunk and release versions of the render servers.

Checking out the runtime source and generating the configure script:

$ svn co runtime
$ cd runtime && ./bootstrap && cd ..

The runtime repository is currently set up to build packages in a series of stages in order to satisfy dependencies. The top-level runtime configure script determines the appropriate settings based on the name of the current directory, so the build must proceed in directories named 'stage1', 'stage2', 'stage3', etc. For each stage, the runtime configure script is run with arguments specifying the install prefix and the packages to be built for that stage; then 'make all' and 'make install' are run to build and install the stage's packages before moving to the next stage. Currently, a minimal build of Rappture is required for the nanovis server, so an additional stage is included in most build scripts to build Rappture, although the Rappture source must be checked out from a different repository.

To check out the rappture source, run this command from the build directory (the same directory containing the runtime directory from the runtime checkout command above):

$ svn co rappture

A sample build script is supplied in the runtime repository to serve as a starting point, but you may want to make modifications. In particular, you may want to modify the stage flags to include or exclude specific packages:

# You may be able to change one or more of these if you have system packages 
# for them that meet the minimum version requirements
stage1_flags=" \
 --with-cmake \
 --with-glew \
 --with-tcl"

# If building without DX support in nanovis change --with-voronoi to
# --without-voronoi
stage2_flags=" \
 --with-osg \
 --with-voronoi \
 --with-vtk"

stage3_flags=" \
 --with-osgearth \
 --with-pymol"

rappture_flags="--disable-gui --disable-lang --disable-vtkdicom --without-ffmpeg"
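The staged sequence described above can be sketched as a loop. The 'run' wrapper below only prints each command, so the sketch is safe to execute as written; the prefix and per-stage flags are assumptions taken from the sample values above:

```shell
# Dry-run sketch of the staged build; replace run() with: run() { "$@"; }
# to actually execute the commands.
run() { echo "+ $*"; }
prefix=/opt/hubzero/rappture/render          # assumed install prefix
stage1_flags="--with-cmake --with-glew --with-tcl"
stage2_flags="--with-osg --with-voronoi --with-vtk"
stage3_flags="--with-osgearth --with-pymol"
for n in 1 2 3; do
    eval "flags=\$stage${n}_flags"
    run mkdir -p "stage$n"
    run sh -c "cd stage$n && ../runtime/configure --prefix=$prefix $flags && make all && make install"
done
```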

Getting the Source

To build servers for the current Rappture client trunk (1.9.x), use the trunk of each server:

$ svn co geovis
$ svn co nanoscale
$ svn co nanovis
$ svn co pymolproxy
$ svn co vmdshow
$ svn co vtkvis

To build servers for the current Rappture client stable release (1.8.x), use these commands:

$ svn co geovis
$ svn co nanoscale
$ svn co nanovis
$ svn co pymolproxy
$ svn co vmdshow
$ svn co vtkvis

Building the Servers


First, follow the steps above under Getting the Source to check out the server sources. If you haven't yet built the dependencies, see Building the Dependencies above.

Each server is built using a 'configure', 'make', 'make install' sequence. All the servers share common configure options to specify the install directory and location for log files.

configure options (all servers):

  --prefix          The install prefix
  --with-logdir     The directory for run/debug logs
  --with-statsdir   The directory for server statistics logs
  --with-tcllib     Location of Tcl binary library libtclstubs.a (Tcl will be found in the prefix directory if you installed it there using the rappture runtime)

vtkvis and nanovis only:

  --with-vtk          The version of VTK (e.g. 6.0)
  --with-vtk-includes Directory where VTK includes are found
  --with-vtk-libs     Directory where VTK shared libraries are found

nanovis only:

  --with-rappture     The directory where rappture is installed


The following commands assume that you are starting in the directory where you ran the checkout commands from Getting the Source above and that you have installed the dependencies in the install prefix directory. Note that on our render servers, the install prefix is named based on a date and 'render' is a symbolic link to that directory; in that case, use the actual dated directory name for the prefix argument in the commands below instead of the link.

Also, if the stats directory doesn't exist, you will need to create it and change its owner to the user under which the servers run (in this guide, the 'rappture' user) before running the servers. For example:

$ sudo mkdir /var/log/visservers
$ sudo chown rappture /var/log/visservers

If building the geovis server, you will also need to create the cache directory:

$ sudo mkdir /var/cache/geovis
$ sudo chown rappture /var/cache/geovis

Build nanoscale

$ mkdir -p build/nanoscale
$ cd build/nanoscale
$ ../../nanoscale/configure --prefix=/opt/hubzero/rappture/render --with-logdir=/tmp --with-statsdir=/var/log/visservers
$ make
$ make install

Build pymolproxy

$ mkdir -p build/pymolproxy
$ cd build/pymolproxy
$ ../../pymolproxy/configure --prefix=/opt/hubzero/rappture/render --with-logdir=/tmp --with-statsdir=/var/log/visservers
$ make
$ make install

Build nanovis

$ mkdir -p build/nanovis
$ cd build/nanovis
$ ../../nanovis/configure --prefix=/opt/hubzero/rappture/render --with-rappture=/opt/hubzero/rappture/render --with-logdir=/tmp --with-statsdir=/var/log/visservers
$ make
$ make install

Build vmdshow

$ mkdir -p build/vmdshow
$ cd build/vmdshow
$ ../../vmdshow/configure --prefix=/opt/hubzero/rappture/render --with-logdir=/tmp --with-statsdir=/var/log/visservers
$ make
$ make install

Build vtkvis

$ mkdir -p build/vtkvis
$ cd build/vtkvis
$ ../../vtkvis/configure --prefix=/opt/hubzero/rappture/render --with-logdir=/tmp --with-statsdir=/var/log/visservers
$ make
$ make install

Build geovis

$ mkdir -p build/geovis
$ cd build/geovis
$ ../../geovis/configure --prefix=/opt/hubzero/rappture/render --with-logdir=/tmp --with-statsdir=/var/log/visservers
$ make
$ make install


Installing the NVIDIA driver

If you are using NVIDIA hardware, we recommend using the latest long-lived branch version of the NVIDIA Linux driver. Currently that driver is version 346.72. If a driver is already installed, you can check the installed version with:

$ sudo nvidia-installer -i

To install the driver:

  1. Download the driver package from the NVIDIA website.
  2. Stop any X server processes if any are running. See Starting and Stopping the X server below.
  3. Run the installer:
    $ chmod +x NVIDIA-Linux-x86_64-<version>.run
    $ sudo CC=/usr/bin/gcc-4.6 ./NVIDIA-Linux-x86_64-<version>.run -sN

Note that in the above command, the CC environment variable is needed (on Debian 7) to specify the compiler used to build the kernel, which is different from the default compiler.
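To see which compiler built the running kernel, inspect /proc/version; the gcc version used for the kernel build is recorded there:

```shell
# /proc/version records the compiler the running kernel was built with.
kernel_build=$(cat /proc/version)
echo "$kernel_build"
```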

Configuring the X server

If you have no display attached to the graphics cards in the server (this will typically be the case for a rack server), some configuration is required to allow running X headless. If you have a display attached to the graphics card, you can skip this step. Our current approach is to supply an EDID file through a configuration option to avoid the graphics hardware probing the port for a monitor.

To extract an EDID file from a monitor connected to an NVIDIA card using the NVIDIA driver, see the man page for the nvidia-xconfig utility. You will need a real physical monitor connected to the video card's output for this step. Run the X server with verbose logging ('-logverbose 6'), then extract the EDID from the log:

sudo nvidia-xconfig --extract-edids-from-file=/var/log/Xorg.0.log --extract-edids-output-file=/etc/X11/edid.bin

In the config file, you would then include the "CustomEDID" option in the Screen section:

Section "Screen"
    Identifier     "Screen0"
    Device         "Videocard0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "UBB"                "false"
    Option         "TwinView"           "false"
    Option         "UseEDID"            "true"
    Option         "ConnectedMonitor"   "DFP-0"
    Option         "CustomEDID"         "DFP-0:/etc/X11/edid.bin"
    Option         "IgnoreDisplayDevices" "CRT,TV"
    SubSection     "Display"
        Virtual    1920 1200
        Depth       24
        Modes      "1920x1200"
    EndSubSection
EndSection

The nanoscale process acts as the main listening server for all render server ports. When a connection comes in from a client, nanoscale starts the appropriate server process (based on the port number) on one of the graphics accelerators in the render server. The main purpose of nanoscale is to provide load balancing: it uses a round-robin approach to choose the graphics accelerator for each server connection. This is accomplished by configuring a separate X screen for each graphics accelerator; nanoscale then sets the DISPLAY environment variable to different X screens (e.g. ':0.0', ':0.1', ':0.2', etc.) in order to change the graphics hardware used by the rendering server processes.
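The round-robin screen selection can be illustrated with a short sketch (this is illustrative only, not nanoscale's actual source):

```shell
# Illustrative round-robin over 3 X screens for 4 incoming connections;
# the fourth connection wraps back to screen 0.
nscreens=3
displays=""
for conn in 0 1 2 3; do
    displays="$displays :0.$((conn % nscreens))"
done
echo "$displays"    # -> :0.0 :0.1 :0.2 :0.0
```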

If you have more than one video card and are configuring multiple screens, you will need to add a BusID for each card to the appropriate Device sections, e.g.:

Section "Device"
    Identifier     "Videocard0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:05:0:0"
EndSection

Section "Device"
    Identifier     "Videocard1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:66:0:0"
EndSection

The bus IDs can be obtained by running lspci. Note that lspci reports the IDs in hexadecimal, but the X config needs them converted to decimal. For instance, the card with the BusID above of "PCI:66:0:0" looks like this in lspci:

42:00.0 VGA compatible controller: NVIDIA Corporation Device 1184 (rev a1)

We convert 0x42 in hexadecimal to 66 in decimal for the X configuration option.
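The conversion can be done with printf:

```shell
# Convert the hexadecimal bus number from lspci (42) to the decimal
# form xorg.conf expects (66).
hex_bus=42        # from '42:00.0 VGA compatible controller: ...'
dec_bus=$(printf '%d' "0x$hex_bus")
busid="PCI:$dec_bus:0:0"
echo "$busid"     # -> PCI:66:0:0
```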

Configuring nanoscale

The configuration file for nanoscale is in the form of a Tcl script file. It is installed as lib/renderservers.tcl in the install prefix.

Note that the script which is installed in bin/ assumes that you have NVIDIA graphics hardware. It runs lspci to count the number of graphics accelerators and looks for NVIDIA as the vendor in order to exclude any onboard graphics chips that are often found on server motherboards. You will need to modify this script if you are using other hardware such as AMD or Intel graphics. The -x option to nanoscale indicates the number of X screens to use in the round-robin selection, starting from screen 0. For example if nanoscale is run as 'nanoscale -x 2', it will alternate between "DISPLAY=:0.0" and "DISPLAY=:0.1" when starting new server processes.
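The accelerator count can be derived from lspci output along these lines (the startup script's exact contents are not reproduced here, so this is an assumption-laden sketch; sample lspci output is piped in to keep it self-contained):

```shell
# Count NVIDIA VGA controllers; sample lspci output keeps this runnable
# and excludes the onboard (here Matrox) server graphics chip.
lspci_sample='42:00.0 VGA compatible controller: NVIDIA Corporation Device 1184 (rev a1)
00:02.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200e'
ngpus=$(printf '%s\n' "$lspci_sample" | grep -c 'controller: NVIDIA')
echo "$ngpus"     # -> 1
# On a live system: ngpus=$(lspci | grep -c 'VGA compatible controller: NVIDIA')
# then start the listener with: nanoscale -x "$ngpus"
```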

If you are running the geovis server, you will need to ensure that this line in the script is enabled (not commented out):

pgrep -u rappture -x twm > /dev/null || twm -display :0 &

This will run a simple window manager (twm), which you will need to have installed. Currently, the geovis server will crash without a running window manager.

Running the Servers

For Debian 7, init scripts for the X server and nanoscale server should be installed in /etc/init.d. To set up the services to run at the default run levels:

$ sudo update-rc.d render-x-server defaults
$ sudo update-rc.d nanoscale defaults

These scripts rely on the existence of two other executable scripts:


On the render servers, these scripts are managed by the Ogre configuration management system.

For the nanoscale service script to function properly, you must ensure that /opt/hubzero/rappture/render is a symlink to the install directory from the install steps above.

Starting and Stopping the X server

$ sudo service render-x-server start
$ sudo service render-x-server stop

Starting and Stopping nanoscale

$ sudo service nanoscale start
$ sudo service nanoscale stop

Procedure for updates and reboots

In order to reboot a render server, you must first drain off existing client connections and wait for the server to become idle. Also, if a kernel update has been applied, the NVIDIA driver will need to be reinstalled after the reboot.

The steps to update a server:

  1. Remove the server from the client resources file configuration (see Configuring Rappture Clients to Use the Servers below)
  2. Kill the nanoscale process to prevent new client connections from being accepted.
  3. Wait for any running server processes (geovis, nanovis, pymol, vmd, or vtkvis) to exit. Usually client connections will time out after 12 hours of inactivity. However, sometimes processes can get deadlocked, so after 12 hours you may kill any remaining server processes. The Rappture client should try to reconnect to a new server if the user tries to interact with one of these sessions.
  4. Once all server processes have stopped, you may apply package updates. To update kernel packages:
    # aptitude -q unhold linux-headers-3.16.0-0.bpo.4-all linux-headers-3.16.0-0.bpo.4-all-amd64 linux-headers-3.16.0-0.bpo.4-amd64 \
    linux-headers-3.16.0-0.bpo.4-common linux-headers-3.2.0-4-amd64 linux-headers-3.2.0-4-common linux-image-3.16.0-0.bpo.4-amd64
    # aptitude upgrade -y
  5. Reboot the server. If no kernel updates have been applied, the server should return to a running state.
  6. If kernel updates have been applied, stop the X server and re-install the NVIDIA driver using the instructions above. Then restart the X server.
  7. Test the server:
    • Check that the X server is running and the proper display resolution was configured (see /var/log/Xorg.0.log)
    • Make sure that the nanoscale process was started
    • Test connecting to the server from a workspace
  8. Restore the client resources file configuration

Configuring Rappture Clients to Use the Servers

In a default Rappture install, a Rappture client will attempt to connect to render servers running on the default server ports of the localhost. These defaults are contained in the visviewer.tcl script file in the installed Rappture GUI scripts directory (e.g. lib/RapptureGUI1.3/scripts/visviewer.tcl in the install prefix directory).

On a deployed hub site, the list of render server hostnames and port numbers for each type of renderer (e.g. pymolproxy, vtkvis, etc.) is stored in the hub user's resources file. This file is written by the hub middleware in the user's session directory ($SESSIONDIR/resources) when the user session is created. The following lines are an example of the server configuration in a resources file:


These values may be configured by a system administrator on the hub's web server in /etc/mw-www/maxwell.conf (which is about to be renamed /etc/mw-client/mw-client.conf).

Last modified on Feb 2, 2017, 3:35:17 PM
