Tips and tricks

Getting root access

sudo su

Fetching node information (interface names etc.) in scripts

When automating experiment setup and/or the experiments themselves, you often need information such as the name of the local machine, the control interface, or the names of the local experiment interfaces.

An example: consider an experiment with a node named xenNode, which is connected to a link named link0 through an interface xenNode:if0. A setup script running on this node might need to know the Linux interface name of that interface, which could be, for example, eth2. Note that each time an experiment is created, the Linux interface name may differ even though the same request RSpec is used (the next time it might be eth1 or eno1).
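On Linux, one way to resolve this mapping at runtime is to look up an interface's MAC address (as reported in the manifest) under /sys/class/net. Below is a minimal sketch; the helper function and the parameterised sysfs path are our own illustration, not part of any official tooling:

```python
# Hypothetical helper: map a MAC address (e.g. taken from the manifest
# RSpec) to the local Linux device name by scanning /sys/class/net,
# where each network device directory has an "address" file holding its MAC.
from pathlib import Path

def dev_for_mac(mac, sysfs_net="/sys/class/net"):
    """Return the device name (e.g. "eth2") for the given MAC, or None."""
    mac = mac.lower()
    for dev in Path(sysfs_net).iterdir():
        addr_file = dev / "address"
        if addr_file.is_file() and addr_file.read_text().strip().lower() == mac:
            return dev.name
    return None
```

With the example above, calling dev_for_mac with the MAC of xenNode:if0 would return whatever device name (eth2, eth1, eno1, ...) that interface happens to have in the current run.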

The most generic method to retrieve this information is the geni-get command. See the geni-get documentation for detailed info.

Two common commands are:

#Get name of local machine in RSpec
geni-get client_id

#Get manifest RSpec
geni-get manifest

To use this info to retrieve interface names, the manifest data needs to be processed. The manifest is an RSpec, an XML-based format, so many XML processing tools can be used. There are also tools that can process RSpecs specifically, such as geni-lib.
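As a sketch of such processing, the snippet below extracts node and interface names from a manifest with Python's standard-library XML parser. The embedded sample manifest is a minimal, made-up illustration of the structure; on a real node you would feed the function the output of `geni-get manifest` instead.

```python
# Sketch: list (node, interface, MAC) triples from a manifest RSpec.
# Tag matching ignores XML namespaces so it works regardless of the
# namespace declarations in the document.
import xml.etree.ElementTree as ET

# Made-up minimal manifest for illustration only.
SAMPLE_MANIFEST = """\
<rspec type="manifest">
  <node client_id="node0">
    <interface client_id="node0:if0" mac_address="00:31:58:43:58:e8"/>
  </node>
</rspec>"""

def local_name(elem):
    """Element tag without any XML namespace prefix."""
    return elem.tag.rsplit('}', 1)[-1]

def list_interfaces(manifest_xml):
    """Return (node client_id, interface client_id, MAC) tuples."""
    result = []
    root = ET.fromstring(manifest_xml)
    for node in root.iter():
        if local_name(node) != "node":
            continue
        for iface in node.iter():
            if local_name(iface) == "interface":
                result.append((node.get("client_id"),
                               iface.get("client_id"),
                               iface.get("mac_address")))
    return result

print(list_interfaces(SAMPLE_MANIFEST))
# prints [('node0', 'node0:if0', '00:31:58:43:58:e8')]
```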

You can find an example Python script here. The script uses basic Python XML processing; it extracts interface names and other useful data, and can be used as a starting point for your own scripts.

To try the script, log in on a node, and run the following commands:

chmod u+x

Note: this script needs Python 2, so on modern distributions you might need to install Python 2 specifically, e.g. on Ubuntu 20.04 (you might first have to enable public internet access):

sudo apt update && sudo apt install python2-minimal

Example output:

Name of this machine in the RSpec: "node0"

SSH login info:

Control network interface:
      MAC: 00:30:48:43:5d:c2
      dev: eth0
      ipv4: (netmask /20)
      ipv6: 2001:6a8:1d80:2021:5062:9363:5859:6fab (netmask /64)
      ipv6: 2001:6a8:1d80:2021:230:48ff:fe43:5dc2 (netmask /64)
      ipv6: fe80::230:48ff:fe43:5dc2 (netmask /64)

Experiment network interfaces:
      Iface name: "node0:if0"
            MAC: 00:31:58:43:58:e8
            dev: eth2
            ipv4: (netmask

Requested public IPv4 pool (routable_pool):

Note that geni-get can also give you other information:

root@node0:/tmp# geni-get commands
 "client_id":    "Return the experimenter-specified client_id for this node",
 "commands":     "Show all available commands",
 "control_mac":  "Show the MAC address of the control interface on this node",
 "geni_user":    "Show user accounts and public keys installed on this node",
 "getversion":   "Report the GetVersion output of the aggregate manager that allocated this node",
 "manifest":     "Show the manifest rspec for the local aggregate sliver",
 "slice_email":  "Retrieve the e-mail address from the certificate of the slice containing this node",
 "slice_urn":    "Show the URN of the slice containing this node",
 "sliverstatus": "Give the current status of this sliver (AM API v2)",
 "status":       "Give the current status of this sliver (AM API v3)",
 "user_email":   "Show the e-mail address of this sliver's creator",
 "user_urn":     "Show the URN of this sliver's creator"

On Emulab-based sites, there is also an alternative to using geni-get: the /var/emulab/boot/ directory contains various info files. For example, link info can be found in /var/emulab/boot/topomap, the control interface in /var/emulab/boot/controlif, and the full machine name in /var/emulab/boot/nickname. Note that this method is not recommended, as there is no guarantee that this information will stay the same across Emulab software upgrades.
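For completeness, reading those boot files can be sketched as below. The function name and the dictionary keys are our own choices; only the file layout comes from the description above, and files are skipped if absent so the sketch degrades gracefully on non-Emulab nodes.

```python
# Sketch, assuming the Emulab boot-file layout described above:
# /var/emulab/boot/controlif holds the control interface name and
# /var/emulab/boot/nickname holds the full machine name.
from pathlib import Path

def read_emulab_info(boot_dir="/var/emulab/boot"):
    """Collect whichever of the known info files exist under boot_dir."""
    boot = Path(boot_dir)
    info = {}
    for key, fname in [("control_if", "controlif"), ("nickname", "nickname")]:
        f = boot / fname
        if f.is_file():
            info[key] = f.read_text().strip()
    return info
```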

Using legacy custom images on Virtual Wall 1, pcgen1 nodes

If you use the default image UBUNTU12-64-STD, or create your custom image from it, there is no issue and you can ignore this section.

If you want to use older images or other custom images, please make the following changes if you want to run your experiment on the pcgen1 nodes and use networking (more specifically, the Nvidia MCP55 forcedeth interfaces on the 6-interface machines).

We have to load the forcedeth driver with some options.

Become root on your image and create a file /etc/modprobe.d/forcedeth.conf with the following contents:

options forcedeth msi=0 msix=0 optimization_mode=1 poll_interval=38 max_interrupt_work=40 dma_64bit=0

To load this also in the initrd, rebuild the initramfs for the running kernel:

update-initramfs -u -k "$(uname -r)"

(the kernel version is the output of uname -r, e.g. 3.2.0-56-generic)

ethtool and mii-tool should be renamed or removed from your image (otherwise the Emulab startup scripts try to set things which are not needed).

Then create your new image from that node, and use that image.

This should solve the link issues with these cards.

If you still see link issues, please contact vwall-ops .a.t. and LEAVE YOUR EXPERIMENT RUNNING so we can inspect it.

VirtualBox on a Physical Node

The following RSpec will provision a node named node0 with VirtualBox ready to use in headless mode:

<?xml version='1.0'?>
<rspec xmlns="" type="request" generated_by="jFed RSpec Editor" generated="2017-03-03T16:43:54.668+01:00" xmlns:emulab="" xmlns:jfedBonfire="" xmlns:delay="" xmlns:jfed-command="" xmlns:client="" xmlns:jfed-ssh-keys="" xmlns:jfed="" xmlns:sharedvlan="" xmlns:xsi="" xsi:schemaLocation=" ">
  <node client_id="node0" exclusive="true" component_manager_id="">
    <sliver_type name="raw-pc">
      <disk_image name=""/>
      <execute shell="sh" command="cd /opt &amp;&amp; sudo /bin/bash"/>
      <install install_path="/opt" url=""/>
    </sliver_type>
    <location xmlns="" x="260.0" y="33.5"/>
    <emulab:blockstore name="bs1" size="60GB" mountpoint="/mnt" class="local"/>
  </node>
</rspec>

Debugging problems with adding SSH keys

If the “Edit SSH Keys” option fails to add a user, you can check the log file at /var/emulab/logs/emulab-watchdog.log for more information.