Instaseis Server

The biggest hurdle to using Instaseis is the generation and storage of the potentially huge databases. Even if that is possible, concurrent access by multiple users to a database on network storage might thrash the storage system and result in abysmal performance and other problems.

Both issues can potentially be resolved by the Instaseis server: someone (be it locally on an institute's network or globally over the internet) opens a local Instaseis database and shares its data over HTTP. Instaseis clients can then connect to that server with the same interface as to a local database.

Starting a Server

To launch a server, just execute

$ python -m instaseis.server --port 8765 --buffer_size_in_mb 100 /path/to/db

which will launch a web server and start serving at the specified port. The buffer_size_in_mb argument is passed to the InstaseisDB initialization routine; it is probably a good idea to choose it as big as your machine allows. For a reciprocal database with horizontal and vertical components Instaseis will create four buffers, each buffer_size_in_mb in size, so the example above reserves roughly 400 MB of memory for buffering.

Note

Some functionality requires an advanced server setup. Please see the Advanced Server Configuration for details.

Connecting to a Server

Connecting works by simply opening a connection to the machine the server runs on, e.g. if the server is running locally:

>>> import instaseis
>>> db = instaseis.open_db("http://127.0.0.1:8765")
>>> print(db)
RemoteInstaseisDB reciprocal Green's function Database (v7) generated with these parameters:
components           : vertical and horizontal
velocity model       : ak135f
attenuation          : True
dominant period      : 2.000 s
dump type            : displ_only
excitation type      : dipole
time step            : 0.487 s
sampling rate        : 2.052 Hz
number of samples    : 7591
seismogram length    : 3699.7 s
source time function : errorf
source shift         : 3.412 s
spatial order        : 4
min/max radius       : 5671.0 - 6371.0 km
Planet radius        : 6371.0 km
min/max distance     : 0.0 - 180.0 deg
time stepping scheme : symplec4
compiler/user        : ifort         1400 by di29kub on login05
directory/url        : http://127.0.0.1:8765
size of netCDF files : 883.0 GB
generated by AxiSEM version 615a180 at 2014-11-07T18:48:29.000000Z

Usage is then the same as with a local Instaseis database.
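
For example, extracting seismograms from the remote database works exactly like it does locally. The source mechanism, coordinates, and station below are made-up placeholders purely for illustration, not values tied to this particular database:

>>> from instaseis import Source, Receiver
>>> # Hypothetical explosive source and receiver, purely for illustration.
>>> src = Source(latitude=10.0, longitude=20.0, depth_in_m=30000.0,
...              m_rr=1e20, m_tt=1e20, m_pp=1e20, m_rt=0.0, m_rp=0.0, m_tp=0.0)
>>> rec = Receiver(latitude=40.0, longitude=50.0, network="AB", station="XYZ")
>>> st = db.get_seismograms(source=src, receiver=rec, components="Z")
>>> print(st)  # an ObsPy Stream holding the vertical-component seismogram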

Deployment and Performance Considerations

Network latency and throughput are the limiting factors for the achievable speed when using the Instaseis server. The server is based on Tornado, resulting in asynchronous network I/O. The database access and file I/O, on the other hand, are, by design, synchronous. Each Instaseis server should thus easily be able to serve dozens or more concurrent users with acceptable speed.

Note

If you plan to run this on a machine with outward-facing ports, please either know what you are doing or ask your network admin to take a look so as not to compromise the security of the whole network. Running this on an institute's internal network or some managed server, on the other hand, should be less of a security issue.

To get a stable server you will have to properly deploy it. A common option for deploying Tornado apps is to use supervisor for process management and nginx as a web server and reverse proxy. The exact configuration differs from system to system, but plenty of tutorials are available online.

Supervisor takes care of starting, monitoring, and restarting the Instaseis server, making sure it runs at all times. A very minimal supervisor configuration that launches two Instaseis server instances serving two different databases:

[program:instaseis_20s]
command=/path/to/python -m instaseis.server --port=8765 --buffer_size_in_mb=50 /path/to/20s_PREM_ANI_FORCES
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/instaseis_20s.log

[program:instaseis_10s]
command=/path/to/python -m instaseis.server --port=8766 --buffer_size_in_mb=100 /path/to/10s_PREM_ANI_FORCES
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/instaseis_10s.log
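
Once the configuration is placed where supervisor reads it (the exact location, e.g. a file under /etc/supervisor/conf.d/, depends on your system and is only an assumption here), the standard supervisorctl commands load and inspect the processes:

$ supervisorctl reread
$ supervisorctl update
$ supervisorctl status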

Now nginx can be used as a reverse proxy to map the internal ports to nice URLs. Also make sure to block all ports you don't need with your system's firewall. The following example configuration maps both Instaseis server instances started with supervisor to http://site-name.org:8080/20s_PREM_ANI_FORCES and http://site-name.org:8080/10s_PREM_ANI_FORCES. One can easily imagine doing this for a number of models at various frequencies.

server {
    listen 8080;
    server_name localhost;
    location /20s_PREM_ANI_FORCES/ {
        rewrite ^/20s_PREM_ANI_FORCES/?(.*)$ /$1 break;
        proxy_pass  http://127.0.0.1:8765;
    }
    location /10s_PREM_ANI_FORCES/ {
        rewrite ^/10s_PREM_ANI_FORCES/?(.*)$ /$1 break;
        proxy_pass  http://127.0.0.1:8766;
    }
}

Load balancing can also be achieved with a combination of supervisor and nginx. Since Instaseis itself is oftentimes I/O bound this might not be worth the effort, but it depends on your specific system and is a potential solution if you face performance issues.
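
A rough sketch of what that could look like: launch several instances serving the same database on different ports via supervisor and let nginx distribute requests across them with an upstream block. The second port (8767) is a made-up example and would correspond to an additional [program:...] entry in the supervisor configuration:

upstream instaseis_20s_pool {
    server 127.0.0.1:8765;
    server 127.0.0.1:8767;
}

server {
    listen 8080;
    server_name localhost;
    location /20s_PREM_ANI_FORCES/ {
        rewrite ^/20s_PREM_ANI_FORCES/?(.*)$ /$1 break;
        proxy_pass http://instaseis_20s_pool;
    }
}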

Logging

The Instaseis server starting script

$ python -m instaseis.server

offers basic functionality to log to standard output. By default it logs at the INFO level. The --quiet flag deactivates all logging output, and the --log-level argument changes the logging level. Please read up on Python logging for more details.
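
For example, to only see warnings and errors, assuming the standard Python logging level names are accepted by --log-level:

$ python -m instaseis.server --port 8765 --log-level WARNING /path/to/db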

The Instaseis server is based on the Tornado framework, an asynchronous Python web server. To customize logging, read this document, use this source code file as a template for your custom server starting script, and modify it however you see fit.
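
If all you want is to tune the verbosity of Tornado's own loggers from such a custom starting script, something along these lines may be enough. This is a generic sketch using only the standard library and Tornado's documented logger names, not an Instaseis API:

import logging

# Tornado logs through these three named loggers; silence access logs,
# keep application-level warnings and errors.
logging.getLogger("tornado.access").setLevel(logging.ERROR)
logging.getLogger("tornado.application").setLevel(logging.WARNING)
logging.getLogger("tornado.general").setLevel(logging.WARNING)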

REST-like API Documentation

If you wish to use the Instaseis server without the Python client, this documentation might be helpful. The Instaseis server offers a REST-like API with currently nine endpoints.
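
As a quick sanity check without any client, the server can also be queried directly from the command line. The /info route shown here is assumed to be the one the Python client uses to retrieve the database metadata printed above; consult the endpoint documentation for the authoritative list of routes:

$ curl http://127.0.0.1:8765/info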