What is a plugin?

WARNING: Plugins are incredibly powerful. They are given credentials directly, so it's important to NEVER execute plugins from an unknown source.

A plugin is a collection of Python files that extends what the Developer MultiTool can do. On a vanilla installation, plugins are stored in the src/ directory in a folder called dmt_plugins. The builtin plugins included with DMT are in a special folder called builtin. Your own plugin should be added in the form of a classpath (a concept borrowed from the Java world). Your classpath should be in the form com.example.module.plugin, where example.com is a domain you control.

Modules allow you to organize your plugins in whatever structure you want, and the files that make up the plugin are stored in the module folder. At some point in your classpath directory structure you should have the config.py, setup.py, and run.py files. This is how DMT registers your plugin and extracts the information it needs to structure tasks.
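
For example, a plugin registered under the classpath com.example.module.plugin from above would sit alongside the builtin plugins roughly like this (the exact layout of your own module folders is up to you):

src/
    dmt_plugins/
        builtin/
            log/
                config.py
                setup.py
                run.py
        com/
            example/
                module/
                    plugin/
                        config.py
                        setup.py
                        run.py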

Tasks and Plugins

A Task model in DMT has a type field. This field should correspond to the classpath of the plugin, ending at the directory where the config, setup, and run files live. When a task is created, the information in config.py is used to structure the variables that apply to the task. When a task is invoked in the run_a_run method, the setup method is invoked first, followed by the run method.
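
As a rough sketch of how that lookup might work (not the actual DMT implementation), a dispatcher could resolve the task's type into the plugin's modules with importlib; the invoke_task name here is invented for illustration:

import importlib


def invoke_task(task, params, context):
    # Assume task.type holds the plugin's dotted classpath, e.g.
    # 'dmt_plugins.builtin.log', with setup.py and run.py directly under it.
    setup_module = importlib.import_module(task.type + '.setup')
    run_module = importlib.import_module(task.type + '.run')

    setup_module.setup(params)              # let the plugin prepare
    return run_module.run(params, context)  # then execute the task itself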

Anatomy of a plugin

A plugin is made up of at least three files: config.py, setup.py, and run.py. Each is described in more detail below.

Plugin Structure

We'll demonstrate the structure of a plugin for DMT by taking a look at the builtin plugin, Log. Log will log a message into the run log for a particular run. It is useful for testing out the run_a_run method and seeing how things work.

config.py

def get_displayname():
    return 'Log'

def get_vars():
    return ['message']

def get_description():
    return "Log out particular messages or anything else you need to log."

These three methods tell DMT what the plugin needs. Vars are exposed in the UI when a task is created. The display name is what the plugin shows up as in drop-downs. Finally, the description is a developer-provided description of what the plugin does.

setup.py

import logging

logger = logging.getLogger("dmt_plugins.builtin.log.setup")


def setup(params):
    logger.info(msg="A logging task has been setup...")
    return 0

DMT calls this setup method before run, giving the developer a place to set things up for the run method.

run.py

from dmt.utilities import write_to_log

import logging

logger = logging.getLogger("dmt_plugins.builtin.log.run")


def run(params, context):
    logger.info(msg="A logging task has been run...")
    current_run = params['run']
    task = params['task']
    write_to_log(run=current_run, task=task, msg=context['message'])
    return 0

Finally, we see the run method. It is passed the params and the context created during run_a_run. This plugin writes a simple log entry out to the run log.

Writing your first plugin

To get started writing your first plugin, copy the Log plugin into a new directory. From there you can make modifications, add a task that uses your plugin, and observe the results.
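
As a rough sketch, and assuming the builtin Log plugin lives at src/dmt_plugins/builtin/log in your installation, the copy might look like this (the target classpath com.example.hello is made up for illustration):

import shutil

# Copy the builtin Log plugin into a new, hypothetical classpath directory.
# Adjust both paths to wherever your installation keeps dmt_plugins.
shutil.copytree(
    'src/dmt_plugins/builtin/log',
    'src/dmt_plugins/com/example/hello',
)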

Important Variables

Set up your variables in the get_vars() method. You can add as many variables as you like. These are created and bound to a task at the time the task is created. Once the task is created, you can navigate to the task in the Workflow view and adjust the values for the variables. These variables can also be overridden at the Environment level, which lets you use a different set of variables in a production environment.
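
For example, a hypothetical deployment plugin might expose several variables; the names below are invented for illustration:

def get_vars():
    # Each entry becomes a variable bound to the task when the task is
    # created, editable in the Workflow view and overridable per Environment.
    return ['source_path', 'destination_path', 'retries']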

More Advanced Plugins

Let's take a look at a more advanced plugin, the Execute Command plugin:

config.py

def get_displayname():
    return 'Shell - Execute Command'

def get_vars():
    return ['command']

def get_description():
    return "Execute a shell command on a remote server."

setup.py

from dmt.utilities import write_to_log


def setup(params):
    current_run = params['run']
    job = params['job']
    task = params['task']

    write_to_log(run=current_run, task=task, msg="Starting execute command task setup...")

run.py

from io import StringIO

import paramiko

from dmt.utilities import write_to_log


def run(params, context):
    current_run = params['run']
    task = params['task']

    write_to_log(run=current_run, task=task, msg="Starting execute command task run...")

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.WarningPolicy())
    # Authenticate with a password when one is provided; otherwise fall back
    # to the private key supplied through the context.
    if context.get('_password', None):
        client.connect(context['_hostname'], context['_port'], context['_username'], context['_password'])
    else:
        file_like_key = StringIO(context.get('_private_key', None))
        pkey = paramiko.RSAKey.from_private_key(file_obj=file_like_key)
        client.connect(
            hostname=context['_hostname'],
            port=context['_port'],
            username=context['_username'],
            pkey=pkey
        )

    stdin, stdout, stderr = client.exec_command(context['command'])
    exit_status = stdout.channel.recv_exit_status()
    for line in stdout.readlines():
        write_to_log(run=current_run, task=task, msg="STDOUT: {line}".format(line=line))
    for line in stderr.readlines():
        write_to_log(run=current_run, task=task, msg="STDERR: {line}".format(line=line))
    write_to_log(run=current_run, task=task, msg="Exit status is {status}".format(status=exit_status))
    if exit_status != 0:
        raise Exception("Exception encountered trying to execute command...")

Config and setup are fairly similar to the Log plugin, but there is a lot more meat in the run method. In the run method, we invoke paramiko to connect to a remote server and execute a command. DMT provides some variables out of the box that tell us how to connect: the _hostname, _port, _username, _password, and _private_key variables. If you look at the run_a_run method, you can see where the context is populated using a method in the utilities file.
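
To make that concrete, here is a hypothetical context as run() might receive it. The underscore-prefixed keys are supplied by DMT, while command comes from the task's own variables; all of the values below are invented:

context = {
    'command': 'uptime',                  # task-level variable from get_vars()
    '_hostname': 'app01.example.com',     # provided by the Environment/Identity
    '_port': 22,
    '_username': 'deploy',
    '_password': None,                    # empty when a private key is used instead
    '_private_key': '-----BEGIN RSA PRIVATE KEY-----\n...',
}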

This is where the Identity for the task comes from: an Identity is applied to the task by virtue of which Environment the task is run against. Then we have the command variable, which we configure on the task itself. By combining the Identity, Environment, and Task, we've created an extensible unit of work that can behave differently depending on which environment it executes against. You can imagine running one command on a Debian server and another on a FreeBSD server depending on which Environment/Identity combination the workflow is executing against.

WARNING: You'll also note that plugins are incredibly powerful. They are given credentials directly, so it's important to NEVER execute plugins from an unknown source.