Control, Watch and Launch your applications and jobs over HTTP.
Gaffer is a set of Python modules and tools to easily maintain and interact with your applications or jobs launched on different machines over HTTP and websockets.
It promotes distributed and decentralized topologies without single points of failure, enabling fault tolerance and high availability.
- RESTful HTTP Api
- Websockets and SOCKJS support to interact with a gaffer node from any browser or SOCKJS client.
- Framework to manage and interact with your applications and jobs on different machines
- Server and command line tools to manage and interact with your processes
- manages topology information. Clients query gaffer_lookupd to discover gaffer nodes for a specific job or application.
- Possibility to interact with the STDIO and pipes of your applications and processes
- Subscribe to process statistics per process or process template and get them in near real time
- Procfile application support (see Gaffer) as well as JSON config support
- Supervisor-like features.
- Fully evented. Uses the libuv event loop via the pyuv library
- Flapping: handle cases where your processes crash too much
- Easily extensible: add your own endpoint, create your client, embed gaffer in your application, ...
- Compatible with Python 2.7.x and 3.x
Note
gaffer source code is hosted on Github
The manager module is a core component of gaffer. A Manager is responsible for maintaining processes and allows you to interact with them.
Bases: object
Manager - maintain processes alive
A manager is responsible for keeping processes alive and managing actions on them:
The design is pretty simple. The manager is running on the default event loop and listening on events. Events are sent when a process exits or from any method call. The control of a manager can be extended by adding apps on startup. For example gaffer provides an application allowing you to control processes via HTTP.
Running an application is done like this:
import pyuv
from gaffer.manager import Manager
from gaffer.http_handler import HttpHandler

# initialize the manager with the default loop
loop = pyuv.Loop.default_loop()
m = Manager(loop=loop)
# start the manager with the HTTP application
m.start(apps=[HttpHandler()])
... # do something
m.stop() # stop the controller
m.run() # run the event loop
Note
The loop can be omitted if the first thing you do is launching a manager. The run function is here for convenience. You can of course just run loop.run() instead
Warning
The manager should be stopped last to prevent any lock in your application.
Like ``scale(1)`` but the process won't be kept alive at the end. It is also not handled during scaling or reaping.
get an OS process by ID. A process is a gaffer.Process instance attached to a process state that you can use.
load a process config object.
Args:
reload a process config. The number of processes is reset to the one in the settings and all current processes are killed
Convenience function to use in place of loop.run(). If the manager is not started it raises a RuntimeError.
Note: if you want to use the default loop separately in this thread, just use the start function and run the loop somewhere else.
Scale the number of processes for a job. By using this function you can increase, decrease or set the number of processes in a template. The change is handled once the event loop is idling.
n can be a positive or negative integer. It can also be a string containing the operation to do. For example:
m.scale("sometemplate", 1) # increase of 1
m.scale("sometemplate", -1) # decrease of 1
m.scale("sometemplate", "+1") # increase of 1
m.scale("sometemplate", "-1") # decrease of 1
m.scale("sometemplate", "=1") # set the number of processes to 1
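The accepted values of n can be illustrated with a small stand-alone helper (this is a sketch of the semantics only, not gaffer's internal parser):

```python
def apply_scale(current, n):
    """Return the new process count after applying a scale operation.

    n may be an int (relative change) or a string such as "+1", "-1"
    or "=1" (absolute set), mirroring the values accepted by scale().
    """
    if isinstance(n, str):
        if n.startswith("="):
            return int(n[1:])        # "=1" sets the count absolutely
        return current + int(n)      # "+1" / "-1" are relative changes
    return current + n               # plain integers are relative too

apply_scale(2, "+1")  # 3
```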
stop a job. All processes of this job are stopped and won’t be restarted by the manager
stop all processes of a job. Processes are just exiting and will be restarted by the manager.
This tutorial exposes the usage of gaffer as a tool. For a general overview or how to integrate it in your application you should read the overview page.
Gaffer allows you to launch OS processes and supervise them. Three command line tools allow you to use it for now:
A process template is the way you describe the launch of an OS process: how many you want to launch on startup, how many times you want to restart it in case of failure (flapping), and so on. A process template can be loaded using any tool, or on gafferd startup using its configuration file.
To use gaffer tools you need to:
For more information on gafferd see its documentation page.
To launch gafferd run the following command line:
$ gafferd -c /path/to/gaffer.ini
If you want to launch custom plugins with gafferd you can also set the path to them:
$ gafferd -c /path/to/gaffer.ini -p /path/to/plugin
Note
The default plugin path is relative to the user launching gaffer and is set to ~/.gaffer/plugins.
Note
To launch it in daemon mode use the --daemon option.
Then with the default configuration, you can check if gafferd is alive
The configuration file can be used to set the global configuration of gafferd, setup some processes and webhooks.
Note
Since the configuration is passed to the plugin you can also use this configuration file to setup your plugins.
Here is a simple example of a config to launch the dummy process from the example folder:
[process:dummy]
cmd = ./dummy.py
numprocesses = 1
redirect_output = stdout, stderr
Note
Processes can be grouped. You can then start and stop all processes of a group and check whether a process is a member of a group using the HTTP api (sadly this is not yet possible using the command line).
For example, if you want dummy to be part of the group test, then [process:dummy] becomes [process:test:dummy]. As you can see, a process template can only be part of one group.
Groups are useful when you want to manage a configuration for one application or processes / users.
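For instance, to put the dummy template from above into a test group, only the section name changes (a minimal sketch; the settings themselves are unchanged):

```ini
[process:test:dummy]
cmd = ./dummy.py
numprocesses = 1
redirect_output = stdout, stderr
```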
Each process section should be prefixed by process:. Possible parameters are:
Sometimes you also want to pass a custom environment to your process. This is done by creating a special configuration section named env:processname. Each environment section is prefixed by env:. For example, to pass a special PORT environment variable to dummy:
[env:dummy]
port = 80
All environment variable keys are passed in uppercase to the process environment.
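To illustrate the uppercasing with a simplified stand-in (this is not gaffer's actual code): the keys of the env: section end up uppercased in the child's environment, so inside dummy.py the value is readable as os.environ["PORT"]:

```python
def build_child_env(section):
    """Uppercase the keys of an env: section, as described above,
    before they are passed to the spawned process (simplified sketch)."""
    return {key.upper(): value for key, value in section.items()}


# the [env:dummy] section, as parsed from the ini file
child_env = build_child_env({"port": "80"})
print(child_env)  # {'PORT': '80'}
```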
The gaffer command line tool is an interface to the gaffer HTTP api and includes support for loading/unloading Procfile applications, scaling them up and down, etc.
It can also be used as a manager for Procfile-based applications, similar to foreman but using the gaffer framework. It can run your application directly using a Procfile, or export it to a gafferd configuration file or simply to a JSON file that you can send to gafferd using the HTTP api.
For example using the following Procfile:
dummy: python -u dummy_basic.py
dummy1: python -u dummy_basic.py
You can launch all the programs in this procfile using the following command line:
$ gaffer start
Or load them on a gaffer node:
$ gaffer load
All processes in the Procfile will then be loaded to gafferd and started.
If you want to start a process with a specific environment file you can create a .env file in the application folder (or use the command line option to tell gaffer which one to use). Environment variables are given one per line. Ex:
PORT=80
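A .env file of this shape is easy to parse; here is a minimal sketch of the format, assuming the simple one KEY=value per line convention (edge cases such as comments or quoting are not covered):

```python
def parse_env_file(text):
    """Parse a minimal .env file: one KEY=value per line.
    Blank lines and lines without '=' are skipped; the value is
    everything after the first '=' sign."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

parse_env_file("PORT=80\n")  # {'PORT': '80'}
```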
and then scale them up and down:
$ gaffer scale dummy=3 dummy1+2
Scaling dummy processes... done, now running 3
Scaling dummy1 processes... done, now running 3
Have a look at the Gaffer page for more information about the commands.
gafferctl can be used to run any command listed below. For example, you can get a list of all process templates:
$ gafferctl processes
You can simply add a process using the load command:
$ gafferctl load_process ../test.json
$ cat ../test.json | gafferctl load_process -
$ gafferctl load_process - < ../test.json
test.json can be:
{
    "name": "somename",
    "cmd": "cmd to execute",
    "args": [],
    "env": {},
    "uid": int or "",
    "gid": int or "",
    "cwd": "working dir",
    "detach": false,
    "shell": false,
    "os_env": false,
    "numprocesses": 1
}
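Before sending such a file to gafferd you can sanity-check it locally with the standard json module (the placeholder values such as int or "" must of course be replaced with real values first):

```python
import json

# a filled-in variant of the template above, with real values
config = """
{
    "name": "somename",
    "cmd": "sleep 10",
    "args": [],
    "env": {},
    "cwd": ".",
    "detach": false,
    "shell": false,
    "os_env": false,
    "numprocesses": 1
}
"""

parsed = json.loads(config)  # raises ValueError on invalid JSON
print(parsed["name"], parsed["numprocesses"])  # somename 1
```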
You can also add a process using the add command:
gafferctl add name inc
where name is the name of the process to create and inc the number of new OS processes to start.
To start a process run the following command:
$ gafferctl start name
And stop it using the stop command.
To scale up a process use the add command. For example to increase the number of processes by 3:
$ gafferctl add name 3
To decrease the number of processes use the sub command.
The watch command allows you to watch changes on a local or remote gaffer node.
For more information see the gafferctl page.
Gafferd is a server able to launch and manage processes. It can be controlled via the HTTP api.
$ gafferd -h
usage: gafferd [-h] [-c CONFIG_FILE] [-p PLUGINS_DIR] [-v] [-vv] [--daemon]
[--pidfile PIDFILE] [--bind BIND] [--certfile CERTFILE]
[--keyfile KEYFILE] [--backlog BACKLOG]
[config]
Run some watchers.
positional arguments:
config configuration file
optional arguments:
-h, --help show this help message and exit
-c CONFIG_FILE, --config CONFIG_FILE
configuration file
-p PLUGINS_DIR, --plugins-dir PLUGINS_DIR
default plugin dir
-v verbose mode
-vv like verbose mode but output stream too
--daemon Start gaffer in the background
--pidfile PIDFILE
--bind BIND default HTTP binding
--certfile CERTFILE SSL certificate file for the default binding
--keyfile KEYFILE SSL key file for the default binding
--backlog BACKLOG default backlog
[gaffer]
http_endpoints = public
[endpoint:public]
bind = 127.0.0.1:5000
;certfile=
;keyfile=
[webhooks]
;create = http://some/url
;proc.dummy.spawn = http://some/otherurl
[process:dummy]
cmd = ./dummy.py
;cwd = .
;uid =
;gid =
;detach = false
;shell = false
; flapping format: attempts=2, window=1., retry_in=7., max_retry=5
;flapping = 2, 1., 7., 5
numprocesses = 1
redirect_output = stdout, stderr
; redirect_input = true
; graceful_timeout = 30
[process:echo]
cmd = ./echo.py
numprocesses = 1
redirect_output = stdout, stderr
redirect_input = true
Plugins are a way to enhance basic gafferd functionality in a custom manner. Plugins allow you to load any gaffer application and site plugins. You can, for example, use the plugin system to add a simple UI to administrate gaffer using the HTTP interface.
A plugin has the following structure:
/pluginname
_site/
plugin/
__init__.py
...
***.py
A plugin can be discovered by adding one or more modules that expose a class inheriting from gaffer.Plugin. Every plugin file should have an __all__ attribute containing the implemented plugin class. Ex:
from gaffer import Plugin
__all__ = ['DummyPlugin']
from .app import DummyApp
class DummyPlugin(Plugin):
name = "dummy"
version = "1.0"
description = "test"
def app(self, cfg):
return DummyApp()
The dummy app here only prints some info when started or stopped:
class DummyApp(object):
    def start(self, loop, manager):
        print("start dummy app")
    def stop(self):
        print("stop dummy")
    def restart(self):
        print("restart dummy")
See the Overview for more info. You can try it in the example folder:
$ cd examples
$ gafferd -c gaffer.ini -p plugins/
Installing plugins can be done by placing the plugin in the plugin folder. The plugin folder is either set in the settings file using plugin_dir in the gaffer section, or using the -p option of the command line.
The default plugin dir is set to ~/.gafferd/plugins.
Plugins can have "sites" in them: any plugin under the plugins directory that contains a _site directory will have that directory's content served statically at the /_plugin/[plugin_name]/ url. Such sites can be added even after the process has started.
Installed plugins that do not contain any Python related content, will automatically be detected as site plugins, and their content will be moved under _site.
If you rely on some plugins, you can declare them as mandatory using the mandatory attribute of the plugin class. For example:
class DummyPlugin(Plugin):
...
mandatory = ['somedep']
module to parse and manage a Procfile
Bases: object
Procfile object to parse a procfile and a list of given environment files.
return a ConfigParser object. It can be used to generate a gafferd settings file or a configuration file that can be included.
Gaffer is a process management framework but also a set of command line tools allowing you to manage processes on your machine or a cluster. All the command line tools obviously use the framework.
gaffer is an interface to the gaffer HTTP api and includes support for loading/unloading apps, scaling them up and down, etc. It can also be used as a manager for Procfile-based applications, similar to foreman but using the gaffer framework. It can run your application directly using a Procfile, or export it to a gafferd configuration file or simply to a JSON file that you can send to gafferd using the HTTP api.
Gafferd is a server able to launch and manage processes. It can be controlled via the HTTP api. It is controlled by gafferctl and can be used to handle many processes.
The gafferctl tool allows you to control a local or remote gafferd node via the HTTP API. You can show process information, add new processes, change their configuration, get changes on the nodes in real time, etc.
The process module wraps a process and its IO redirection.
Bases: object
class wrapping a process
Args:
return the process info. If the process is monitored it returns the last information stored asynchronously by the watcher
start to monitor the process
A listener can be any callable and receives ("stat", process_info)
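For illustration, such a listener is just a function with that signature; a minimal sketch (the process_info keys shown here are an assumption made for the example, not gaffer's documented schema):

```python
stats = []

def on_stat(evtype, info):
    """Collect monitoring samples; the monitor calls listeners
    with ("stat", process_info) for each sample."""
    if evtype == "stat":
        stats.append(info)

# simulate one call as the monitor would make it
on_stat("stat", {"cpu": 0.5, "mem": 1024})
print(len(stats))  # 1
```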
Bases: object
object to maintain a process config
create a Process object from the configuration
Args:
Bases: object
object to retrieve process stats
Bases: object
redirect stdin and allow multiple senders to write to the same pipe
Bases: gaffer.process.RedirectStdin
create custom stdio
Many events happen in gaffer.
Manager events have the following format:
{
    "event": "<nameofevent>",
    "name": "<templatename>"
}
All process events are prefixed by proc.<name> to make pattern matching easier, where <name> is the name of the process template.
Events are:
proc.<name>.start: the template <name> starts to spawn processes
proc.<name>.spawn : one OS process using the process <name> template is spawned. Message is:
{
    "event": "proc.<name>.spawn",
    "name": "<name>",
    "detach": false,
    "pid": int
}
Note
pid is the internal pid
proc.<name>.exit: one OS process of the <name> template has exited. Message is:
{
    "event": "proc.<name>.exit",
    "name": "<name>",
    "pid": int,
    "exit_code": int,
    "term_signal": int
}
proc.<name>.stop: all OS processes in the template <name> are stopped.
proc.<name>.stop_pid: One OS process of the template <name> is stopped. Message is:
{
    "event": "proc.<name>.stop_pid",
    "name": "<name>",
    "pid": int
}
proc.<name>.reap: One OS process of the template <name> is reaped. Message is:
{
    "event": "proc.<name>.reap",
    "name": "<name>",
    "pid": int
}
This module offers a common way to subscribe to and emit events. All events in gaffer use it.
from gaffer.events import EventEmitter

event = EventEmitter()
# subscribe to all events with the pattern a.*
event.subscribe("a", subscriber)
# subscribe to all events "a.b"
event.subscribe("a.b", subscriber2)
# subscribe to all events (wildcard)
event.subscribe(".", subscriber3)
# publish an event
event.publish("a.b", arg, namedarg=val)
In this example all subscribers will be notified of the event. A subscriber is just a callable with the signature (event, *args, **kwargs).
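To make the matching semantics concrete, here is a tiny stand-in (not gaffer's implementation, and it dispatches synchronously where gaffer is asynchronous): a subscription to "a" receives "a" and any event under it such as "a.b", while "." receives everything.

```python
class MiniEmitter:
    """Simplified, synchronous illustration of the EventEmitter
    pattern semantics described above."""

    def __init__(self):
        self._subs = {}

    def subscribe(self, pattern, listener):
        self._subs.setdefault(pattern, []).append(listener)

    def publish(self, evtype, *args, **kwargs):
        for pattern, listeners in self._subs.items():
            # "." is the wildcard; a pattern also matches itself
            # and any event nested under it ("a" matches "a.b")
            if (pattern == "." or evtype == pattern
                    or evtype.startswith(pattern + ".")):
                for listener in listeners:
                    listener(evtype, *args, **kwargs)

seen = []
e = MiniEmitter()
e.subscribe("a", lambda ev, *a, **kw: seen.append(("a", ev)))
e.subscribe("a.b", lambda ev, *a, **kw: seen.append(("a.b", ev)))
e.subscribe(".", lambda ev, *a, **kw: seen.append((".", ev)))
e.publish("a.b")
print(seen)  # all three subscribers saw "a.b"
```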
Bases: object
Many events happen in gaffer. For example a process will emit the events "start", "stop", "exit".
This object offers a common interface to all event emitters
close the event emitter
This function clears the list of listeners and stops all idle callbacks
emit an event evtype
The event will be emitted asynchronously so we don’t block here
The gaffer command line tool is an interface to the gaffer HTTP api and includes support for loading/unloading Procfile applications, scaling them up and down, etc.
It can also be used as a manager for Procfile-based applications, similar to foreman but using the gaffer framework. It can run your application directly using a Procfile, or export it to a gafferd configuration file or simply to a JSON file that you can send to gafferd using the HTTP api.
For example using the following Procfile:
dummy: python -u dummy_basic.py
dummy1: python -u dummy_basic.py
You can launch all the programs in this procfile using the following command line:
$ gaffer start
Or load them on a gaffer node:
$ gaffer load
and then scale them up and down:
$ gaffer scale dummy=3 dummy1+2
Scaling dummy processes... done, now running 3
Scaling dummy1 processes... done, now running 3
-h, --help          show this help message and exit
--version           show version and exit
-f procfile, --procfile procfile
                    Specify an alternate Procfile to load
-d root, --directory root
                    Specify an alternate application root. This defaults to
                    the directory containing the Procfile [default: .]
-e k=v, --env k=v   Specify one or more .env files to load
--endpoint endpoint gafferd node URL to connect
                    [default: http://127.0.0.1:5000]
- export [-c concurrency|--concurrency concurrency]
  [--format=format] [--out=filename] [<name>]
  Export a Procfile.
  This command exports a Procfile to a gafferd process settings format. It can be either JSON that you could send to gafferd via the JSON API, or an ini file that can be included in the gafferd configuration.
  --format=format  ini or json
  --out=filename   path of the file where the export will be saved
- load [-c concurrency|--concurrency concurrency] [<name>]
  Load a Procfile application to gafferd.
  <name> is the name of the application recorded in gafferd. By default it will be the name of your project folder. You can use . to specify the current folder.
- ps [<appname>]
List information about your processes
<appname> is the name of the application (session) of processes recorded in gafferd. By default it will be the name of your project folder. You can use . to specify the current folder.
- run [-c] [<args>]...
Run one-off commands using the same environment as your defined processes
  -c concurrency  Specify the number of each process type to run. The value
                  passed in should be in the format process=num,process=num
  --concurrency concurrency
                  same as the -c option.
- scale [<appname>] [process=value]...
Scaling your process
Procfile applications can scale up or down instantly from the command line or API.
Scaling a process in an application is done using the scale command:
$ gaffer scale dummy=3
Scaling dummy processes... done, now running 3

Or both at once:

$ gaffer scale dummy=3 dummy1+2
Scaling dummy processes... done, now running 3
Scaling dummy1 processes... done, now running 3

- start [-c concurrency|--concurrency concurrency]
Start a process type or all process types from the Procfile.
  -c concurrency  Specify the number of each process type to run. The value
                  passed in should be in the format process=num,process=num
  --concurrency concurrency
                  same as the -c option.
- unload [<name>]
  Unload a Procfile application from a gafferd node
An HTTP api provided by the gaffer.http_handler.HttpHandler gaffer application can be used to control gaffer via HTTP. To embed it in your app just initialize your manager with it:
manager = Manager(apps=[HttpHandler()])
The HttpHandler can be configured to accept multiple endpoints and can be extended with new HTTP handlers. Internally we are using Tornado, so you can extend it either with rules using pure Tornado handlers or with wsgi apps.
Gaffer supports GET, POST, PUT, DELETE, OPTIONS HTTP verbs.
All messages (except some streams) are JSON encoded. All messages sent to gaffer should be JSON encoded.
Gaffer supports cross-origin resource sharing (aka CORS).
Main http endpoints are described in the description of the gafferctl commands in gafferctl:
Gafferctl uses this HTTP api extensively.
The output streams can be fetched by doing:
GET /streams/<pid>/<nameoffeed>
It accepts the following query parameters:
ex:
$ curl localhost:5000/streams/1/stderr?feed=continuous
STDERR 12
STDERR 13
STDERR 14
STDERR 15
STDERR 16
STDERR 17
STDERR 18
STDERR 19
STDERR 20
STDERR 21
STDERR 22
STDERR 23
STDERR 24
STDERR 25
STDERR 26
STDERR 27
STDERR 28
STDERR 29
STDERR 30
STDERR 31
$ curl localhost:5000/streams/1/stderr?feed=longpoll
STDERR 215
$ curl localhost:5000/streams/1/stderr?feed=eventsource
event: stderr
data: STDERR 20
event: stderr
data: STDERR 21
event: stderr
data: STDERR 22
$ curl localhost:5000/streams/1/stdout?feed=longpoll
STDOUTi 14
It is now possible to write to stdin via the HTTP api by sending:
POST to /streams/<pid>/stdin
Where <pid> is an internal process id that you can retrieve by calling GET /processes/<name>/_pids
ex:
$ curl -XPOST -d $'ECHO\n' localhost:5000/streams/2/stdin
{"ok": true}
$ curl localhost:5000/streams/2/stdout?feed=longpoll
ECHO
It is now possible to get stdin/stdout via a websocket. Writing to ws://HOST:PORT/wstreams/<pid> will send the data to stdin; any information written on stdout will then be sent back to the websocket.
See the echo client/server example in the example folder:
$ python echo_client.py
Sent
Reeiving...
Received 'ECHO
'
(test)enlil:examples benoitc$ python echo_client.py
Sent
Reeiving...
Received 'ECHO
Note
unfortunately the echo_client script can only be launched with python 2.7 :/
Note
to redirect stderr to stdout just use the same name when setting the redirect_output property on process creation.
module to return all streams from the managed processes to the console. This application subscribes to the manager to know when a process is created or killed and displays the information. When an OS process is spawned it subscribes to its streams, if any are redirected, and prints the output to the console. This module is used by Gaffer.
Note
if colorize is set to true, each template will have a different colour
Bases: object
wrapper around colorama to ease output creation. Don’t use it directly; instead, use colored(name_of_color, lines) to return the colored output.
Colors are: cyan, yellow, green, magenta, red, blue, intense_cyan, intense_yellow, intense_green, intense_magenta, intense_red, intense_blue.
lines can be a list or a string.
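As an illustration only (a simplified stand-in using raw ANSI codes rather than colorama, and not gaffer's actual implementation), such a helper might look like:

```python
# minimal sketch: map a subset of the color names to ANSI escape codes
COLORS = {
    "cyan": "\033[36m", "yellow": "\033[33m", "green": "\033[32m",
    "magenta": "\033[35m", "red": "\033[31m", "blue": "\033[34m",
}
RESET = "\033[0m"

def colored(name_of_color, lines):
    """Wrap lines (a string or a list of strings) in the named color."""
    if isinstance(lines, str):
        lines = [lines]
    code = COLORS[name_of_color]
    return "".join("%s%s%s" % (code, line, RESET) for line in lines)

print(colored("green", "hello"))  # prints "hello" in green
```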