Pipelines, Builds and Jobs

Build and job are two classes representing the same object. Builds are used in the v3 API, jobs in the v4 API.
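Code that must run against both API versions can pick the right manager at runtime. The following is a sketch with a hypothetical helper (not part of python-gitlab), keyed on the `api_version` string exposed by the `gitlab.Gitlab` object:

```python
def job_manager(project, api_version):
    """Return the manager holding CI runs for this project:
    `builds` for the v3 API, `jobs` for the v4 API.

    Hypothetical convenience helper; `api_version` is the string
    found on a gitlab.Gitlab instance as `gl.api_version`.
    """
    return project.jobs if api_version == '4' else project.builds
```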

Project pipelines

A pipeline is a group of jobs executed by GitLab CI.

Examples

List pipelines for a project:

pipelines = project.pipelines.list()

Get a pipeline for a project:

pipeline = project.pipelines.get(pipeline_id)

Create a pipeline for a particular reference:

pipeline = project.pipelines.create({'ref': 'master'})

Retry the failed builds for a pipeline:

pipeline.retry()

Cancel builds in a pipeline:

pipeline.cancel()
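The get/retry calls above can be combined into a small guard that only retries when a retry is actually warranted. This is a sketch, assuming only the `status` attribute and `retry()` method shown on pipeline objects above:

```python
def retry_if_failed(pipeline):
    """Retry a pipeline only if its current status is 'failed'.

    `pipeline` is expected to behave like the object returned by
    project.pipelines.get(): a `status` attribute and a retry() method.
    Returns True when a retry was issued, False otherwise.
    """
    if pipeline.status == 'failed':
        pipeline.retry()
        return True
    return False
```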

Triggers

Triggers provide a way to interact with GitLab CI. Using a trigger, a user or an application can run a new build/job for a specific commit.

Examples

List triggers:

triggers = project.triggers.list()

Get a trigger:

trigger = project.triggers.get(trigger_token)

Create a trigger:

trigger = project.triggers.create({}) # v3
trigger = project.triggers.create({'description': 'mytrigger'}) # v4

Remove a trigger:

project.triggers.delete(trigger_token)
# or
trigger.delete()

Pipeline schedule

You can schedule pipeline runs using a cron-like syntax. Variables can be associated with the scheduled pipelines.

Examples

List pipeline schedules:

scheds = project.pipelineschedules.list()

Get a single schedule:

sched = project.pipelineschedules.get(schedule_id)

Create a new schedule:

sched = project.pipelineschedules.create({
    'ref': 'master',
    'description': 'Daily test',
    'cron': '0 1 * * *'})

Update a schedule:

sched.cron = '1 2 * * *'
sched.save()

Delete a schedule:

sched.delete()

Create a schedule variable:

var = sched.variables.create({'key': 'foo', 'value': 'bar'})

Edit a schedule variable:

var.value = 'new_value'
var.save()

Delete a schedule variable:

var.delete()
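When building schedules programmatically, a cheap local check on the cron expression can catch obvious mistakes before the create call. This is a hypothetical helper, not a python-gitlab API; the server remains the authority on what is valid:

```python
def looks_like_cron(expr):
    """Cheap local sanity check for a cron expression: five
    whitespace-separated fields (minute hour day month weekday).

    This does not validate the contents of each field; GitLab
    performs the real validation when the schedule is created.
    """
    return len(expr.split()) == 5
```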

Project and group variables

You can associate variables with projects and groups to modify the behavior of build/job scripts.

Examples

List variables:

p_variables = project.variables.list()
g_variables = group.variables.list()

Get a variable:

p_var = project.variables.get('key_name')
g_var = group.variables.get('key_name')

Create a variable:

var = project.variables.create({'key': 'key1', 'value': 'value1'})
var = group.variables.create({'key': 'key1', 'value': 'value1'})

Update a variable value:

var.value = 'new_value'
var.save()

Remove a variable:

project.variables.delete('key_name')
group.variables.delete('key_name')
# or
var.delete()
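The list/create/save calls above compose into an "upsert" pattern: update the variable when it exists, create it otherwise. A sketch, assuming only the manager methods and variable attributes (`key`, `value`, `save()`) shown in this section:

```python
def set_variable(owner, key, value):
    """Create the variable if it does not exist, update it otherwise.

    `owner` may be a project or a group; only the variables manager's
    list()/create() methods and the variables' key/value/save()
    members are assumed. Returns the variable object.
    """
    for var in owner.variables.list():
        if var.key == key:
            var.value = value
            var.save()
            return var
    return owner.variables.create({'key': key, 'value': value})
```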

Builds/Jobs

Builds/Jobs are associated with projects, pipelines and commits. They provide information about the builds/jobs that have been run, and methods to manipulate them.

Examples

Jobs are usually triggered automatically, but you can explicitly trigger a new job on a project:

project.trigger_build('master', trigger_token,
                      {'extra_var1': 'foo', 'extra_var2': 'bar'})

List jobs for the project:

builds = project.builds.list()  # v3
jobs = project.jobs.list()  # v4

To list builds for a specific commit, create a ProjectCommit object and use its builds method (v3 only):

# v3 only
commit = gl.project_commits.get(commit_sha, project_id=1)
builds = commit.builds()

To list builds for a specific pipeline or get a single job within a specific pipeline, create a ProjectPipeline object and use its jobs method (v4 only):

# v4 only
project = gl.projects.get(project_id)
pipeline = project.pipelines.get(pipeline_id)
jobs = pipeline.jobs.list()  # gets all jobs in pipeline
job = pipeline.jobs.get(job_id)  # gets one job from pipeline

Get a job:

project.builds.get(build_id)  # v3
project.jobs.get(job_id)  # v4

Get a job artifact:

build_or_job.artifacts()

Warning

Artifacts are entirely stored in memory in this example.

You can download artifacts as a stream. Provide a callable to handle the stream:

class Foo(object):
    def __init__(self):
        self._fd = open('artifacts.zip', 'wb')

    def __call__(self, chunk):
        self._fd.write(chunk)

target = Foo()
build_or_job.artifacts(streamed=True, action=target)
del target  # flushes data to disk

Alternatively, you can stream the output directly into a file and unzip it afterwards:

import os
import subprocess

zipfn = "___artifacts.zip"
with open(zipfn, "wb") as f:
    build_or_job.artifacts(streamed=True, action=f.write)
subprocess.run(["unzip", "-bo", zipfn])
os.unlink(zipfn)
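If you would rather avoid shelling out to unzip, the archive can be unpacked with the standard zipfile module instead. A sketch, assuming the same `artifacts(streamed=True, action=...)` call shown above; note that this variant buffers the whole archive in memory, so prefer the on-disk approach for large artifacts:

```python
import io
import zipfile

def extract_artifacts(build_or_job, dest='.'):
    """Stream the artifacts archive into a memory buffer, then
    unpack it into `dest` using the standard zipfile module.

    The whole archive is held in memory, so this is only suitable
    for reasonably small artifacts.
    """
    buf = io.BytesIO()
    build_or_job.artifacts(streamed=True, action=buf.write)
    buf.seek(0)
    with zipfile.ZipFile(buf) as zf:
        zf.extractall(dest)
```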

Mark a job artifact as kept when expiration is set:

build_or_job.keep_artifacts()

Get a job trace:

build_or_job.trace()

Warning

Traces are entirely stored in memory unless you use the streaming feature. See the artifacts example.

Cancel/retry a job:

build_or_job.cancel()
build_or_job.retry()

Play (trigger) a job:

build_or_job.play()

Erase a job (artifacts and trace):

build_or_job.erase()
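After playing or retrying a job you often want to block until it reaches a final state. The following polling sketch assumes the object can re-read its attributes with refresh(); on versions without that method, re-fetch with project.jobs.get(job.id) instead. The sleep function is injectable to ease testing:

```python
import time

FINISHED_STATES = {'success', 'failed', 'canceled', 'skipped'}

def wait_for_job(job, timeout=600, poll=5, sleep=time.sleep):
    """Poll a job until it reaches a final state.

    Assumes `job` exposes a refresh() method that re-reads its
    attributes from the server, and a `status` attribute.
    Raises RuntimeError if the timeout is exceeded.
    """
    waited = 0
    while True:
        job.refresh()
        if job.status in FINISHED_STATES:
            return job.status
        if waited >= timeout:
            raise RuntimeError('job did not finish within %ss' % timeout)
        sleep(poll)
        waited += poll
```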