
On-premises setup (Linux)

Learn how to set up a Linux Remote Execution cluster on-premises

Running Kubernetes? See Kubernetes setup.

Summary

  1. Unpack the deployment kit
  2. Install the service package
  3. Add your license
  4. Configure the service
  5. Start the service
  6. Verify the cluster
  7. Run an example build
  8. Configure the client

Requirements

See baseline requirements.

As of 2022-03-15, our Remote Execution software only runs on Linux. Supported distros:

  • Debian 10 or newer
  • Ubuntu 18.04 or newer

Need other OS support? Contact us.

1. Unpack the deployment kit

Unpack engflow-re-<VERSION>.zip.
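
For example, using the unzip tool:

    unzip engflow-re-<VERSION>.zip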

It contains:

  • this documentation (./index.html)
  • the service package (./setup/engflow-re-services.deb)
  • EngFlow config file (./setup/on-prem/config)
  • example Bazel project (./example)

It does not contain a valid license file: ./setup/license is empty. We send you a license separately.

2. Install the service package

  1. Copy ./setup/engflow-re-services.deb to every machine.

  2. Run on every machine:

    sudo apt update
    
    sudo apt install ./setup/engflow-re-services.deb docker.io
    

    This installs the service binaries under /usr/bin/engflow/ and sets up the scheduler and worker systemd services.
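
    For example, assuming each machine is reachable over SSH with passwordless sudo and is listed one per line in a file named hosts.txt (a hypothetical name), you could distribute and install the package in one loop:

    # copy the package to each machine, then install it (and Docker) there
    for host in $(cat hosts.txt); do
      scp ./setup/engflow-re-services.deb "$host":/tmp/
      ssh "$host" 'sudo apt update && sudo apt install -y /tmp/engflow-re-services.deb docker.io'
    done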

Warning: do not copy your source tree onto these machines. The build tool uploads any files that build actions need.

3. Add your license

Copy your license onto every machine as /etc/engflow/license.
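
For example, if the license we sent you is saved locally as ./license (a hypothetical path), and reusing the hypothetical hosts.txt from step 2:

    # copy the license to each machine and move it into place
    for host in $(cat hosts.txt); do
      scp ./license "$host":/tmp/license
      ssh "$host" 'sudo mv /tmp/license /etc/engflow/license'
    done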

4. Configure the service

See the dedicated articles for details.

Tip: all service instances (schedulers and workers) can use the same config file. Schedulers ignore worker-specific options and vice versa.

  1. Customize ./setup/on-prem/config

    Set options common to every machine.

    1. Set --private_ip_selector

    2. Set --discovery

  2. Copy the file to every machine as /etc/engflow/config

  3. Customize the file per machine.

Tip: for a first-time trial setup we recommend using the default ./setup/on-prem/config. Later (and especially before moving to production) you should customize this config further.

See the Service Options Reference for more info.
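
As a rough illustration, after step 1 a minimal /etc/engflow/config might contain just those two options. The values below are placeholders, not recommendations; the exact syntax and accepted values are documented in the Service Options Reference:

    --private_ip_selector=<CIDR matching your machines' private network>
    --discovery=<your chosen discovery mechanism>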

5. Start the service

  1. SSH into every machine

  2. Run on every machine, depending on whether it is a worker or a scheduler:

    • If you use systemd:

      sudo systemctl start worker
      

      or

      sudo systemctl start scheduler
      
    • Otherwise:

      /usr/bin/worker_service
      

      or

      /usr/bin/scheduler_service
      
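If you use systemd and also want the services to come back up after a reboot, you can additionally enable the units (standard systemd usage, with the same unit names as above):

    sudo systemctl enable worker

or

    sudo systemctl enable scheduler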

6. Verify the cluster

Note: as of 2020-10-01, the service does not have a status page yet. You need to connect to an instance using SSH.

  1. SSH to a worker instance

  2. Look at the service output

    $ journalctl --unit worker
    

    Scroll down using the arrow keys; jump to the bottom with Shift + G.

    Somewhere in the log you should see that a cluster has formed:

    Aug 12 15:33:33 ip-10-0-1-29 scheduler_service[790]: Members {size:5, ver:5} [
    Aug 12 15:33:33 ip-10-0-1-29 scheduler_service[790]:         Member [10.0.1.29]:10081 - 381a734e-3213-4054-9aa5-e32e159f78e3 this
    Aug 12 15:33:33 ip-10-0-1-29 scheduler_service[790]:         Member [10.0.1.117]:10081 - 5e178daf-fca1-4709-a408-0261d7c8133e
    Aug 12 15:33:33 ip-10-0-1-29 scheduler_service[790]:         Member [10.0.1.200]:10081 - 9914770f-d255-424b-9438-fa3d70b2b67d
    Aug 12 15:33:33 ip-10-0-1-29 scheduler_service[790]:         Member [10.0.1.173]:10081 - 8affa7e5-b4b9-4118-8e88-9591136b407a
    Aug 12 15:33:33 ip-10-0-1-29 scheduler_service[790]:         Member [10.0.1.239]:10081 - 5e9211fb-438e-4987-8c72-a0d430299adf
    Aug 12 15:33:33 ip-10-0-1-29 scheduler_service[790]: ]
    

    On schedulers you should see two clusters: the same one as above, and another one containing only schedulers.
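
    To confirm this on a scheduler machine, the same approach applies, just with the scheduler unit:

    $ journalctl --unit scheduler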

  3. Optional: Ensure you can pull Docker images

    Skip this step if you don’t plan to run actions in a Docker container.

    On a worker machine, run:

    docker run hello-world
    
    docker pull docker.io/library/debian
    

7. Run an example build

Follow the instructions in ./example/README.md

Note: the first build can take a while, as Bazel first downloads the Docker image locally and the cluster software then downloads it on each worker. You will not see a performance improvement for builds of the example project; it is too small to benefit from the remote execution cluster.
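
For reference, a Bazel build typically targets the cluster by pointing the standard --remote_executor flag at a scheduler endpoint; the exact address, port, and any additional flags for this cluster are given in ./example/README.md and the Client configuration article (the values below are placeholders):

    bazel build //... --remote_executor=grpc://<scheduler-address>:<port>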

8. Configure the client

See Client configuration.
