If you are new to the world of DVT and/or Obol, take a look at these key concepts to get started: https://docs.obol.tech/docs/int/key-concepts

Running MEV Plus alongside a DV cluster is similar to running it with a normal Ethereum node. If you haven’t already tried running MEV Plus, this tutorial should help you get set up for native delegation.

Once the node and the MEV Plus service are running, it boils down to correctly configuring the DV cluster.

This tutorial goes through the step-by-step process of installing and running a DV cluster using Obol, with the ultimate aim of having MEV Plus recognise the BLS keys run by the DVT nodes.

For the sake of simplicity, all the DV clients will run on a single Linux EC2 instance, using the Charon distributed validator cluster repo here. See this guide.

Step 1: Getting started with Obol’s distributed validator software

There are two ways to start with Obol; this tutorial uses the Obol DV Launchpad. Once the “Create a distributed validator alone” flow from the launchpad is completed, you should have a cluster/ subdirectory.

To install Charon (Obol’s middleware client for managing the DVs) and create private key shares for each of the DV nodes, follow this guide.

Once all the steps have been completed, only a few small configuration changes remain.

Step 2: Create docker override file

If you have a pre-existing, synced EL and CL setup, follow this step; otherwise, skip to Step 3.

As a reference you can take a look at this guide: https://docs.obol.tech/docs/int/quickstart/quickstart-mainnet#using-a-remote-mainnet-beacon-node

Create a file named docker-compose.override.yml and paste in the following:

services:
  geth:
    # Disable geth
    profiles: [disable]

  lighthouse:
    # Disable lighthouse
    profiles: [disable]

This override file disables the geth and lighthouse services defined in the Docker Compose file, so the already running execution and beacon clients will be used instead.
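As an optional sanity check (a convenience sketch, not part of the official guide), you can write the override file from the shell and confirm that both services carry the disable profile:

```shell
# Write docker-compose.override.yml (same content as above)
cat > docker-compose.override.yml <<'EOF'
services:
  geth:
    # Disable geth
    profiles: [disable]

  lighthouse:
    # Disable lighthouse
    profiles: [disable]
EOF

# Count the services assigned the "disable" profile; both should be listed
grep -c 'profiles: \[disable\]' docker-compose.override.yml
```

When you later run docker compose up, services assigned only to the disable profile are not started unless that profile is explicitly activated, which is exactly the behaviour we want here.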

Step 3: Preparing the environment and the .env file

Take a look at the charon-distributed-validator-cluster repo.

Importantly, it is recommended that you copy the sample env file found here: https://github.com/ObolNetwork/charon-distributed-validator-node/blob/main/.env.sample

There you can uncomment the variables you need and leave the rest commented out.

For example, here is a minimal .env file:

NETWORK=
BUILDER_API_ENABLED=true
CHARON_BEACON_NODE_ENDPOINTS=<LINK_TO_EXTERNAL_BEACON_NODE>
CHARON_PORT_P2P_TCP=3801
MONITORING_PORT_GRAFANA=3701

The above .env file defines the parameters needed by the docker compose file:

  • NETWORK: The network on which the validator keys will be active.
  • BUILDER_API_ENABLED: Enables the builder API, which is required for MEV Plus to work with the cluster.
  • CHARON_BEACON_NODE_ENDPOINTS: The endpoint for the beacon client. For this tutorial, this points at the already running beacon client. Since localhost:5052 must be reached from inside Docker, replace localhost with the docker0 bridge IP address, so the endpoint becomes http://172.17.0.1:5052. To connect to an external beacon node instead, provide its consensus layer REST API URL.
  • CHARON_PORT_P2P_TCP: TCP port for node0. The ports can be the same for all the DV clients if they run on separate devices; since in this tutorial all the nodes run on the same device, each DV client needs a different port.
  • MONITORING_PORT_GRAFANA: Grafana port; must also differ for nodes running on the same device.
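Because all the nodes in this tutorial share one machine, each DV client needs its own .env with unique ports. As an illustrative sketch (the node1/ directory name and the +1 port offsets are assumptions for this example, not prescribed by the Obol guide), a second node’s file might look like:

```shell
# Hypothetical second node: give it its own directory and .env with unique ports.
mkdir -p node1
cat > node1/.env <<'EOF'
NETWORK=
BUILDER_API_ENABLED=true
CHARON_BEACON_NODE_ENDPOINTS=<LINK_TO_EXTERNAL_BEACON_NODE>
# Ports bumped by one so they do not clash with node0 (3801/3701)
CHARON_PORT_P2P_TCP=3802
MONITORING_PORT_GRAFANA=3702
EOF
```

Repeat the same pattern for each additional node on the machine, incrementing both port numbers each time.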

Step 4: Starting the nodes

To start the nodes, simply run docker compose up. To manage Docker as a non-root user, you may need to add your user to the docker group; follow this Docker documentation to do so.
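Before starting the nodes, you can check whether your user can already talk to the Docker daemon without sudo; the fix-up command is the standard one from Docker’s post-install documentation (left as a comment here so the check itself has no side effects):

```shell
# Membership in the "docker" group (or running as root) is required
# to use Docker without sudo.
if id -nG | grep -qw docker || [ "$(id -u)" = "0" ]; then
  echo "docker-ready"
else
  echo "not in docker group"
  # From Docker's post-installation guide:
  #   sudo usermod -aG docker $USER
  #   newgrp docker   # or log out and back in for the change to take effect
fi
```

Once the permissions are sorted, docker compose up (or docker compose up -d to run detached) will bring the cluster up.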