
Biττensor: Compute Subnet Miner Setup

How to Mine on the Bittensor Compute Subnet (SN27) - The Permission-less Compute Market

1. INTRODUCTION

Bittensor’s Layer-0: Subnet 27 is a permissionless compute market for platform-composable cloud services, integrating various cloud platforms into a cohesive unit. Its purpose is to enable higher-level cloud platforms to offer seamless compute composability across different underlying platforms. Given the proliferation of cloud computing, there is a growing need for a subnet capable of seamlessly integrating these platforms, thereby allowing efficient resource sharing and allocation. This compute-composable subnet enables nodes to contribute computational power, with validators ensuring the integrity and efficiency of the shared resources, thus empowering the entire Bittensor ecosystem and cloud computing as a whole.

For miners interested in joining this innovative network, Subnet 27 offers the opportunity to contribute computing resources and earn $TAO in return. This guide is structured to provide a comprehensive breakdown of how you can get started with contributing to Bittensor’s commodity markets using your compute power.

Decentralizing Compute

With Subnet 27, we decentralize compute and the people’s right to access compute. Ten-figure VC-funded AI companies are swallowing up the GPU supply while advocating for computing regulations. It looks like we must stand for our right to compute. Subnet 27 decentralizes computing resources by combining siloed pools of compute on a blockchain to be validated and accessed trustlessly. This opens a door to scalable compute without the constraints of centralized power.

Compute is a fundamental necessity for all operations; welcome to Bittensor's Layer-0. An incentivized and permissionless compute market is priceless. Your contribution means less power to the AI oligopoly and more to the collective. Join us as we democratize compute and AGI: bring your own GPUs.

Subnet 27 is live; come and take it!


Miner Overview:

Miners contribute processing resources, notably GPU (Graphics Processing Unit) and CPU (Central Processing Unit) instances, to facilitate optimal performance in essential GPU and CPU-based computing tasks.

Performance-Based Mining: The system operates on a performance-based reward mechanism, where miners are incentivized through a dynamic reward structure correlated to the processing capability of their hardware. High-performance devices are eligible for increased compensation, reflecting their greater contribution to the network's computational throughput. Emphasizing the integration of GPU instances is critical due to their superior computational power, particularly in machine learning tasks.

Consequently, miners utilizing GPU instances are positioned to receive substantially higher rewards compared to their CPU counterparts, in alignment with the greater processing power and efficiency GPUs bring to the network.

Powered By Bittensor

Subnet 27 brings an entirely new resource to Bittensor: compute, arguably the most important and finite resource needed for the creation of machine intelligence. All network participants will have access to an ever-expanding pool of compute for all development needs.

Governments and regulatory bodies are in the process of regulating GPUs for AI. These political moves coupled with a shortage of GPUs in the market hinder the collective when it comes to AI/ML development and access. Big tech and those with deep pockets are the only ones that can participate in this transformative technology.

Subnet 27 changes this. With a decentralized compute subnet plugged into Bittensor, we become ungovernable. At the end of the day, what is a decentralized supercomputer without access to permissionless compute?


Compute Subnet Mining Video Tutorial:

Compute Subnet Github: https://github.com/neuralinternet/compute-subnet

Compute Subnet Discord Channel: https://discord.gg/t7BMee4w

Real-Time OpenCompute Dashboard: https://opencompute.streamlit.app/

We greatly appreciate and encourage contributions from the community to help improve and advance the development of the Compute Subnet. We have an active bounty program in place to incentivize and reward valuable contributions.

If you are interested in contributing to the Compute Subnet, please review our Reward Program for Valuable Contributions document on GitHub. This document outlines the details of the bounty program, including the types of contributions eligible for rewards and the reward structure.

Reward Program for Valuable Contributions: https://github.com/neuralinternet/compute-subnet/blob/main/CONTRIBUTING.md

CLI Guide For Reserving Compute Subnet Resources: https://same-cornet-d6b.notion.site/Bi-ensor-Utilization-of-Compute-Resources-in-Subnet-27-07d40d898725436db2fe78ac4cd95242?pvs=74

Akash Tutorial Coming Soon...

Cloud Providers:

We do not support Docker-based cloud platforms such as Runpod, Vast.ai, and Lambda.

Here are some GPU providers. Choose a provider or use your own hardware:

Examples of GPUs to rent (listed in order of computing power)

  • NVIDIA H100

  • NVIDIA A100s

  • NVIDIA A6000s

  • NVIDIA 3090s/4090s

2. INSTALLATION


This installation process requires Ubuntu 22.04 and Python 3.8 or higher. You are limited to one external IP per UID. There is automatic blacklisting in place if validators detect anomalous behavior.

Optionally, you can use our installer script, which installs Bittensor and its dependencies as well as the compute subnet repo and its dependencies. If you choose not to use the installation script, move on to the first step, which is installing Docker.

One-liner installation script: https://github.com/neuralinternet/compute-subnet/tree/main/Installation%20Script
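
A minimal sketch of how you might use it, assuming you review the script before running it (the actual filename lives in the linked directory; INSTALLER_SCRIPT_NAME.sh below is only a placeholder):

git clone https://github.com/neuralinternet/compute-subnet.git
cd "compute-subnet/Installation Script"
ls                                  # shows the installer script shipped in this directory
bash ./INSTALLER_SCRIPT_NAME.sh     # replace the placeholder with the actual filename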

Install Docker

To run a miner, you must install Docker and run the service. If Docker is already installed on your machine, scroll down to step 2.1

Install Link: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
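
For convenience, the repository-based installation from the linked Docker documentation condenses to roughly the following (check the linked page in case the steps have changed since this was written):

sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin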

Verify that the Docker Engine installation is successful by running the hello-world image.

This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.

sudo docker run hello-world

It is best practice to default to using a local subtensor.

Subtensor Setup Github: https://github.com/opentensor/subtensor/blob/main/docs/running-subtensor-locally.md

git clone https://github.com/opentensor/subtensor.git
cd subtensor
# to run a lite node on the mainnet:
sudo ./scripts/run/subtensor.sh -e docker --network mainnet --node-type lite

Once complete, cd out of the directory.

2.1 BEGIN BY INSTALLING BITTENSOR:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/opentensor/bittensor/master/scripts/install.sh)"

See Bittensor’s documentation for alternative installation instructions.

Bittensor Documentation: docs.bittensor.com

2.2 VERIFY THE INSTALLATION:

Verify using the btcli command

btcli --help

which will give you the below output:


usage: btcli <command> <command args>

bittensor cli v6.1.0

positional arguments:
  {subnets,s,subnet,root,r,roots,wallet,w,wallets,stake,st,stakes,sudo,su,sudos,legacy,l}
    subnets (s, subnet)
                        Commands for managing and viewing subnetworks.
    root (r, roots)     Commands for managing and viewing the root network.
    wallet (w, wallets)
                        Commands for managing and viewing wallets.
    stake (st, stakes)  Commands for staking and removing stake from hotkey accounts.
    sudo (su, sudos)    Commands for subnet management
    legacy (l)          Miscellaneous commands.

options:
  -h, --help            show this help message and exit
  --config CONFIG       If set, defaults are overridden by passed file.
  --strict              If flagged, config will check that only exact arguments have been set.
  --no_version_checking
                        Set true to stop cli version checking.
  --no_prompt           Set true to stop cli from prompting the user.

Create a Cold & Hotkey with the commands below:

btcli w new_coldkey
btcli w new_hotkey

If you already have a key, you can regenerate it ‘safely’ on a machine using btcli w regen_coldkeypub; this loads the public key without exposing your mnemonic to the server. However, you must regenerate the full coldkey if you plan to register or transfer from that wallet. If you want to, you can generate a key pair on a safe local machine to use as cold storage for the funds that you send.

btcli w regen_coldkeypub
btcli w regen_coldkey
btcli w regen_hotkey

4. CLONE COMPUTE-SUBNET


git clone https://github.com/neuralinternet/Compute-Subnet.git

Access the Compute-Subnet Directory

cd Compute-Subnet

5. COMPUTE SUBNET DEPENDENCIES


For optimal functionality of the Compute Subnet, it's essential to install the appropriate graphics drivers and dependencies.

Required dependencies for validators and miners:

cd Compute-Subnet
python3 -m pip install -r requirements.txt
python3 -m pip install --no-deps -r requirements-compute.txt
python3 -m pip install -e .

5.1 EXTRA DEPENDENCIES FOR MINERS:

In case you are missing requirements:

sudo apt -y install ocl-icd-libopencl1 pocl-opencl-icd

Install Hashcat

Recommended hashcat version >= v6.2.5

wget https://hashcat.net/files/hashcat-6.2.6.tar.gz
tar xzvf hashcat-6.2.6.tar.gz
cd hashcat-6.2.6/
sudo make
sudo make install
export PATH=$PATH:/usr/local/bin/
echo "export PATH=$PATH">>~/.bashrc
hashcat --version

Version should output v6.2.6

cd out of the directory

Download the NVIDIA CUDA Toolkit

If the NVIDIA toolkit and drivers are already installed on your machine, scroll down to verify the installation, then move on to the next steps.

wget https://developer.download.nvidia.com/compute/cuda/12.3.1/local_installers/cuda-repo-ubuntu2204-12-3-local_12.3.1-545.23.08-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-12-3-local_12.3.1-545.23.08-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-12-3-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-3
sudo apt-get -y install cuda-drivers
export CUDA_VERSION=cuda-12.3
export PATH=$PATH:/usr/local/$CUDA_VERSION/bin
export LD_LIBRARY_PATH=/usr/local/$CUDA_VERSION/lib64
echo "">>~/.bashrc
echo "PATH=$PATH">>~/.bashrc
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH">>~/.bashrc

You may need to reboot the machine at this point

sudo reboot

The simplest way to check the installed CUDA version is by using the NVIDIA CUDA Compiler (nvcc).

nvidia-smi
nvcc --version

The output of which should look something like

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.06              Driver Version: 545.29.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX                     Off | 00000000:05:00.0 Off |                  Off |
| 30%   34C    P0              70W / 300W |    400MiB / 49140MiB |      4%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Nov__3_17:16:49_PDT_2023
Cuda compilation tools, release 12.3, V12.3.103
Build cuda_12.3.r12.3/compiler.33492891_0

Wandb Setup

To log into the wandb project named opencompute from neuralinternet, miners and validators need a wandb API key. This is necessary for your miner to be properly scored. You can obtain a free API key by making an account here: https://wandb.ai/

Inside the Compute-Subnet directory, rename the .env.example file to .env and replace the placeholder with your actual API key.
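
A minimal sketch of that step (the variable name comes from the repo's .env.example; WANDB_API_KEY below is an assumption, so match whatever your copy of the file actually uses):

cd Compute-Subnet
cp .env.example .env    # copy (or rename) the template
nano .env               # set your key, e.g. WANDB_API_KEY="your-wandb-api-key"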

You can now track your mining and validation statistics on Wandb. For access, visit: https://wandb.ai/neuralinternet/opencompute. To view the network's overall statistics, check out our real-time dashboard here: https://opencompute.streamlit.app/

If you encounter 429 filestream errors, don't be alarmed. As long as your hotkey and machine have uploaded a run to the opencompute wandb, you are fine. If you would like to circumvent the filestream limit, you can create more wandb accounts to obtain more API keys. This is not necessary, as the 429 error does not affect your scoring.

PM2 Installation

Install pm2 and use it to keep your miner online at all times.

sudo apt update
sudo apt install npm
sudo npm install pm2 -g

Confirm pm2 is installed and running correctly

pm2 ls

5.2 INSTALL NVIDIA DOCKER SUPPORT

Add the NVIDIA Docker repository

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt update
sudo apt-get install -y nvidia-container-toolkit
sudo apt install -y nvidia-docker2

For more information, refer to the NVIDIA Container Toolkit installation guide: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installing-with-apt

6. START THE DOCKER SERVICE IN COMPUTE SUBNET


cd Compute-Subnet
sudo groupadd docker
sudo usermod -aG docker $USER
sudo systemctl start docker
sudo apt install at

Make sure to check that docker is properly installed and running correctly:

sudo service docker status

This is an example of it running correctly:

root@merciful-bored-zephyr-fin-01:~# sudo service docker status
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset>
     Active: active (running)
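
With the Docker service running and the NVIDIA container toolkit from section 5.2 installed, you can optionally confirm that containers can see your GPU. The CUDA image tag below is only an example; any tag compatible with your driver works:

sudo docker run --rm --gpus all nvidia/cuda:12.3.1-base-ubuntu22.04 nvidia-smi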

7.0 SETTING UP A MINER


Hotkey Registration

At this point, you will need some $TAO in your coldkey address for miner registration. Once your coldkey is funded, run the command below to register your hotkey:

btcli s register --subtensor.network finney --netuid 27

If you get the error ‘too many registrations this interval’, it means the maximum number of registrations for that cycle has been reached, and you need to wait a bit and try again. You can check the registration cost here.
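
If you prefer to pass your wallet explicitly instead of being prompted, the same registration command accepts the standard wallet flags (replace the placeholder names with your own):

btcli s register --subtensor.network finney --netuid 27 --wallet.name COLDKEYNAME --wallet.hotkey HOTKEYNAME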

7.1 SETTING UP UFW FOR MINER:

TCP Ports: Open a range of ports using ufw (put any numbers of your choice in place of xxxx and yyyy) and use one of them as your axon port. Also open port 4444 on your system, which is required for allocation and for the maximum score to apply:

sudo apt update
sudo apt install ufw
sudo ufw allow 4444
sudo ufw allow xxxx:yyyy/tcp
sudo ufw allow 22/tcp
sudo ufw enable
sudo ufw status

7.2 RUNNING THE MINER:

Now, using pm2, run miner as:

pm2 start ./neurons/miner.py --name MINER --interpreter python3 -- --netuid 27 --subtensor.network finney --wallet.name COLDKEYNAME --wallet.hotkey HOTKEYNAME --axon.port XXXX --axon.ip xx.xxx.xxx.xx --logging.debug --miner.blacklist.force_validator_permit --auto_update yes
pm2 start ./neurons/miner.py --name MINER --interpreter python3 -- --netuid 27 --subtensor.network local --wallet.name COLDKEYNAME --wallet.hotkey HOTKEYNAME --axon.port xxxx --logging.debug --miner.blacklist.force_validator_permit --auto_update yes

To set up your miner, first replace COLDKEYNAME & HOTKEYNAME with the names of your keys. Then, update --axon.port with the 4-digit number you've selected for xxxx above. For the parameters --axon.external and --axon.ip, use your miner machine's public IP address in place of the x placeholders. You can find this IP by running hostname -I. Though not always necessary, these parameters can be crucial for resolving certain connectivity issues.
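
Note that if your machine sits behind NAT, hostname -I may only return a private address. In that case an external lookup (a third-party service, shown here purely as an example) reveals the public IPv4 to use:

hostname -I             # lists the machine's IP addresses
curl -4 ifconfig.me     # queries an external service for your public IPv4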

When operating a miner with a local subtensor running on a separate machine, it's crucial to add and adjust the --subtensor.chain_endpoint parameter. This should be set to the IP and port (XXX.XX.XXX.XXX:XXXX) where your subtensor is running, as in the sketch below. If your subtensor runs on the miner machine itself, this parameter can be removed.
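
For example, if your local subtensor runs on another machine at 192.0.2.10 (a placeholder address) listening on the default websocket port 9944, the local-network miner command from above would look like this (adjust key names, ports, and the endpoint to your setup):

pm2 start ./neurons/miner.py --name MINER --interpreter python3 -- --netuid 27 --subtensor.network local --subtensor.chain_endpoint ws://192.0.2.10:9944 --wallet.name COLDKEYNAME --wallet.hotkey HOTKEYNAME --axon.port xxxx --logging.debug --miner.blacklist.force_validator_permit --auto_update yes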

8. CHECKING MINER LOGS


After launching the compute miner, you can then check the logs using the two commands below:

pm2 logs
pm2 monit

Run pm2 logs and wait to see incoming HTTP traffic. Ensure you are receiving challenges and then finding them.

Happy mining ❤️ Don't forget to update 😄

9. RUNNING A VALIDATOR


Validators hold the critical responsibility of rigorously assessing and verifying the computational capabilities of miners. This multifaceted evaluation process commences with validators requesting miners to provide comprehensive performance data, which includes not only processing speeds and efficiencies but also critical metrics like Random Access Memory (RAM) capacity and disk space availability.

Computational Integrity: Following the receipt of this detailed hardware and performance information, validators proceed to test the miners' computational integrity. This is achieved by presenting them with complex benchmarking challenges, designed to evaluate the processing power and reliability of the miners' systems. Validators adjust the difficulty of these problems based on the comprehensive performance profile of each miner.

In addition to measuring the time taken by miners to resolve these problems, validators meticulously verify the accuracy of the responses. This thorough examination of both speed and precision forms the crux of the evaluation process.

Dynamic Scoring Mechanism: Validators update the miners' scores, reflecting a holistic view of their computational capacity, efficiency, and hardware quality. This score then determines the miner's weight within the network, directly influencing their potential rewards and standing. This scoring process, implemented through a Python script, considers various factors including CPU, GPU, hard disk, and RAM performance. The script's structure and logic are outlined below:

Understanding the Score Calculation Process

The scoring system has been updated. If you want to review the old hardware-based mechanism, see: Hardware scoring

The score calculation function determines a miner's performance based on various factors:

Successful Problem Resolution: It first checks if the problem was solved successfully. If not, the score remains at zero.

Problem Difficulty: This measures the complexity of the solved task. The code restricts this difficulty to a maximum allowed value.

Weighting Difficulty and Elapsed Time: The function assigns a weight to both the difficulty of the solved problem (75%) and the time taken to solve it (25%).

Exponential Rewards for Difficulty: Higher problem difficulty leads to more significant rewards. An exponential formula is applied to increase rewards based on difficulty.

Allocation Bonus: Miners whose machines are currently allocated receive an additional bonus added to their final score.

Effect of Elapsed Time: The time taken to solve the problem impacts the score. A shorter time results in a higher score.

  • Max Score = 1e5

  • Score = Lowest Difficulty + (Difficulty Weight * Problem Difficulty) + (Elapsed Time * 1 / (1 + Elapsed Time) * 10000) + Allocation Bonus

  • Normalized Score = (Score / Max Score) * 100

Example 1: Miner A's Hardware Scores and Weighted Total

  • Successful Problem Resolution: True

  • Elapsed Time: 4 seconds

  • Problem Difficulty: 6

  • Allocation: True

Score = 8.2865

Example 2: Miner B's Hardware Scores and Weighted Total

  • Successful Problem Resolution: True

  • Elapsed Time: 16 seconds

  • Problem Difficulty: 8

  • Allocation: True

Score = 24.835058823529412

# To run the validator
cd neurons
python3 validator.py \
    --netuid <your netuid> \
    --subtensor.network <your chain url> \
    --wallet.name <your validator wallet> \
    --wallet.hotkey <your validator hotkey> \
    --logging.debug
# --netuid              the subnet uid to connect to (27 for mainnet)
# --subtensor.network   the blockchain endpoint to connect to
# --wallet.name         name of your validator wallet
# --wallet.hotkey       hotkey name of your validator wallet
# --logging.debug       run in debug mode; alternatively use --logging.trace for trace mode
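
If you would rather keep the validator online under pm2, the same way the miner is run in section 7.2, a sketch of the equivalent command (run from the Compute-Subnet root, placeholders as before) is:

pm2 start ./neurons/validator.py --name VALIDATOR --interpreter python3 -- --netuid 27 --subtensor.network finney --wallet.name COLDKEYNAME --wallet.hotkey HOTKEYNAME --logging.debug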

Resource Allocation Mechanism

The allocation mechanism within subnet 27 is designed to optimize the utilization of computational resources effectively. Key aspects of this mechanism include:

  1. Resource Requirement Analysis: The mechanism begins by analyzing the specific resource requirements of each task, including CPU, GPU, memory, and storage needs.

  2. Miner Selection: Based on the analysis, the mechanism selects suitable miners that meet the resource requirements. This selection process considers the current availability, performance history, and network weights of the miners.

  3. Dynamic Allocation: The allocation of tasks to miners is dynamic, allowing for real-time adjustments based on changing network conditions and miner performance.

  4. Efficiency Optimization: The mechanism aims to maximize network efficiency by matching the most suitable miners to each task, ensuring optimal use of the network's computational power.

  5. Load Balancing: It also incorporates load balancing strategies to prevent overburdening individual miners, thereby maintaining a healthy and sustainable network ecosystem.

Through these functionalities, the allocation mechanism ensures that computational resources are utilized efficiently and effectively, contributing to the overall robustness and performance of the network.

Validators can send requests to reserve access to miners' resources by specifying the specs manually in register.py and running that script: https://github.com/neuralinternet/Compute-Subnet/blob/main/neurons/register.py. For example: {'cpu':{'count':1}, 'gpu':{'count':1}, 'hard_disk':{'capacity':10737418240}, 'ram':{'capacity':1073741824}}
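
A sketch of that flow, run from the Compute-Subnet root (add whatever wallet and subtensor flags your setup requires; the spec dictionary is the example from above):

cd Compute-Subnet
# edit neurons/register.py and set the resources to reserve, e.g.:
# {'cpu':{'count':1}, 'gpu':{'count':1}, 'hard_disk':{'capacity':10737418240}, 'ram':{'capacity':1073741824}}
python3 neurons/register.py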

Options


All list arguments now use a comma separator.

  • --netuid: (Optional) The chain subnet uid. Default: 27.

  • --auto_update: (Optional) Auto update the repository. Default: True.

  • --blacklist.exploiters: (Optional) Automatically use the list of internal exploiters hotkeys. Default: True.

  • --blacklist.hotkeys <hotkey_0,hotkey_1,...>: (Optional) List of hotkeys to blacklist. Default: [].

  • --blacklist.coldkeys <coldkey_0,coldkey_1,...>: (Optional) List of coldkeys to blacklist. Default: [].

  • --whitelist.hotkeys <hotkey_0,hotkey_1,...>: (Optional) List of hotkeys to whitelist. Default: [].

  • --whitelist.coldkeys <coldkey_0,coldkey_1,...>: (Optional) List of coldkeys to whitelist. Default: [].

Validator options


Flags that you can use with the validator script.

  • --validator.whitelist.unrecognized: (Optional) Whitelist unrecognized miners. Default: False.

  • --validator.perform.hardware.query: (Optional) Perform the old perfInfo method - useful only as a personal benchmark; it doesn't affect the score. Default: False.

  • --validator.challenge.batch.size <size>: (Optional) Batch size used to perform the challenge queries - for lower hardware specifications you may want to use a different batch_size than the default. Keep in mind that the lower the batch_size, the longer it will take to perform all challenge queries. Default: 64.

  • --validator.force.update.prometheus: (Optional) Force the try-update of the prometheus version. Default: False.

Miner options


  • --miner.hashcat.path <path>: (Optional) The path of the hashcat binary. Default: hashcat.

  • --miner.hashcat.workload.profile <profile>: (Optional) Performance profile to apply with hashcat: 1 Low, 2 Economic, 3 High, 4 Insane. Run hashcat -h for more information. Default: 3.

  • --miner.hashcat.extended.options <options>: (Optional) Any extra options you find useful to append to the hashcat runner (-O is often worth trying). Run hashcat -h for more information. Default: ''. See the example after this list.

  • --miner.whitelist.not.enough.stake: (Optional) Whitelist validators without enough stake. Default: False.
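
For example, to launch the miner with the Insane hashcat workload profile and the extra -O option mentioned above, append the flags to the pm2 command from section 7.2 (placeholders as before; the values are illustrative, not recommendations):

pm2 start ./neurons/miner.py --name MINER --interpreter python3 -- --netuid 27 --subtensor.network finney --wallet.name COLDKEYNAME --wallet.hotkey HOTKEYNAME --axon.port XXXX --miner.hashcat.workload.profile 4 --miner.hashcat.extended.options "-O"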

10. BENCHMARKING THE MACHINE


hashcat -b -m 610

Output

Speed.#1.........: 12576.1 MH/s (75.69ms) @ Accel:8 Loops:1024 Thr:1024 Vec:1
Speed.#2.........: 12576.1 MH/s (75.69ms) @ Accel:8 Loops:1024 Thr:1024 Vec:1
...
...

The recommended minimum hashrate for the current difficulty is >= 4500 MH/s.

Difficulty will increase over time.

11. MORE USEFUL COMMANDS

btcli s metagraph --netuid 27
btcli s list
btcli wallet overview --subtensor.network finney --all --netuid 27
pm2 logs miner --lines 1000 | grep -i "Challenge.*found"
pm2 logs -f | grep -E "SUCCESS|INFO|DEBUG|ERROR"
nvidia-smi --query-gpu=name,memory.total,clocks.gr,clocks.mem --format=csv
grep "Challenge .* found in" "/home/ubuntu/.pm2/logs/MINER-out.log" | sed -E 's/.* found in ([0-9.]+) seconds.*/\\1/' | awk '{sum+=$1; count+=1} END {if (count > 0) print sum/count; else print "No data to calculate average"}’
btcli w transfer --subtensor.network local --dest DESTINATION_WALLET --wallet.name default --amount 0
btcli stake remove --subtensor.network local --all --all_hotkeys --wallet.name default
