Biττensor: Compute Subnet Miner Setup
How to Mine on the Bittensor Compute Subnet (SN27) - The Permission-less Compute Market
1. INTRODUCTION
Bittensor’s Layer-0: Subnet 27 is a permissionless compute market for platform-composable cloud services, integrating various cloud platforms into a cohesive unit. Its purpose is to enable higher-level cloud platforms to offer seamless compute composability across different underlying platforms. Given the proliferation of cloud computing, there is a growing need for a subnet capable of seamlessly integrating these platforms, thereby allowing efficient resource sharing and allocation. This compute-composable subnet enables nodes to contribute computational power, with validators ensuring the integrity and efficiency of the shared resources, thus empowering the entire Bittensor ecosystem and cloud computing as a whole.
For miners interested in joining this innovative network, Subnet 27 offers the opportunity to contribute computing resources and earn $TAO in return. This guide is structured to provide a comprehensive breakdown of how you can get started with contributing to Bittensor’s commodity markets using your compute power.
Decentralizing Compute
With Subnet 27, we decentralize compute and the people’s right to access it. Ten-figure, VC-funded AI companies are swallowing up the GPU supply while advocating for computing regulations. We must stand up for our right to compute. Subnet 27 decentralizes computing resources by combining siloed pools of compute on a blockchain so they can be validated and accessed trustlessly. This opens the door to scalable compute without the constraints of centralized power.
Compute is a fundamental necessity for all operations; welcome to Bittensor's Layer-0. An incentivized, permissionless compute market is priceless. Your contribution means less power to the AI oligopoly and more to the collective. Join us as we democratize compute and AGI: bring your own GPUs.
Subnet 27 is live; come and take it!
Miner Overview:
Miners contribute processing resources, notably GPU (Graphics Processing Unit) and CPU (Central Processing Unit) instances, to facilitate optimal performance in essential GPU and CPU-based computing tasks.
Performance-Based Mining: The system operates on a performance-based reward mechanism, where miners are incentivized through a dynamic reward structure correlated to the processing capability of their hardware. High-performance devices are eligible for increased compensation, reflecting their greater contribution to the network's computational throughput. Emphasizing the integration of GPU instances is critical due to their superior computational power, particularly for machine learning tasks.
Consequently, miners utilizing GPU instances are positioned to receive substantially higher rewards compared to their CPU counterparts, in alignment with the greater processing power and efficiency GPUs bring to the network.
Powered By Bittensor
Subnet 27 brings an entirely new resource to Bittensor: compute, arguably the most important and most finite resource needed for the creation of machine intelligence. All network participants will have access to an ever-expanding pool of compute for all development needs.
Governments and regulatory bodies are in the process of regulating GPUs for AI. These political moves, coupled with a shortage of GPUs in the market, hinder the collective when it comes to AI/ML development and access. Big tech and those with deep pockets become the only ones who can participate in this transformative technology.
Subnet 27 changes this. With a decentralized compute subnet plugged into Bittensor, we become ungovernable. At the end of the day, what is a decentralized supercomputer without access to permissionless compute?
Compute Subnet Mining Video Tutorial:
Compute Subnet Github: https://github.com/neuralinternet/compute-subnet
Compute Subnet Discord Channel: https://discord.gg/t7BMee4w
Real-Time OpenCompute Dashboard: https://opencompute.streamlit.app/
We greatly appreciate and encourage contributions from the community to help improve and advance the development of the Compute Subnet. We have an active bounty program in place to incentivize and reward valuable contributions.
If you are interested in contributing to the Compute Subnet, please review our Reward Program for Valuable Contributions document on GitHub. This document outlines the details of the bounty program, including the types of contributions eligible for rewards and the reward structure.
Reward Program for Valuable Contributions: https://github.com/neuralinternet/compute-subnet/blob/main/CONTRIBUTING.md
CLI Guide For Reserving Compute Subnet Resources: https://same-cornet-d6b.notion.site/Bi-ensor-Utilization-of-Compute-Resources-in-Subnet-27-07d40d898725436db2fe78ac4cd95242?pvs=74
Akash Tutorial Coming Soon...
Cloud Providers:
We do not support Docker-based cloud platforms such as RunPod, Vast.ai, and Lambda.
Here are some GPU providers. Choose a provider or use your own hardware:
Latitude.sh (referral code: BITTENSOR27)
Oblivus (referral code: BITTENSOR27 - 2% cash back in platform expenditures)
Examples of GPUs to rent (listed in order of computing power)
NVIDIA H100
NVIDIA A100s
NVIDIA A6000s
NVIDIA 3090s/4090s
2. INSTALLATION
This installation process requires Ubuntu 22.04 and Python 3.8 or higher. You are limited to one external IP per UID. Automatic blacklisting is in place if validators detect anomalous behavior.
Optionally, you can use our installer script, which installs Bittensor and its dependencies as well as the compute-subnet repository and its dependencies. If you choose not to use the installation script, move on to the first step, which is installing Docker.
One liner installation script: https://github.com/neuralinternet/compute-subnet/tree/main/Installation%20Script
Install Docker
To run a miner, you must install Docker and run the service. If Docker is already installed on your machine, scroll down to step 2.1
Install Link: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
Verify that the Docker Engine installation is successful by running the hello-world image.
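A minimal sketch of the install and verification on Ubuntu 22.04, following the apt-repository method from the linked guide (check the guide for the current, authoritative commands):

```bash
# Add Docker's official apt repository, then install the engine
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Verify the Docker Engine installation with the hello-world image
sudo docker run hello-world
```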
This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.
It is best practice to default to using a local subtensor.
Subtensor Setup Github: https://github.com/opentensor/subtensor/blob/main/docs/running-subtensor-locally.md
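If you do run a local subtensor, one common approach (a sketch only; the script name and flags below are taken from the linked guide at the time of writing and should be verified there) is:

```bash
# Clone the subtensor repository and launch a mainnet lite node via Docker
git clone https://github.com/opentensor/subtensor.git
cd subtensor
sudo ./scripts/run/subtensor.sh -e docker --network mainnet --node-type lite
```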
cd out of the subtensor directory once complete.
2.1 BEGIN BY INSTALLING BITTENSOR:
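A minimal sketch, assuming a fresh Ubuntu 22.04 host with Python 3 available (pip install is the standard route; see the documentation linked below for alternatives):

```bash
# Install pip and the Bittensor package
sudo apt-get update
sudo apt-get install -y python3-pip
python3 -m pip install bittensor
```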
See Bittensor’s documentation for alternative installation instructions.
Bittensor Documentation: docs.bittensor.com
2.2 VERIFY THE INSTALLATION:
Verify the installation using the btcli command, which will give you the output below:
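For example (a sketch; the exact help text and version string will differ on your machine):

```bash
# Confirm the CLI is available and the Python package imports cleanly
btcli --help
python3 -c "import bittensor; print(bittensor.__version__)"
```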
Create a coldkey and hotkey with the commands below:
If you already have a key, you can regenerate it ‘safely’ on a machine using btcli w regen_coldkeypub. However, you must regenerate the full key if you plan to register or transfer from that wallet; regen_coldkeypub lets you load the key without exposing your mnemonic to the server. If you want, you can generate a key pair on a separate, secure local machine to use as cold storage for the funds that you send.
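A sketch of the key commands; the wallet and hotkey names are placeholders to replace with your own:

```bash
# Create a new coldkey and a hotkey under it
btcli wallet new_coldkey --wallet.name COLDKEYNAME
btcli wallet new_hotkey --wallet.name COLDKEYNAME --wallet.hotkey HOTKEYNAME

# Or, if the coldkey lives on another machine, load only its public part here
btcli wallet regen_coldkeypub --wallet.name COLDKEYNAME
```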
4. CLONE COMPUTE-SUBNET
Access the Compute-Subnet Directory
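For example:

```bash
# Clone the Compute Subnet repository and move into it
git clone https://github.com/neuralinternet/compute-subnet.git
cd compute-subnet
```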
5. COMPUTE SUBNET DEPENDENCIES
For optimal functionality of the Compute Subnet, it's essential to install the appropriate graphics drivers and dependencies.
Required dependencies for validators and miners:
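A sketch of the Python dependency install (run from inside the compute-subnet directory; the file and package layout are assumptions based on a standard Python repository, so check the repository README for the exact commands):

```bash
# Install the Python requirements and the package itself
python3 -m pip install -r requirements.txt
python3 -m pip install -e .
```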
5.1 EXTRA DEPENDENCIES FOR MINERS:
In case you have missing requirements, re-run the dependency installation above before continuing. Miners additionally need Hashcat and the NVIDIA CUDA toolkit, set up as follows.
Install Hashcat
Recommended hashcat version >= v6.2.5
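A sketch of a source install of hashcat v6.2.6 (the tarball URL follows hashcat's standard release layout; adjust the version if a newer release is recommended):

```bash
# Download, build, and install hashcat 6.2.6 from source
wget https://hashcat.net/files/hashcat-6.2.6.tar.gz
tar -xzvf hashcat-6.2.6.tar.gz
cd hashcat-6.2.6
make
sudo make install

# Check the installed version
hashcat --version
```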
The version check should output v6.2.6.
cd out of directory
Download the NVIDIA CUDA Toolkit
If the NVIDIA toolkit and drivers are already installed on your machine, scroll down to verify the CUDA version, then move on to the next step.
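A sketch of the toolkit install on Ubuntu 22.04 x86_64 using NVIDIA's network repository (keyring and package names follow NVIDIA's published instructions at the time of writing and may change; use NVIDIA's download selector for your exact platform):

```bash
# Add NVIDIA's CUDA apt repository keyring and install CUDA (toolkit plus driver)
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get install -y cuda
```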
You may need to reboot the machine at this point
The simplest way to check the installed CUDA version is by using the NVIDIA CUDA Compiler (nvcc). The output should look something like this:
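For example (the release number shown is illustrative; yours will match whatever toolkit you installed):

```bash
# nvcc may live under /usr/local/cuda/bin; add it to PATH if the command is not found
export PATH=/usr/local/cuda/bin:$PATH

nvcc --version
# nvcc: NVIDIA (R) Cuda compiler driver
# ...
# Cuda compilation tools, release 12.x
```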
Wandb Setup
To log into the wandb project named opencompute from neuralinternet, miners and validators need a wandb API key. This is necessary for your miner to be properly scored. You can obtain a free API key by making an account here: https://wandb.ai/
Inside the Compute-Subnet directory, rename the .env.example file to .env and replace the placeholder with your actual API key.
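A sketch, assuming the example file carries a single API-key placeholder:

```bash
# From inside the compute-subnet directory, rename the template and add your key
mv .env.example .env
nano .env   # replace the placeholder with the API key from your wandb account
```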
You can now track your mining and validation statistics on Wandb. For access, visit: https://wandb.ai/neuralinternet/opencompute. To view the network's overall statistics, check out our real-time dashboard here: https://opencompute.streamlit.app/
If you encounter 429 filestream errors, don't be alarmed. As long as your hotkey and machine have uploaded a run to the opencompute wandb, you are fine. If you would like to circumvent the filestream limit, you can create more wandb accounts to obtain more API keys, but this is not necessary, as the 429 error does not affect your scoring.
PM2 Installation
Install pm2 and use it to keep your miner online at all times.
Confirm pm2 is installed and running correctly
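A sketch of a typical install via npm (assumes the Ubuntu-packaged Node.js/npm is acceptable for your setup):

```bash
# Install npm, then install pm2 globally
sudo apt-get install -y npm
sudo npm install -g pm2

# Confirm pm2 responds; it should print an (initially empty) process table
pm2 ls
```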
5.2 INSTALL NVIDIA DOCKER SUPPORT
Add the NVIDIA Docker repository
For more information, refer to the NVIDIA Container Toolkit installation guide: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installing-with-apt
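A sketch following the linked guide (the repository-setup pipeline and the nvidia-ctk step are taken from NVIDIA's current instructions and may change; defer to the guide):

```bash
# Add the NVIDIA Container Toolkit apt repository
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit and wire it into Docker
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```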
6. START THE DOCKER SERVICE IN COMPUTE SUBNET
Make sure to check that docker is properly installed and running correctly:
This is an example of it running correctly:
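A sketch of the check; the status output below is trimmed, and the key line to look for is "active (running)":

```bash
# Make sure the Docker service is enabled and running
sudo systemctl enable --now docker
sudo systemctl status docker --no-pager
# ● docker.service - Docker Application Container Engine
#      Active: active (running) since ...

# The daemon should also answer without errors
sudo docker ps
```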
7.0 SETTING UP A MINER
Hotkey Registration
At this point, you will need some $TAO in your coldkey address for miner registration. Once your coldkey is funded, run the command below to register your hotkey:
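A sketch of the registration command (wallet names are placeholders; add --subtensor.network local or --subtensor.chain_endpoint if you run your own subtensor node):

```bash
# Register the hotkey on subnet 27; the registration cost is burned from the coldkey
btcli subnet register --netuid 27 --wallet.name COLDKEYNAME --wallet.hotkey HOTKEYNAME

# Check the current per-subnet registration cost
btcli subnet list
```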
If you get the error ‘too many registrations this interval’, it means the maximum number of registrations for that interval has been reached; wait a bit and try again. You can check the registration cost here
7.1 SETTING UP UFW FOR MINER:
TCP Ports: Open ports using ufw (choose your own numbers in place of xxxx and yyyy) and use them as the axon port. Also open port 4444 on your system (required for allocation and for the maximum score to apply):
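A sketch using ufw; xxxx and yyyy are whatever port numbers you chose:

```bash
# Keep SSH reachable, then open your chosen axon ports and port 4444
sudo ufw allow 22/tcp
sudo ufw allow xxxx:yyyy/tcp
sudo ufw allow 4444/tcp
sudo ufw enable
sudo ufw status
```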
7.2 RUNNING THE MINER:
Now, using pm2, run miner as:
If you have a local subtensor (recommended):
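A sketch of the launch command (wallet names, the axon port, and any IP values are placeholders; check the compute-subnet README for the exact flag set used by the current release):

```bash
# Launch the miner under pm2 against a local subtensor
pm2 start ./neurons/miner.py --name miner --interpreter python3 -- \
  --netuid 27 \
  --subtensor.network local \
  --wallet.name COLDKEYNAME \
  --wallet.hotkey HOTKEYNAME \
  --axon.port xxxx \
  --logging.debug
```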
To set up your miner, first replace COLDKEYNAME and HOTKEYNAME with the names of your keys. Then, update axon.port with the 4-digit number you selected for xxxx above. For the parameters --axon.external and --axon.ip, use your miner machine's public IP address in place of the 'xxxxx's; you can find this IP by running hostname -I. Though not always necessary, these parameters can be crucial for resolving certain connectivity issues.
When operating a miner with a local subtensor running on a separate machine, it's crucial to add and adjust the --subtensor.chain_endpoint parameter. This should be set to the IP and port (XXX.XX.XXX.XXX:XXXX) where your subtensor is running. If your subtensor is local to the miner machine, this parameter can be removed.
8. CHECKING MINER LOGS
After launching the compute miner, you can then check the logs using the two commands below:
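For example (the process name is whatever you gave pm2 at launch):

```bash
# Stream the miner's logs, or open pm2's live monitoring view
pm2 logs miner
pm2 monit
```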
Run pm2 logs and wait to see incoming HTTP traffic. Ensure you are receiving challenges and then successfully solving them.
9. RUNNING A VALIDATOR
Validators hold the critical responsibility of rigorously assessing and verifying the computational capabilities of miners. This multifaceted evaluation process commences with validators requesting miners to provide comprehensive performance data, which includes not only processing speeds and efficiencies but also critical metrics like Random Access Memory (RAM) capacity and disk space availability.
Computational Integrity: Following the receipt of this detailed hardware and performance information, validators proceed to test the miners' computational integrity. This is achieved by presenting them with complex benchmarking challenges, designed to evaluate the processing power and reliability of the miners' systems. Validators adjust the difficulty of these problems based on the comprehensive performance profile of each miner.
In addition to measuring the time taken by miners to resolve these problems, validators meticulously verify the accuracy of the responses. This thorough examination of both speed and precision forms the crux of the evaluation process.
Dynamic Scoring Mechanism: Validators update the miners' scores, reflecting a holistic view of their computational capacity, efficiency, and hardware quality. This score then determines the miner's weight within the network, directly influencing their potential rewards and standing. This scoring process, implemented through a Python script, considers various factors including CPU, GPU, hard disk, and RAM performance. The script's structure and logic are outlined below:
Understanding the Score Calculation Process
The scoring system has been updated. If you want to review the old hardware-based mechanism, see: Hardware scoring
The score calculation function determines a miner's performance based on various factors:
Successful Problem Resolution: It first checks if the problem was solved successfully. If not, the score remains at zero.
Problem Difficulty: This measures the complexity of the solved task. The code restricts this difficulty to a maximum allowed value.
Weighting Difficulty and Elapsed Time: The function assigns a weight to both the difficulty of the solved problem (75%) and the time taken to solve it (25%).
Exponential Rewards for Difficulty: Higher problem difficulty leads to more significant rewards. An exponential formula is applied to increase rewards based on difficulty.
Allocation Bonus: Miners whose machines are allocated receive an additional bonus added to their final score.
Effect of Elapsed Time: The time taken to solve the problem impacts the score. A shorter time results in a higher score.
Max Score = 1e5
Score = Lowest Difficulty + (Difficulty Weight * Problem Difficulty) + (Elapsed Time * 1 / (1 + Elapsed Time) * 10000) + Allocation Bonus
Normalized Score = (Score / Max Score) * 100
Example 1: Miner A's Hardware Scores and Weighted Total
Successful Problem Resolution: True
Elapsed Time: 4 seconds
Problem Difficulty: 6
Allocation: True
Score = 8.2865
Example 2: Miner B's Hardware Scores and Weighted Total
Successful Problem Resolution: True
Elapsed Time: 16 seconds
Problem Difficulty: 8
Allocation: True
Score = 24.835058823529412
Resource Allocation Mechanism
The allocation mechanism within subnet 27 is designed to optimize the utilization of computational resources effectively. Key aspects of this mechanism include:
Resource Requirement Analysis: The mechanism begins by analyzing the specific resource requirements of each task, including CPU, GPU, memory, and storage needs.
Miner Selection: Based on the analysis, the mechanism selects suitable miners that meet the resource requirements. This selection process considers the current availability, performance history, and network weights of the miners.
Dynamic Allocation: The allocation of tasks to miners is dynamic, allowing for real-time adjustments based on changing network conditions and miner performance.
Efficiency Optimization: The mechanism aims to maximize network efficiency by matching the most suitable miners to each task, ensuring optimal use of the network's computational power.
Load Balancing: It also incorporates load balancing strategies to prevent overburdening individual miners, thereby maintaining a healthy and sustainable network ecosystem.
Through these functionalities, the allocation mechanism ensures that computational resources are utilized efficiently and effectively, contributing to the overall robustness and performance of the network.
Validators can send requests to reserve access to resources from miners by specifying the required specs manually in register.py and running the script: https://github.com/neuralinternet/Compute-Subnet/blob/main/neurons/register.py
For example: {'cpu':{'count':1}, 'gpu':{'count':1}, 'hard_disk':{'capacity':10737418240}, 'ram':{'capacity':1073741824}}
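A sketch of running the script once the spec dictionary has been edited (run from the repository root; any wallet or netuid flags the script accepts can be listed with --help and are not shown here):

```bash
# Reserve miner resources matching the specs defined in neurons/register.py
python3 neurons/register.py
```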
Options
All list arguments now use a comma separator.
--netuid: (Optional) The chain subnet uid. Default: 27.
--auto_update: (Optional) Auto-update the repository. Default: True.
--blacklist.exploiters: (Optional) Automatically use the list of internal exploiter hotkeys. Default: True.
--blacklist.hotkeys <hotkey_0,hotkey_1,...>: (Optional) List of hotkeys to blacklist. Default: [].
--blacklist.coldkeys <coldkey_0,coldkey_1,...>: (Optional) List of coldkeys to blacklist. Default: [].
--whitelist.hotkeys <hotkey_0,hotkey_1,...>: (Optional) List of hotkeys to whitelist. Default: [].
--whitelist.coldkeys <coldkey_0,coldkey_1,...>: (Optional) List of coldkeys to whitelist. Default: [].
Validator options
Flags that you can use with the validator script (an example launch follows the list).
--validator.whitelist.unrecognized: (Optional) Whitelist unrecognized miners. Default: False.
--validator.perform.hardware.query: (Optional) Perform the old perfInfo method; useful only as a personal benchmark, it doesn't affect the score. Default: False.
--validator.challenge.batch.size <size>: (Optional) Batch size used for the challenge queries. For lower hardware specifications you might want to use a different batch size than the default; keep in mind that the lower the batch size, the longer it takes to perform all challenge queries. Default: 64.
--validator.force.update.prometheus: (Optional) Force the try-update of the prometheus version. Default: False.
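Putting a few of these together, a hedged example of a validator launch under pm2 (flag names as listed above; wallet names are placeholders, and the reduced batch size is only an illustration):

```bash
# Run a validator with a smaller challenge batch size for modest hardware
pm2 start ./neurons/validator.py --name validator --interpreter python3 -- \
  --netuid 27 \
  --wallet.name COLDKEYNAME \
  --wallet.hotkey HOTKEYNAME \
  --validator.challenge.batch.size 32 \
  --logging.debug
```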
Miner options
Flags that you can use with the miner script (an example launch follows the list).
--miner.hashcat.path <path>: (Optional) The path of the hashcat binary. Default: hashcat.
--miner.hashcat.workload.profile <profile>: (Optional) Performance profile to apply with hashcat: 1 Low, 2 Economic, 3 High, 4 Insane. Run hashcat -h for more information. Default: 3.
--miner.hashcat.extended.options <options>: (Optional) Any extra options you find useful to append to the hashcat runner (-O is worth considering). Run hashcat -h for more information. Default: ''.
--miner.whitelist.not.enough.stake: (Optional) Whitelist validators without enough stake. Default: False.
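For example, a hedged variant of the miner launch from section 7.2 that sets the hashcat options explicitly (values shown are illustrative, not recommendations):

```bash
# Miner launch with an explicit hashcat workload profile and extra hashcat options
pm2 start ./neurons/miner.py --name miner --interpreter python3 -- \
  --netuid 27 \
  --wallet.name COLDKEYNAME \
  --wallet.hotkey HOTKEYNAME \
  --axon.port xxxx \
  --miner.hashcat.workload.profile 4 \
  --miner.hashcat.extended.options '-O' \
  --logging.debug
```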
10. BENCHMARKING THE MACHINE
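One way to gauge your hashrate is hashcat's built-in benchmark (a sketch; it is an assumption that this mirrors the exact hash mode the subnet's challenges use):

```bash
# Benchmark all hash modes and note the reported MH/s for your GPUs
hashcat -b

# Or restrict the benchmark to a single mode, e.g. SHA2-256
hashcat -b -m 1400
```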
The benchmark output reports your hashrate. The recommended minimum hashrate for the current difficulty is >= 4500 MH/s. Difficulty will increase over time.
11. MORE USEFUL COMMANDS
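A few commands that are generally useful while operating a miner (an illustrative, hedged list; wallet names are placeholders):

```bash
pm2 status                                        # list managed processes and their state
pm2 restart miner                                 # restart the miner process
pm2 logs miner --lines 200                        # show the last 200 log lines
btcli wallet overview --wallet.name COLDKEYNAME   # registrations, stake, and emissions
btcli wallet balance --wallet.name COLDKEYNAME    # coldkey TAO balance
```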