Biττensor: Compute Subnet Miner Setup
How to Mine on the Bittensor Compute Subnet (SN27) - The Permission-less Compute Market
Bittensor’s Layer-0: Subnet 27 is a permissionless compute market for platform-composable cloud services, integrating various cloud platforms into a cohesive unit. Its purpose is to enable higher-level cloud platforms to offer seamless compute composability across different underlying platforms. Given the proliferation of cloud computing, there is a growing need for a subnet capable of seamlessly integrating these platforms, allowing efficient resource sharing and allocation. This compute-composable subnet enables nodes to contribute computational power, with validators ensuring the integrity and efficiency of the shared resources, empowering both the Bittensor ecosystem and cloud computing as a whole.
For miners interested in joining this innovative network, Subnet 27 offers the opportunity to contribute computing resources and earn $TAO in return. This guide is structured to provide a comprehensive breakdown of how you can get started with contributing to Bittensor’s commodity markets using your compute power.
With Subnet 27, we decentralize compute and the people’s right to access compute. Ten-figure, VC-funded AI companies are swallowing up the GPU supply while advocating for computing regulations. We must stand up for our right to compute. Subnet 27 decentralizes computing resources by combining siloed pools of compute on a blockchain to be validated and accessed trustlessly. This opens a door to scalable compute without the constraints of centralized power.
Compute is a fundamental necessity for all operations; welcome to Bittensor's Layer-0. An incentivized, permissionless compute market is priceless. Your contribution means less power to the AI oligopoly and more to the collective. Join us as we democratize compute and AGI: bring your own GPUs.
Subnet 27 brings an entirely new resource to Bittensor: compute. It is arguably the most important and finite resource needed for the creation of machine intelligence. All network participants will have access to an ever-expanding pool of compute for all development needs.
Governments and regulatory bodies are in the process of regulating GPUs for AI. These political moves, coupled with a shortage of GPUs in the market, hinder the collective when it comes to AI/ML development and access. Only big tech and those with deep pockets can participate in this transformative technology.
Subnet 27 changes this. With a decentralized compute subnet plugged into Bittensor, we become ungovernable. At the end of the day, what is a decentralized supercomputer without access to permissionless compute?
Subnet 27 is live; come and take it!
Miners contribute processing resources, notably GPU (Graphics Processing Unit) instances.
Performance-Based Mining: The system operates on a performance-based reward mechanism, where miners are incentivized through a dynamic reward structure correlated to the processing capability of their hardware. High-performance devices are eligible for increased compensation, reflecting their greater contribution to the network's computational throughput. Emphasizing the integration of GPU instances is critical due to their superior computational power, particularly for machine learning tasks.
Compute Subnet Mining Video Tutorial:
Rent A Server From Subnet 27: https://app.neuralinternet.ai/
Compute Subnet Github: https://github.com/neuralinternet/compute-subnet
Compute Subnet Discord Channel: https://discord.gg/t7BMee4w
Real-Time Compute Subnet Metrics: https://opencompute.streamlit.app/
We greatly appreciate and encourage contributions from the community to help improve and advance the development of the Compute Subnet. We have an active bounty program in place to incentivize and reward valuable contributions.
If you are interested in contributing to the Compute Subnet, please review our Reward Program for Valuable Contributions document on GitHub. This document outlines the details of the bounty program, including the types of contributions eligible for rewards and the reward structure.
Reward Program for Valuable Contributions: https://github.com/neuralinternet/compute-subnet/blob/main/CONTRIBUTING.md
CLI Guide For Reserving Compute Subnet Resources: Validator Utilization of Compute Resources
We do not support containerized (Docker-based) cloud platforms such as RunPod, Vast.ai, and Lambda.
We strongly urge miners to provide their own hardware to foster and build a stronger network for all. Providing your own in-house hardware may come with its own benefits.
If you cannot supply your hardware in-house, here are some usable GPU providers:
Latitude.sh (referral code: BITTENSOR27)
Oblivus (referral code: BITTENSOR27 - 2% cash back in platform expenditures)
Examples of GPUs to rent (listed in order of computing power):
GPU Base Scores: The following GPUs are assigned specific base scores, reflecting their relative performance. To understand scoring, please see the Proof-of-GPU page here:
NVIDIA H200: 4.00
NVIDIA H100 80GB HBM3: 3.30
NVIDIA H100: 2.80
NVIDIA A100-SXM4-80GB: 1.90
NVIDIA A100 80GB PCIe: 1.65
NVIDIA L40s: 1.10
NVIDIA L40: 1.00
NVIDIA RTX 6000 Ada Generation: 0.90
NVIDIA RTX A6000: 0.78
This installation process requires Ubuntu 22.04 and Python 3.8 or higher. You are limited to one external IP per UID. Automatic blacklisting is in place if validators detect anomalous behavior.
To run a miner, you must install Docker and run the service. If Docker is already installed on your machine, skip ahead to step 2.1.
Install Link: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
Verify that the Docker Engine installation is successful by running the hello-world image.
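The standard smoke test from Docker's own documentation is the hello-world image:

```shell
# Runs Docker's official test image; it prints a greeting and exits.
# (sudo may be unnecessary if your user is in the docker group.)
sudo docker run hello-world
```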
This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.
See Bittensor’s documentation for alternative installation instructions.
Bittensor Documentation: docs.bittensor.com
Verify using the btcli command, which will give you an output similar to below:
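A minimal check, assuming the bittensor package installed btcli onto your PATH, is to print its help text:

```shell
# Confirms the Bittensor CLI is installed and callable.
btcli --help
```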
Create a Cold & Hotkey with the commands below:
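The key-creation commands below use btcli's wallet subcommands; the wallet and hotkey names ("miner" / "default") are placeholders of our choosing.

```shell
# Create a coldkey (holds funds; keep its mnemonic offline and safe).
btcli wallet new_coldkey --wallet.name miner

# Create a hotkey under that wallet (used by the running miner).
btcli wallet new_hotkey --wallet.name miner --wallet.hotkey default
```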
Follow the prompts after running each of these commands.
If you already have a key, you can regenerate it ‘safely’ on a machine using btcli w regen_coldkeypub. However, you must regenerate the full key if you plan to register or transfer from that wallet. regen_coldkeypub lets you load the key without exposing your mnemonic to the server. If you wish, you can generate a key pair on a safe local machine to use as cold storage for the funds you send.
Access the Compute-Subnet Directory
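One way to obtain the code, using the repository URL given earlier (the directory name and install steps are assumed; defer to the repository README if they differ):

```shell
# Clone the Compute Subnet repository and enter it.
git clone https://github.com/neuralinternet/compute-subnet.git
cd compute-subnet

# Install the Python package and its dependencies (per repo conventions).
python3 -m pip install -e .
```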
For optimal functionality of the Compute Subnet, it's essential to install the appropriate graphics drivers and dependencies.
Required dependencies for validators and miners:
If the NVIDIA toolkit and drivers are already installed on your machine, scroll down to verify, then move on to the WandB setup.
You may need to reboot the machine at this point
The simplest way to check the installed CUDA version is with the NVIDIA CUDA compiler (nvcc). The output should look something like:
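Assuming the CUDA toolkit is on your PATH, the version check is:

```shell
# Prints the CUDA toolkit release nvcc was built with,
# e.g. a line like "Cuda compilation tools, release 12.x".
nvcc --version
```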
To log into the wandb project named opencompute from neuralinternet, miners and validators need a WandB API key. This is necessary for your miner to be scored properly. You can obtain a free API key by creating an account here: https://wandb.ai/
Inside the Compute-Subnet directory, rename the .env.example file to .env and replace the placeholder with your actual API key.
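A sketch of that step; the exact variable name inside the file may differ, so check .env.example itself:

```shell
cd compute-subnet        # path assumed from the clone step
mv .env.example .env

# Then open .env in an editor and replace the placeholder, e.g. a line such as:
#   WANDB_API_KEY="<your-key>"
```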
You can now track your mining and validation statistics on WandB. For access, visit: https://wandb.ai/neuralinternet/opencompute. To view the network's overall statistics, check out our real-time dashboard here: https://opencompute.streamlit.app/
Install and run pm2 commands to keep your miner online at all times.
Confirm pm2 is installed and running correctly
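One common route is installing pm2 globally via npm (other install methods work too):

```shell
# Install npm, then pm2 as a global process manager.
sudo apt update && sudo apt install -y npm
sudo npm install -g pm2

# Confirm pm2 is installed and the daemon responds.
pm2 --version
pm2 status
```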
Add the NVIDIA Docker repository
For more information, refer to the NVIDIA Container Toolkit installation guide: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installing-with-apt
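The commands below mirror the linked NVIDIA guide at the time of writing; if they diverge from the guide, prefer the guide:

```shell
# Add NVIDIA's signing key and apt repository for the container toolkit.
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit, wire it into Docker, and restart the daemon.
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```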
Make sure to check that docker is properly installed and running correctly:
This is an example of it running correctly:
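Two quick checks, assuming the NVIDIA container runtime was configured in the previous step:

```shell
# The Docker daemon should report "active (running)".
sudo systemctl status docker --no-pager

# A GPU-enabled container should see your GPUs via nvidia-smi.
sudo docker run --rm --gpus all ubuntu nvidia-smi
```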
At this point, you will need some $TAO in your coldkey address for miner registration. Once your coldkey is funded, run the command below to register your hotkey:
For testnet use: btcli s register --subtensor.network test --netuid 15
If you get the error ‘too many registrations this interval’, it means the maximum number of registrations for that cycle has been reached; wait a bit and try again. You can check the registration cost here
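For mainnet, the registration command looks like the following (COLDKEYNAME and HOTKEYNAME are the wallet names you created earlier):

```shell
# Burn-register your hotkey on the Compute Subnet (netuid 27).
btcli s register --netuid 27 --wallet.name COLDKEYNAME --wallet.hotkey HOTKEYNAME
```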
Open your desired ssh port for allocations; default is 4444 (required for allocation):
Below we open the allocation SSH port
TCP Ports: Open ports using ufw (substitute numbers of your choice for xxxx and yyyy) and use them as the axon port.
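A sketch of the ufw rules described above; keep the xxxx/yyyy placeholders until you have chosen your own port numbers, and open SSH first so you do not lock yourself out:

```shell
sudo ufw allow 22/tcp          # keep SSH reachable before enabling ufw
sudo ufw allow 4444/tcp        # default allocation SSH port
sudo ufw allow xxxx:yyyy/tcp   # your chosen axon port range
sudo ufw enable
sudo ufw status
```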
--miner.whitelist.not.enough.stake: (Optional) Whitelist validators without enough stake. Default: False.
--miner.whitelist.not.updated: (Optional) Whitelist validators not using the latest version of the code. Default: False.
--miner.whitelist.updated.threshold: (Optional) Total quorum before starting the whitelist. Default: 60 (%).
To set up your miner, first replace COLDKEYNAME and HOTKEYNAME with the names of your keys. Then update axon.port with the 4-digit number you selected for xxxx above. For the --axon.external and --axon.ip parameters, use your miner machine's public IP address in place of the 'xxxxx's. You can find this IP by running hostname -I. Though not always necessary, these parameters can be crucial for resolving certain connectivity issues.
When operating a miner with a local subtensor running on a separate machine, it is crucial to add and adjust the --subtensor.chain_endpoint parameter. This should be set to the IP and port (XXX.XX.XXX.XXX:XXXX) where your subtensor is running. If your subtensor is local to the miner machine, this parameter can be removed.
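A hedged sketch of the pm2 launch command; the entrypoint path, process name, and flag set are assumptions based on the repository's conventions, so verify the exact command in the Compute Subnet README:

```shell
# Launch the miner under pm2 so it stays online (run from the repo root).
pm2 start ./neurons/miner.py --name compute-miner --interpreter python3 -- \
  --netuid 27 \
  --wallet.name COLDKEYNAME --wallet.hotkey HOTKEYNAME \
  --axon.port xxxx \
  --logging.debug
```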
After launching the compute miner, you can check its logs using pm2:
Run pm2 logs and wait for incoming HTTP traffic. Ensure you are receiving challenges and then solving them.
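For example, assuming the pm2 process was named compute-miner at launch:

```shell
pm2 status                # confirm the miner process is "online"
pm2 logs compute-miner    # stream logs; watch for incoming challenges
```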