DA Node
The Data Availability (DA) module allows users to submit a piece of data, referred to as a DA blob. This data is redundantly encoded by the client's proxy and divided into several slices, which are then sent to DA nodes. DA nodes become eligible to verify the correctness of DA slices by staking. Each DA node verifies the integrity and correctness of its slice and signs it. Once an aggregated signature covering more than 2/3 of the nodes is on-chain, the data behind the related hash is considered to have been published in a decentralized manner.
To incentivize DA nodes to store the signed data for a period of time, the signing process itself does not provide any rewards. Instead, rewards are distributed through a process called DA Sampling. During each DA Sample round, any DA slice within a certain time frame can generate a lottery chance for a reward. A DA node must actually store the corresponding slice in order to redeem the lottery chance and claim the reward.
The process of generating DA nodes is the same as the underlying chain's PoS process: both are achieved through staking. During each DA epoch (approximately 8 hours), DA nodes are assigned to several quorums. Within each quorum, nodes are assigned numbers 0 through 3071. Each number is assigned to exactly one node, but a node may be assigned to multiple quorums, depending on its staking weight.
DA Processing Flow
DA takes an input of up to 32,505,852 bytes and processes it as follows (a code sketch of the first two steps follows this list):
Padding and Size Encoding:
Pad the data with zeros until it reaches 32,505,852 bytes.
Append a 4-byte little-endian integer at the end to record the original input size.
Matrix Formation:
Slice the padded data into a 1024-row by 1024-column matrix, filling each row consecutively, with each element being 31 bytes.
Pad each 31-byte element with an additional zero byte, making each element 32 bytes.
Redundant Encoding:
Expand the data to a 3072-row by 1024-column matrix using redundancy coding.
Calculate the erasure commitment and data root of the expanded matrix.
Submission to DA Contract:
Submit the erasure commitment and data root to the DA contract and pay the fee.
The DA contract will determine the epoch to which the data belongs and assign a quorum.
Data Distribution:
Send the erasure commitment, data root, each row of the matrix, and necessary proofs of correctness to the corresponding DA nodes.
Signature Aggregation:
More than 2/3 of the DA nodes sign the erasure commitment and data root.
Aggregate the signatures using the BLS signature algorithm and submit the aggregated signature to the DA contract.
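The padding and matrix-formation steps can be made concrete with a short sketch. This is an illustrative Python version under the sizes stated above, not the reference implementation; in particular, whether the extra zero byte leads or trails each 31-byte element is an encoding detail not specified here (the sketch appends it).

```python
RAW_SIZE = 32_505_852   # maximum input size: 1024 * 1024 * 31 - 4 bytes
ROWS, COLS = 1024, 1024 # matrix dimensions before redundant encoding
ELEMENT_SIZE = 31       # bytes per element before the 1-byte zero pad

def pad_and_encode_size(data: bytes) -> bytes:
    """Zero-pad the input to RAW_SIZE bytes, then append the original
    length as a 4-byte little-endian integer (total: 32,505,856 bytes)."""
    if len(data) > RAW_SIZE:
        raise ValueError("input exceeds the maximum DA blob size")
    padded = data + b"\x00" * (RAW_SIZE - len(data))
    return padded + len(data).to_bytes(4, "little")

def to_matrix(padded: bytes) -> list[list[bytes]]:
    """Slice the padded data row by row into a 1024 x 1024 matrix of
    31-byte elements, extending each element with one zero byte so
    that every element is 32 bytes."""
    matrix = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            offset = (r * COLS + c) * ELEMENT_SIZE
            row.append(padded[offset:offset + ELEMENT_SIZE] + b"\x00")
        matrix.append(row)
    return matrix
```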
Details of erasure encoding
The data root is defined as the input root obtained by treating the 3072 × 1024 matrix of 32-byte elements as a continuous storage submission input. Specifically, according to the storage submission requirements, this data does not need any zero padding and is divided into a 16,384-element sector array and an 8,192-element sector array.
DA nodes need to verify two parts (a sketch of the first, Merkle-style check follows this list):
The consistency between the received slice and the data root, mainly achieved through Merkle proofs.
The consistency between the received slice and the erasure commitment, verified using KZG proofs. Here, we use the AMT protocol optimization introduced in LVMT to reduce the proving overhead.
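For the first check, a generic binary-Merkle membership verification captures the idea. The actual tree layout and hash function follow the storage submission spec; SHA-256 below is only a readable stand-in.

```python
import hashlib

def _parent(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def verify_merkle_proof(leaf: bytes, index: int,
                        proof: list[bytes], root: bytes) -> bool:
    """Fold the sibling hashes in `proof` from the leaf up to the root;
    at each level, `index` decides whether the running node is the left
    or the right child."""
    node = hashlib.sha256(leaf).digest()
    for sibling in proof:
        node = _parent(node, sibling) if index % 2 == 0 else _parent(sibling, node)
        index //= 2
    return node == root
```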
DA Sampling
The blockchain will periodically release DA Sampling tasks at preset heights, every SAMPLE_PERIOD blocks, with the parent block hash of each such height used as the sampleSeed for DA Sampling.
List of Parameters
Constant parameters
Admin adjustable parameters
Responses
During each sample period, each DA slice (one row) can be divided into 32 sub-lines. For each sub-line, the podasQuality is computed using the dataRoot and the assigned epoch and quorumId of its corresponding DA blob.
💡 By default, all integers are encoded in 256-bit big-endian format when computing hash values. lineIndex is the only exception; it is encoded in 64-bit big-endian format. A hash value can be viewed interchangeably as either 32 bytes of data or a 256-bit big-endian integer.
If the podasQuality is less than the current podasTarget in the DA contract, and the epoch falls within [currentEpoch - EPOCH_WINDOW_SIZE, currentEpoch), then this sub-line is regarded as a valid DAS response and is eligible for the reward. The DA node assigned to this row can claim the reward.
During a sample period, at most TARGET_SUBMITS × 2 DAS responses can be submitted and rewarded; any submissions exceeding this limit will be rejected.
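A hedged sketch of the response check follows. The exact preimage layout of podasQuality is not spelled out above, so the field order below (sampleSeed, dataRoot, epoch, quorumId, lineIndex, sub-line data) is an assumption; sha3_256 stands in for the contract's hash function, and EPOCH_WINDOW_SIZE is a placeholder value.

```python
import hashlib

EPOCH_WINDOW_SIZE = 300  # placeholder; the real value is in the parameter tables

def be256(x: int) -> bytes:
    """256-bit big-endian encoding, the default for integers in hashes."""
    return x.to_bytes(32, "big")

def podas_quality(sample_seed: bytes, data_root: bytes, epoch: int,
                  quorum_id: int, line_index: int, sub_line: bytes) -> int:
    # Assumed preimage layout; lineIndex is the one 64-bit big-endian field.
    preimage = (sample_seed + data_root + be256(epoch) + be256(quorum_id)
                + line_index.to_bytes(8, "big") + sub_line)
    digest = hashlib.sha3_256(preimage).digest()  # stand-in hash function
    return int.from_bytes(digest, "big")          # hash read as a 256-bit integer

def is_valid_response(quality: int, podas_target: int,
                      epoch: int, current_epoch: int) -> bool:
    in_window = current_epoch - EPOCH_WINDOW_SIZE <= epoch < current_epoch
    return quality < podas_target and in_window
```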
Difficulty Adjustment
TARGET_SUBMITS valid responses are expected in each sample period. If more or fewer responses are submitted during a sample period, the podasTarget is adjusted accordingly by the DA contract (an illustrative sketch of such an adjustment follows).
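The concrete adjustment formula is not reproduced in this extract. As an illustration only, a damped proportional rule of the following shape keeps the expected number of valid responses near TARGET_SUBMITS; the damping factor and clamping bounds are assumptions, not protocol values.

```python
TARGET_SUBMITS = 20  # placeholder; the real value is in the parameter tables

def adjust_target(podas_target: int, actual_submits: int) -> int:
    """Lower the target (making responses harder) when more than
    TARGET_SUBMITS valid responses arrived, raise it (easier) when
    fewer did, with damping to avoid oscillation."""
    delta = (actual_submits - TARGET_SUBMITS) / TARGET_SUBMITS
    damped = max(-0.5, min(0.5, delta / 8))  # assumed damping and clamp
    return int(podas_target * (1 - damped))
```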
Economic Model
List of Parameters
Admin adjustable parameters
[1] TARGET_SUBMITS × (time elapsed for EPOCH_WINDOW_SIZE epochs) / (time elapsed in one SAMPLE_PERIOD) / REWARD_RATIO should be approximately 0.5 to 2.
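Intuitively, this ratio compares the number of valid responses expected while an epoch's reward pool is live against REWARD_RATIO, so a value near 1 means the pool is roughly paid out over its window. A small sanity check, where every concrete value is a placeholder rather than a protocol parameter:

```python
def pool_drain_ratio(target_submits: int, window_seconds: float,
                     sample_period_seconds: float, reward_ratio: int) -> float:
    """Expected valid responses while an epoch's pool is live, divided
    by REWARD_RATIO (footnote [1] wants this in roughly [0.5, 2])."""
    expected_responses = target_submits * window_seconds / sample_period_seconds
    return expected_responses / reward_ratio

# Placeholder numbers: 20 submits per period, 8-hour epochs, a 300-epoch
# window, 30-minute sample periods, REWARD_RATIO of 100,000.
ratio = pool_drain_ratio(20, 300 * 8 * 3600, 30 * 60, 100_000)
assert 0.5 <= ratio <= 2, ratio  # 0.96: within the recommended band
```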
Pricing
When users submit the metadata for a DA blob, they need to pay a fee of BLOB_PRICE.
Reward
When a DA epoch ends, all the rewards from that DA epoch will be stored in the DA reward pool. Each time a valid response is submitted, 1 / REWARD_RATIO of the reward pool will be distributed to the corresponding DA node.
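A minimal sketch of the payout rule, assuming that "1 / REWARD_RATIO of the reward pool" refers to the pool's current balance at claim time (so successive payouts decay geometrically as the pool drains):

```python
def claim_reward(pool_balance: int, reward_ratio: int) -> tuple[int, int]:
    """Pay out 1 / reward_ratio of the current pool balance for one
    valid response; return (payout, remaining balance)."""
    payout = pool_balance // reward_ratio
    return payout, pool_balance - payout
```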
System Rewards
In the early stages of the ecosystem, the foundation can reserve a portion of tokens for system rewards. When a DA node submits a valid response, an additional reward of BASE_REWARD will be issued.
The funds for the base reward will be manually deposited into the reward contract and tracked separately. If the remaining balance is insufficient to cover a single base reward, nodes will not be able to receive the full base reward.
Service Fee
A system service fee is charged as a proportion of the DA fees paid by the user, according to the parameter SERVICE_FEE_RATE_BP.
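The rate is expressed in basis points, so the service fee is fee × SERVICE_FEE_RATE_BP / 10,000. For example, a SERVICE_FEE_RATE_BP of 100 (a value used here purely for illustration) corresponds to 100 basis points, i.e. 1% of the user's fee.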
Data Sharding
When the total data size in the network exceeds the storage capacity of a single node, the node will only store a part of the data.
Run a node
See here for instructions on becoming a DA signer and running your own node.