Storage fee adjustment mechanism based on utilization

In the current fee implementation inherited from Substrate (broadly speaking), we already have a mechanism that adjusts fees (compute fees only, according to the implementation) based on chain utilization: Token Economics | Research at Web3 Foundation

However, since Substrate isn't really designed for a dynamic cost of storage, storage fees are not adjusted at all (and there is no corresponding logic in Substrate's pallets; they only seem to care about weight).
I'm not 100% sure that is correct, though. Maybe storage fees should be adjusted as well?

This question came up while I was working on the dynamic issuance implementation, which is only concerned with blockspace utilization and, just like the way we derive storage fees, doesn't account for potentially non-linear charging of fees based on chain utilization (not used right now, but it will start to matter if we decide to enable it).


To clarify, our dynamic issuance does account for the current blockspace fee (in the parameter b). For a block at a given utilization %, the protocol would issue less under higher storage fees than it would at the same utilization % with cheaper fees. It's not a strong relationship, but it's present.
While I’m not opposed to having a storage fee adjustment, we should preserve the current intuitions behind the pricing based on available storage. I think the current formula should be used for the base-target value that gets adjusted up or down based on utilization.

I'm not necessarily saying we should do it either. It is a different kind of resource, and it is already dynamically priced. I just thought I'd bring this up as something to think about while I was reading the related code.

My understanding is that the compute fee adjustment is designed to constrain compute usage when demand is high (blocks are full in terms of fees). But the same can also happen with storage.

Yes, storage fees can be adjusted in a way similar to compute fees. This has been justified by recent research papers, such as "Foundations of Transaction Fee Mechanism Design" and "Tiered Mechanisms for Blockchain Transaction Fees". In particular, their system models and assumptions apply to both computing resources and storage resources. Fundamentally, a higher price reduces users' demand, making sure that the system is not overly congested. For instance, EIP-1559 essentially does the following:

new fee = current fee * ( 1 + 0.125*(fullness_level - target_load)/target_load ).
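A minimal sketch of this style of update, assuming a simple floating-point model (a real runtime would use fixed-point arithmetic; the function and parameter names here are illustrative, not from any existing pallet):

```rust
/// EIP-1559-style multiplicative fee update (illustrative sketch only;
/// a real runtime would use fixed-point types rather than f64).
/// `fullness_level` and `target_load` are fractions of block capacity.
fn update_fee(current_fee: f64, fullness_level: f64, target_load: f64) -> f64 {
    // new fee = current fee * (1 + 0.125 * (fullness_level - target_load) / target_load)
    current_fee * (1.0 + 0.125 * (fullness_level - target_load) / target_load)
}

fn main() {
    // A 75%-full block against a 50% target raises the fee by 6.25%.
    println!("{}", update_fee(100.0, 0.75, 0.50)); // 106.25
    // An empty block against a 50% target lowers it by 12.5%.
    println!("{}", update_fee(100.0, 0.00, 0.50)); // 87.5
}
```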

We can use a similar formula for both compute fees and storage fees. Also, we can incorporate Dariia’s suggestion “I think the current formula should be used for the base-target value that gets adjusted up or down based on utilization.” I will have another post on this.
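For concreteness, here is one way to read that suggestion (names and structure are hypothetical, not existing runtime code): keep the current available-storage formula as the base, and let a utilization-driven multiplier move the charged fee up or down around it.

```rust
/// Hypothetical storage-fee state combining both ideas.
struct StorageFee {
    /// Base-target value from the current available-storage formula.
    base: f64,
    /// Utilization-driven multiplier, adjusted up or down each block.
    multiplier: f64,
}

impl StorageFee {
    /// One short-term update step; `c` controls the per-block change.
    fn on_block(&mut self, fullness_level: f64, target_load: f64, c: f64) {
        self.multiplier *= 1.0 + c * (fullness_level - target_load) / target_load;
    }

    /// The storage fee actually charged in the next block.
    fn current(&self) -> f64 {
        self.base * self.multiplier
    }
}

fn main() {
    let mut fee = StorageFee { base: 100.0, multiplier: 1.0 };
    fee.on_block(0.75, 0.5, 0.125); // a full-ish block pushes the fee up
    println!("{}", fee.current()); // 106.25
}
```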

P.S.: However, the fancy mechanisms in the above research papers (see, e.g., page 11 of Paper 1 and Mechanism 2 of Paper 2) don’t seem to be a good fit for us.

My previous post mainly focuses on the short-term behavior to handle network congestion. We increase the fees when demand exceeds capacity and decrease the fees otherwise.

How can we regulate the long-term behavior (to make sure the available storage is sufficient)? Following Dariia’s suggestion, we can introduce long-term storage-fee updates, on top of the short-term updates, like

new long-term storage fee = current long-term storage fee * ( 1 - c*(current_replication_factor - target_replication_factor)/target_replication_factor ).
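A sketch of this long-term rule under the same illustrative floating-point model as above (the constant `c` and how the replication factor is measured are assumptions, not defined here):

```rust
/// Long-term storage-fee update driven by the measured replication factor.
/// When replication falls short of the target, the fee drifts up; when it
/// exceeds the target, the fee drifts down. `c` is a small damping constant.
fn update_long_term_fee(
    current_fee: f64,
    replication_factor: f64,
    target_replication_factor: f64,
    c: f64,
) -> f64 {
    current_fee
        * (1.0 - c * (replication_factor - target_replication_factor) / target_replication_factor)
}

fn main() {
    // Replication below target (80 vs. 100) nudges the fee up by c * 20%.
    println!("{}", update_long_term_fee(100.0, 80.0, 100.0, 0.1)); // 102
    // Replication above target (120 vs. 100) nudges it down by c * 20%.
    println!("{}", update_long_term_fee(100.0, 120.0, 100.0, 0.1)); // 98
}
```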

To sum up, long-term storage-fee updates make sure the available storage is sufficient, while short-term storage-fee updates ensure that the available bandwidth is sufficient. In particular, short-term storage-fee updates can be disabled if bandwidth is not a bottleneck (i.e., when execution speed, rather than bandwidth, is the bottleneck).

To sum up our R&D discussion today, a good fee-update mechanism should be able to handle (short-term) network congestion and maintain the (long-term) replication ratio. The update formula can be something like
new fee = current fee * ( 1 + c * (fullness_level - target_load)/target_load ),
or like
new fee = current fee * ( 1 + c * (fullness_level - normalized_target_load) + c^2 * (fullness_level - normalized_target_load)^2 / 2 ),
where the parameter c can be chosen to make sure that the price change is at most a certain percentage (say, 60%) per day. Note that the second formula is just the second-order Taylor expansion of the exponential update new fee = current fee * exp( c * (fullness_level - normalized_target_load) ).
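As a worked example of bounding the daily change (the 6-second block time and per-block updates are assumptions for illustration): with formula 1 and target_load = 0.5, a completely full block multiplies the fee by (1 + c), so the largest admissible c follows from the daily cap.

```rust
/// Solve for the largest c such that, updating every block, the fee can
/// rise by at most `max_daily_change` (e.g., 0.6 for 60%) per day.
/// Assumes formula 1 with target_load = 0.5, where the worst case is a
/// completely full block multiplying the fee by (1 + c).
fn max_c(max_daily_change: f64, blocks_per_day: f64) -> f64 {
    // Worst case: (1 + c)^blocks_per_day = 1 + max_daily_change
    (1.0 + max_daily_change).powf(1.0 / blocks_per_day) - 1.0
}

fn main() {
    // With 6-second blocks (an assumption), there are 14,400 updates per day.
    let c = max_c(0.6, 14_400.0);
    println!("c = {c:.3e}"); // ≈ 3.264e-5
}
```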

Computing fees and short-term storage fees can be updated using the same formula. Sometimes, we can even disable the short-term storage-fee update if the block size is set to be sufficiently small relative to the bandwidth limit.

Fundamentally, we have three physical limits: the computing limit, the bandwidth limit, and the storage limit. The computing limit determines the maximum execution speed a typical operator can support; the bandwidth limit determines the maximum block size the network can support.

Note: The block size is often set to be sufficiently small relative to the bandwidth limit, because otherwise forking becomes severe and the system insecure. However, this is NOT a fundamental tradeoff between block size and security. In particular, we can use a parallel-chain architecture (e.g., the Prism paper) to overcome it, so that the block size can approach the bandwidth limit.

Finally, the available storage determines our replication factor. Through long-term storage-fee updates, we ensure a sufficient replication factor (say, > 100).