Consensus on storage is consensus on data, and storage-based computing builds on that idea to offer a trustworthy computation paradigm for blockchains.
DeFi on Ethereum has grown rapidly since early 2020. As yield farming on COMP and YFI gained popularity, Ethereum became congested: gas prices rose to 500 Gwei, pushing the fee for a single DeFi transaction to hundreds of dollars. A scalable smart-contract solution is therefore urgently needed to relieve the congestion.
Layer-2 solutions have gained prominence since Vitalik's publication in early October. Among them, ZK Rollups (ZKR) and Optimistic Rollups (OPR) stand out. They differ in the type of proof they rely on: ZKR uses validity proofs, while OPR uses fraud proofs.
They also differ in how they finalize a new state. ZKR resembles PoW: it uses a cryptographic proof to verify a computed result. OPR is more like PoS: it requires operators to deposit funds (similar to ETH2 staking) and relies on governance measures. Both have problems. Constrained by cryptographic theory and engineering limits, ZKR supports only a small opcode set and cannot handle sophisticated programs. OPR's governance structure forces operators to lock up funds, reducing liquidity.
Blockchain-based Storage
Ethereum's "World Computer" handles both computation and storage on-chain, which means every Ethereum node must take part in every computation. On-chain computation therefore carries a significant cost. Even though Layer-2 solutions can reduce both data and computation, validation is still done on Layer 1.
This post introduces a novel computation paradigm distinct from Ethereum's on-chain computation. With computation moved off-chain, the blockchain only needs to guarantee the availability and finality of the data stored on it. The key assumption is that if all inputs remain constant, the output remains constant. Consider z = x + y: if x and y are both 1, then z will always be 2, regardless of where the function is computed. A program's output is the same whether it runs on-chain or off-chain, so anyone can run the program off-chain. Because the computation is independent of the chain, only the parameters need deterministic storage. This is the storage-based computation paradigm.
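To make that assumption concrete, here is a minimal sketch in Python (all names are illustrative, not part of any real implementation): a pure function replayed over inputs fetched from storage yields the same output on every machine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Input:
    x: int
    y: int

def add(inp: Input) -> int:
    # A pure function: the output depends only on the inputs.
    return inp.x + inp.y

# Pretend these inputs were fetched from on-chain storage.
stored_inputs = [Input(1, 1), Input(2, 3)]

# Any party replaying the same inputs computes the same outputs.
outputs = [add(i) for i in stored_inputs]
assert outputs == [2, 5]  # identical on every machine, on-chain or off
```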
What is Arweave?
Arweave is a ledger-based file storage protocol. Users pay a one-time fee for permanent data storage, and a simple set of rules encourages the network to keep that data forever.
Arweave’s key characteristic is permanent storage, which it achieves by addressing two issues. The first is how to price permanent storage. Statistics show that storage costs fall dramatically every year: the average cost per gigabyte drops by roughly 30% annually. Summed over time, the total cost therefore converges to a constant, which is called the permanent storage cost, and Arweave bases its data storage fees on this converged cost. At the time of writing, 1 GB of data costs 2.45 AR (about $9.80), and the storage fee fluctuates with the price of AR.
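The convergence follows from a geometric series. A minimal sketch, assuming a hypothetical first-year cost and the 30% annual decline cited above (Arweave's actual pricing model is more detailed):

```python
# If storing 1 GB costs c0 in year 0 and the cost falls 30% per year,
# the total cost of storing it forever is a convergent geometric series:
#   total = c0 * (1 + 0.7 + 0.7^2 + ...) = c0 / (1 - 0.7)
c0 = 0.005       # assumed cost to store 1 GB for one year, in dollars
decline = 0.30   # annual decline in storage cost, per the statistic above

total_forever = c0 / decline  # closed form: c0 / (1 - 0.7) = c0 / 0.3
print(f"Permanent storage of 1 GB converges to ${total_forever:.4f}")

# The same result, summed numerically over a long horizon:
total_numeric = sum(c0 * (1 - decline) ** year for year in range(1000))
assert abs(total_forever - total_numeric) < 1e-9
```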
The second issue is how to incentivize miners to store data indefinitely. Arweave introduces a new mining mechanism: to mine a valid new block, miners must reference a randomly chosen recall block and prove they can access its data. This encourages miners to store recall blocks, especially rare ones: when a rare block is drawn, the miners that hold it have an advantage over the others. This is how Arweave achieves permanent storage.
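A simplified sketch of how such a recall mechanism can work; the real Arweave protocol is more involved, and the function names here are illustrative:

```python
import hashlib

def recall_block_index(prev_block_hash: bytes, height: int) -> int:
    # Derive a pseudo-random index in [0, height) from the previous
    # block hash, so no miner can predict future recall blocks.
    return int.from_bytes(hashlib.sha256(prev_block_hash).digest(), "big") % height

def can_mine(local_blocks: dict[int, bytes], prev_block_hash: bytes, height: int) -> bool:
    # A miner can only produce a valid block if it actually stores the
    # data of the randomly selected recall block.
    idx = recall_block_index(prev_block_hash, height)
    return idx in local_blocks

# A miner that keeps rare old blocks can mine rounds that miners
# who pruned them cannot, which rewards storing everything.
```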
Arweave thus allows long-term data storage at low cost, and thanks to blockchain technology its data is verifiable and traceable. This makes Arweave a good candidate for a trusted computation tape.
Fundamentals
Permanent storage and low cost are essential for storage-based computation. Permanent storage ensures data availability: given the same data, an off-chain computation always produces the same state. A fixed, low cost lets applications keep their consensus costs stable and avoid resource competition during network congestion, which makes them more usable.
We can create a new trusted computation model by combining the storage-based computation paradigm and Arweave.
Advantages of Storage-based Computation
First, it allows computations of arbitrary complexity; computing power is determined only by the performance of the off-chain machines.
Second, separating storage from computation reduces consensus costs; the computation costs are borne by the application operators (the off-chain computers).
Third, it is composable and shardable. Application developers can download only the data they need: when multiple applications compose, operators download the data of those applications alone, not an entire GETH client.
Fourth, it is highly scalable. Lower consensus costs improve scalability, and the paradigm supports sharding of both uploads and downloads, so performance is limited only by network bandwidth.
Fifth, it imposes no language requirements. You only need to upload the target program's source code and its serialized inputs to the blockchain in advance, as sketched below.
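A minimal end-to-end sketch of the paradigm in Python. The storage layer is mocked with a dictionary; a real deployment would use Arweave transactions, and every name here is illustrative rather than part of any actual implementation.

```python
import json

# Mock storage layer standing in for a permanent-storage chain like Arweave.
chain: dict[str, str] = {}

def upload(tx_id: str, data: str) -> None:
    chain[tx_id] = data

# 1. The developer uploads the program source and the serialized inputs.
upload("program",
       "def apply(state, tx):\n"
       "    state[tx['to']] = state.get(tx['to'], 0) + tx['amount']\n"
       "    return state")
upload("inputs", json.dumps([{"to": "alice", "amount": 5},
                             {"to": "bob", "amount": 3}]))

# 2. Any party downloads both and replays the inputs to derive the state.
namespace: dict = {}
exec(chain["program"], namespace)  # load the stored program
state: dict = {}
for tx in json.loads(chain["inputs"]):
    state = namespace["apply"](state, tx)

# Every honest replayer reaches the same state.
assert state == {"alice": 5, "bob": 3}
```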
Our research on rollups and ETH2.0 points in the same direction: the ultimate goal appears to be off-chain computation.
So, if a program is unambiguous and its inputs and outputs live in deterministic storage, its results must be deterministic.
The storage-based computation paradigm is unlike anything before it, and it may take time for the public to accept and recognize it. But I believe it is a great trustworthy computation paradigm, and the one that comes closest to a Turing machine.