D.A.G.G.E.R. Versus Tendermint

Introduction

Welcome to the ultimate consensus showdown where we pit the dynamic upstart, GenesysGo's Directed Acyclic Gossip Graph Enabling Replication Protocol (D.A.G.G.E.R.), against the seasoned veteran, Tendermint. It’s a face-off that might make you wonder, “What’s an innovator like D.A.G.G.E.R. doing in the ring with a nine-year-old stalwart like Tendermint?” Well, put simply, it's all about relevance and the new “data-availability” paradigm. Despite its age, Tendermint continues to be the go-to for newer networks that are comfortable with an established, out-of-the-box generic consensus solution rather than rolling up their sleeves to craft one from scratch.

In this tech tussle of consensus protocols, both combatants don’t just duke it out in the abstract—they’re the muscle behind some popular applications. For D.A.G.G.E.R., shdwDrive stands as its champion in the storage arena, flexing the $SHDW token. In the opposite corner, Tendermint underpins the modular amalgam that is Celestia, which is populating the blockchain space with $TIA tokens, carries the banner of the Cosmos SDK, and opted to build on a fork of Tendermint called CometBFT.

So, grab your ringside seats as we turn our attention to the core discussion at hand. From high-level goals to the foundational technology that powers these protocols, let's take a dive and sift through the properties that differentiate D.A.G.G.E.R. from Tendermint (along with their applications). We will transition from the spectacle to a more granular examination of how each contender operates, focusing on the practicalities of decentralized system design and how modern purpose-built consensus measures up against more generalized, old-school approaches. Prepare for an insightful exploration into the innovations and complexities that define these frameworks.

This article aims to provide a comprehensive comparative analysis of two fundamentally different approaches to consensus protocol design in decentralized systems: the purpose-built, specialized D.A.G.G.E.R. that powers shdwDrive with a laser-focus on storage, data-availability, and modern technology, contrasted against the generalized engine of Tendermint, which underpins a suite of applications including Celestia. We will examine the technical nuance that distinguishes these approaches, making a case for the targeted efficiency of a bespoke system like D.A.G.G.E.R. + shdwDrive over a more commoditized solution like Tendermint + Celestia.

Overview

Both D.A.G.G.E.R. and Tendermint are Byzantine fault tolerant (BFT) consensus protocols capable of reaching agreement in distributed systems in the presence of Byzantine failures. However, they take different approaches to the consensus problem. D.A.G.G.E.R. was developed by GenesysGo and is designed to be an efficient, decentralized consensus protocol. It uses a directed acyclic graph (DAG) structure to reach consensus in an asynchronous, leaderless manner. Tendermint uses a classical BFT algorithm with rounds of leader election, representing the almost decade-old approach to solving security and block finality.

Key properties of D.A.G.G.E.R.
  • Asynchronous consensus - does not require synchronized clocks between nodes
  • Leaderless - all nodes participate equally without leader election
  • Gossip-based - nodes sync by gossiping with peers
  • Highly bandwidth efficient - graph structure reduces communication overhead
  • Byzantine fault tolerant - can tolerate up to 1/3 Byzantine nodes
  • Probabilistic finality - events gain confidence over time
  • Scalability - design allows for high transaction throughput, making it suitable for large-scale applications.
  • Dynamic Membership - allows for dynamic changes in the network participants, enhancing its flexibility.
  • Implicit Voting - voting mechanism is implicit, meaning votes are derived from the connectivity of the nodes in the graph, reducing the communication overhead.
  • Fork-free Operation - Design avoids forks, streamlining consensus
  • Operator Indexing - assigns unique indices to operators, facilitating shard distribution
  • Auditing Mechanism - Nodes audited for data integrity, ensuring compliance and reliability
  • Self-Healing Network - Features intelligent node communication that can adapt and recover from network partitions or failures.
  • Testnet Phase 1: Sustained TPS - 5,000
  • Testnet Phase 1: Peak TPS - 20,000 (no data loss under 4Gbps sustained data loads)

Key properties of Tendermint
  • Synchronous Round-robin leader election - validators take turns proposing blocks
  • Three-Step Consensus - Structured proposal, pre-vote, pre-commit messaging ensures coordinated consensus.
  • Deterministic consensus - guarantees safety with +2/3 honest voting power
  • Instant finality - committed blocks are irreversible
  • Locking Mechanism - Enforces transaction ordering and safeguards network safety.
  • Timeouts - prevent blocking, ensure progress post-GST (Global Stabilization Time)
  • Evidence handling - malicious behavior is detectable and publishable, enabling network self-policing.
  • High communication overhead - extensive messaging between validators
  • Tightly coupled consensus - requires precise timing between nodes
  • Fork accountability - validators lose stake if they commit conflicting blocks
  • State Machine Replication - Guarantees that transactions are reliably replicated across all nodes.
  • Application interface - modularity focused, core consensus integrates with apps using ABCI.
  • Simple Client Verification - Light clients can verify the blockchain state without needing the entire chain's data.
  • TPS - 400-600 (default settings)
  • Theoretical Peak TPS - 10,000 TPS (w/ ~20% transaction loss rate)

When reviewing Tendermint (now CometBFT in the case of Celestia), nothing should come across as new or surprising given its legacy proof-of-stake consensus technology. However, when you read D.A.G.G.E.R.'s key properties, you might be asking: how does a blockchain work fork-free, how can a system be totally asynchronous and yet still order transactions properly, and how can the ledger be self-healing? Now that we have an overview of new-school versus old-school, let's do a deeper dive into the technical designs and consensus mechanisms that help answer these questions.

Consensus Approach

D.A.G.G.E.R. and Tendermint take fundamentally different approaches to distributed consensus, with major differences in leader election, block proposal, voting, and finality mechanisms.

Leader Election

A key difference between D.A.G.G.E.R. and Tendermint is in how they handle leader election during the consensus process.

Tendermint uses a classical round-robin scheme for leader election. In each round, a different validator node is selected as the leader to propose a block. The order of validators is agreed upon during node initialization. This approach relies heavily on the leader - progress stops if the leader goes offline or acts maliciously. Leaders also become a scalability bottleneck, since they must collect and validate all transactions.

In contrast, D.A.G.G.E.R. is completely leaderless. There is no concept of rounds or dedicated leader nodes. Instead, each operator participates equally in the consensus process, which enhances decentralization, fault tolerance, and parallelism. This is accomplished through the use of a directed acyclic graph (DAG), where each event in the graph serves as a vote for multiple blocks. Unlike other systems that must propagate blocks and then transmit messages to vote on and finalize each block, votes for a particular block in D.A.G.G.E.R. are derived from the connectivity of the nodes in the graph and the contents of a small amount of metadata appended to the nodes. Graph data-structures (like DAGs) are not new in and of themselves; however, never before has DAG technology been effectively integrated in a way that weaves together ledger state, transaction ordering, erasure coding, and membership management, all while preserving a permissionless, leaderless network.

D.A.G.G.E.R.’s approach significantly reduces the bandwidth requirements of the system and eliminates the need for a designated leader, thereby avoiding the potential bottlenecks and points of failure associated with such a system.

Block Proposal

Related to leader election is the mechanism for proposing blocks during consensus. Again, Tendermint and D.A.G.G.E.R. differ significantly.

In Tendermint, the elected leader for each round is solely responsible for collecting transactions from the mempool, ordering them, bundling them into a block, and broadcasting this block to be voted on by validators. This concentrates substantial power and responsibility on the leader. If the leader equivocates or goes offline, the entire consensus process stalls until the next round. If the leader is malicious, their turn as proposer hands them a prime window in which to act.

In D.A.G.G.E.R., block proposal is decentralized across all nodes. The old-school approach of leader-based block proposals is a relic of the past.

Each node (or operator) constructs a series of events independently, which can include a block of transactions. Events are composed of several items (a minimal sketch follows the list below):

  • The signature of the event's parent, if it exists (ed25519 signature)
  • The signature of the event's self-parent, if it exists (ed25519 signature)
  • A payload (e.g., a block of transactions)
  • A timestamp assigned by the operator (signed integer)
  • The operator's identity (ed25519 public key)
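
To make the event anatomy concrete, here is a minimal Python sketch of what such a record could look like. The field list mirrors the bullets above; every name, the serialization shortcut, and the signing flow are illustrative assumptions rather than D.A.G.G.E.R.'s actual implementation:

```python
# Hypothetical sketch of a D.A.G.G.E.R.-style event; field names are illustrative.
from dataclasses import dataclass
from typing import Optional

from nacl.signing import SigningKey  # pip install pynacl (ed25519 signatures)

@dataclass(frozen=True)
class Event:
    parent_sig: Optional[bytes]       # ed25519 signature of the event's parent, if it exists
    self_parent_sig: Optional[bytes]  # ed25519 signature of the event's self-parent, if it exists
    payload: bytes                    # e.g. a serialized block of transactions
    timestamp: int                    # operator-assigned signed integer
    operator: bytes                   # operator's ed25519 public key

sk = SigningKey.generate()
genesis = Event(None, None, b"block-0", 0, bytes(sk.verify_key))

# Chain a second event to the first; signing only the payload here is a
# stand-in for signing the fully serialized parent event.
link = sk.sign(genesis.payload).signature
child = Event(None, link, b"block-1", 1, bytes(sk.verify_key))
```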

These events are then propagated through the network via an advanced graph-based gossip protocol. The decentralized nature of this process enhances the system's resilience against censorship and increases fault tolerance. Unlike in Tendermint, where the consensus process can stall if the leader goes offline or equivocates, D.A.G.G.E.R.'s asynchronous, leaderless approach ensures the consensus process continues even if individual nodes experience issues. This is an exciting step forward in blockchain technology, one that suggests designs like Tendermint (and all the other proof-of-stake protocols like it) are antiquated.

Moreover, because each event in D.A.G.G.E.R. serves as a vote for multiple blocks, the system achieves high bandwidth efficiency. This is unlike most other systems (such as Tendermint, CometBFT, and other well-known proof-of-stake protocols) that must propagate blocks and then transmit a huge number of messages to vote on and finalize each block. In D.A.G.G.E.R., votes for a particular block are derived from the connectivity of the data-nodes in the graph-structure and the contents of a small amount of metadata appended to the data-nodes. It’s an ever-expanding, harmoniously woven, fault-tolerant ledger of truth. This means that votes are implicit - no explicit votes are transmitted over the network, further enhancing the efficiency of the system. If this approach seems new and has you scratching your head a little, that’s because it very much is, and that’s why we’re excited about it. You can learn more by reading about directed acyclic graphs and then diving into the D.A.G.G.E.R. Litepaper for a deeper understanding of how this all works.
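
For intuition, here is a deliberately simplified Python toy (not the actual algorithm, which the Litepaper specifies in full) showing how a vote can be read out of graph connectivity alone: an operator "votes" for a block merely by having the block's event in the ancestry of its latest event.

```python
# Toy illustration: derive implicit votes from DAG reachability.
from collections import deque

def ancestors(dag: dict, start: str) -> set:
    """dag maps event id -> list of parent event ids; returns reachable ids."""
    seen, queue = set(), deque([start])
    while queue:
        for parent in dag.get(queue.popleft(), []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

def implicit_votes(dag: dict, tips: dict, block_event: str) -> int:
    """Count operators whose latest event (tip) can see block_event."""
    return sum(tip == block_event or block_event in ancestors(dag, tip)
               for tip in tips.values())

# Block proposed in event "b"; 3 of 4 operator tips reach it: an implicit
# supermajority, with no vote messages ever sent over the wire.
dag = {"b": [], "x": ["b"], "y": ["x"], "z": ["b"], "w": []}
tips = {"op1": "y", "op2": "x", "op3": "z", "op4": "w"}
print(implicit_votes(dag, tips, "b") * 3 > 2 * len(tips))  # True
```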

D.A.G.G.E.R.'s graph-based and asynchronous approach to block proposal is not only a testament to its progressive design but also reveals its intrinsic optimization for storage solutions. The efficiency of its event-based consensus, coupled with the leaderless model, exemplifies design precision tailored to the operational rigors of scalable data storage, where every iota of performance and fault tolerance is capitalized.

On the contrary, Tendermint's generic block proposal system reflects its necessity to remain simple and easily refashioned when projects copy the code. The design accommodates an extensive array of front-end projects that need a “good enough” consensus underlay, and must bear the responsibility of a 'one-size-fits-all' model. This, however, comes with trade-offs, such as bottlenecks and increased latency during key consensus actions. These trade-offs are entirely unacceptable in storage-centric platforms that demand continuous high data availability and performance.

Voting

Both systems are designed to cope with Byzantine behavior (malicious or faulty nodes), although they employ different mechanisms. Tendermint uses explicit communication, while D.A.G.G.E.R. uses the graph's informational structure for voting. Here are the key points for each protocol:

Tendermint Voting:
  • Structured Rounds: The voting process in Tendermint is structured into rounds. Validators are required to participate in multiple steps: proposing, pre-voting, pre-committing, and committing. This emphasizes the multi-phase nature of the consensus, which is further slowed by the process being partially sequential.
  • Signed Votes and Broadcasting: In each round, validators indeed sign votes that indicate their agreement or disagreement with the proposed block. These are broadcast to other validators, and a supermajority of +2/3 is required for the block to be committed.
  • Finality: Once the +2/3 supermajority is reached, the proposal is considered decided and final, and subsequent blocks build upon it. Finality in Tendermint is not subject to reorganizations.
  • Network Latency Sensitivity: Network latency is a concern. If the messages (votes) are significantly delayed, the round might timeout, prompting a new round to start, potentially with a new proposer.

Tendermint employs a multi-phase voting process structured into rounds, ensuring a regimented and sequenced approach. Each validator participates by endorsing the proposed block through a verifiable digital signature, with these endorsements disseminated across the network. Should +2/3 of the validators cast their approval within a round, the block achieves immediate finality and the consensus proceeds unimpeded. Conversely, should consensus not be achieved, the process iterates with a fresh proposal in a subsequent round, possibly under a new leader's stewardship. When blocks are final they must then be propagated. Tendermint simply splits the blocks into equal-sized chunks and then gossips them to peers, rather than utilizing the more bandwidth-efficient erasure coding / fanout techniques of modern systems. Latency, therefore, is a critical factor—delays here can stretch the iterative rounds and postpone consensus.
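
As a quick numeric illustration of that threshold logic, the sketch below (simplified; real Tendermint weighs votes by stake and runs the steps per round) tallies one round against the +2/3 bar:

```python
# Simplified +2/3 supermajority check for one Tendermint-style round.
def two_thirds(power_voted: int, total_power: int) -> bool:
    return 3 * power_voted > 2 * total_power

total = 100        # total voting power
prevotes = 70      # power that pre-voted for the block
precommits = 68    # power that pre-committed

committed = two_thirds(prevotes, total) and two_thirds(precommits, total)
print(committed)   # True: both steps clear the +2/3 threshold
```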

D.A.G.G.E.R. Voting:
  • Implicit and Asynchronous Voting: D.A.G.G.E.R. does not require explicit voting messages. Instead, nodes implicitly vote through the creation and attachment of events in the Directed Acyclic Graph (DAG). This contributes to the system's efficiency and scalability.
  • Event Inclusion as a Vote: Nodes in D.A.G.G.E.R. indicate consensus by including the hash of a proposed block in their events. The protocol handles the determination of consensus based on the intertwined structure of the graph rather than explicit communication.
  • Separation of Responsibilities: D.A.G.G.E.R. structurally decouples the tasks of proposing, voting, and achieving finality. This modularity provides robustness, especially in network environments where latency and asynchrony are factors. This structural decoupling helps the network thrive in the more real-world chaotic nature of networks.

In the case of D.A.G.G.E.R., the consensus takes a more elegant and unobtrusive route, weaving voting into the construction of the DAG itself. Operators create and propagate new events, embedding the proposal's hash to implicitly indicate their acknowledgment. This innovative approach capitalizes on the graph's inherent connectivity; an embedded critical mass of recognition from over 2/3 of the nodes intrinsically signals agreement. D.A.G.G.E.R.'s method eliminates sequential communication dependencies, offering increased resilience to the network synchrony challenges faced by “weakly” synchronous designs such as Tendermint (or any protocol that succumbs to moments of sequential execution).

Finality

Tendermint is known for offering instant finality, hinging on a synchronous form of communication among validators. Such a model promises certainty—once a block is committed, it is final. However, as previously stated, network hiccups can stall this process, impacting the blockchain's ability to operate smoothly at all times. Here are a few nuances to consider regarding Tendermint:

  • Instant Finality: Tendermint operates on a Byzantine Fault Tolerance (BFT) consensus mechanism that provides “instant” finality under normal operation. This means that once a block has been agreed upon by a supermajority of validators and committed to the blockchain, it cannot be changed or reverted—transactions within the block are final.
  • Network Delays: Tendermint operates under the assumption of partial synchrony, which means there is an assumed upper bound on network delay (even if this bound is not known in advance). Thus, significant unexpected delays can affect the consensus protocol's ability to commit new blocks in a timely manner. Furthermore, the network cannot scale without also amplifying this susceptibility to hiccups.
  • Effect of Network Hiccups or Delays: Because Tendermint assumes some degree of synchrony, it is indeed the case that significant network delays can affect the process. If a validator node does not receive messages within a certain timeframe or if a network partition occurs, validators might not be able to finalize a block in the expected time, potentially causing delays until the network recovers or the validators come back online. However, this does not mean the blockchain ceases to operate; rather, the commitment of new blocks is delayed. Timeout and locking mechanisms help Tendermint handle this situation. Tendermint prioritizes safety over liveness, a design choice that rules out a fully asynchronous existence.

D.A.G.G.E.R. brings a different approach to finality through a well-designed asynchronous consensus algorithm. Finality in D.A.G.G.E.R. is identified by the 'Shadowy Council', a mechanism involving a specific grouping of events in the consensus. An event is finalized when all member events within its “Shadowy Council” confirm its “visibility” within the network, granting it consensus order and solidifying its place in the chain without the need for stringent timing protocols.

Emphasizing the strength of its Byzantine fault tolerance, D.A.G.G.E.R.’s consensus protocol ensures resilience, robustness, and consistent system behavior even in the face of erratic network conditions or nefarious actors. Moreover, the brilliance of D.A.G.G.E.R.'s implicit voting system—where votes are inferred from the interconnected event graph and associated metadata—increases bandwidth efficiency, as there is no longer a requirement for explicit vote messages to traverse the network.

This innovative leap in the finality process contributes to D.A.G.G.E.R.'s agility and fortitude as a consensus mechanism. Unlike Tendermint's instant-but-time-sensitive finality, D.A.G.G.E.R.'s leaderless and time-indifferent model advances a finality that is robust against the unpredictability of distributed network environments. The design choices inherent to D.A.G.G.E.R.'s consensus afford it strategic advantages, particularly relevant to shdwDrive's ambition of unfaltering decentralized storage services.

Through this lens, while Tendermint provides a more rigid framework that works better the more ideal the network conditions are, D.A.G.G.E.R. embraces the inherently chaotic nature of real-world networks. It's this embrace that enables D.A.G.G.E.R. to cast a wider net of reliability, providing a finality that can persist undeterred and effectively support the demanding requirements of storage infrastructure. Let’s take the concepts of proposing, voting, and finalizing blocks and translate them to performance metrics.

D.A.G.G.E.R. Metrics

The tests were conducted across 14 geographically distributed nodes, including independent operators participating in Testnet phase 1. Locations included Dallas, Chicago, New York, Vancouver, and London. The following data was collected from a Wield node with an AMD EPYC 7502p (32 cores @ 2.5 GHz), 256 GB memory, 2 x 3.8 TB NVMe, and 2 x 25 Gbps network cards:

  • Time to download 1GB snapshot: ~1-2 seconds
  • Time to unpack and catch up to cluster during sustained tps: 4-7 seconds
  • Time to sync a block across network: 30ms-300ms
  • Time for block validation (time it takes for a node to actually process and verify a block's contents once received): sub-500 nanoseconds to 20ms, depending on latency and TPS throughput
  • Time to finality (this includes transactions, blocks, and bundles as a transaction finalization means a block is finalized, which means a bundle is finalized, which also means it's reached a majority of nodes): as low as 70ms, ~273ms on average
  • Throughput sustained - 5,000 tps (under node join/leave and 4Gbps file upload throughput)
  • Peak TPS tested - 20,000 TPS (with zero data loss and 4Gbps throughput)

Tendermint Metrics

From (A paper review: The design, architecture, and performance of the Tendermint Blockchain Network)

  • Time to commit a block: ~2.53 seconds
  • Time to vote and finalize: ~1-2 seconds.
  • Total time to construct a block (default settings): ~1.5-3 seconds. 
  • Throughput real world:  400-600 TPS

From (Analyzing the Performance of the Inter-Blockchain Communication Protocol)

  • Throughput theoretical (ideal network conditions): 10,000 tps (with ~20% loss rate)

Fault Tolerance

A key requirement for any consensus system is an ability to tolerate Byzantine faults. Tendermint and D.A.G.G.E.R. both meet this bar, but accomplish it in different ways.

Tendermint

Tendermint is proven to guarantee safety as long as the voting power controlled by honest validators exceeds 2/3. Safety is ensured through lock change proofs which track validator stakes. If validators controlling >1/3 of stake act maliciously, they can violate safety by causing forks. Tendermint disincentivizes this through slashing - validators lose stake if they violate the protocol. Tendermint of course requires a partially synchronous (slower-moving) network to provide these guarantees. Delayed blocks or votes can undermine security assumptions.

In the case of network partitions, some nodes might be cut off from the rest of the network. Tendermint's consensus can tolerate up to 1/3 of Byzantine nodes, which includes nodes that might be unresponsive due to a network partition. However, if the partition affects more than 1/3 of the nodes, the network will not be able to commit new blocks until the partition is resolved.
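
Both protocols inherit the classic BFT sizing rule: tolerating f Byzantine nodes requires n >= 3f + 1 participants in total. A short Python check makes the bound concrete:

```python
# Classic BFT bound: faults tolerated must stay strictly below one third.
def max_faults(n: int) -> int:
    return (n - 1) // 3  # largest f satisfying n >= 3f + 1

for n in (4, 7, 10, 100):
    print(n, max_faults(n))  # 4->1, 7->2, 10->3, 100->33
```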

Tendermint uses a locking mechanism to ensure consistency in the blocks being proposed. Validators "lock" on a proposed block and will not contribute to the advancement of the consensus for a different block in the same height, unless they see proof that a +2/3 majority of other validators has done so. This helps prevent forks and contributes to the network's ability to recover from transient failures. This is a well-known design challenge in the older proof-of-stake systems because they have to handle forks. It’s an age-old problem that has persisted for over a decade. Therefore, we designed D.A.G.G.E.R. such that it does not have forks. 

D.A.G.G.E.R.

D.A.G.G.E.R. can tolerate up to 1/3 Byzantine voting power while ensuring consensus safety. No set of nodes less than 1/3 of the network can cause irreversible forks.

Additionally, D.A.G.G.E.R. provides probabilistic safety guarantees. The risk of reversal decays over time as events get buried deeper in the DAG structure, quickly and asymptotically approaching zero. The result is that D.A.G.G.E.R.'s guarantees hold even under partial synchrony or totally asynchronous network conditions. Liveness depends solely on individual node fault rates. Therefore, for the honest and properly operating two-thirds majority of the network, both liveness and security remain in place despite instances of a global asynchronous state.

Furthermore, D.A.G.G.E.R.’s consensus does not demand explicit trust among nodes due to the way it handles conflicting information. Malicious or faulty behavior leads to an automatic expulsion of offenders, reinforced by indisputable cryptographic evidence. This self-cleansing, self-healing, decentralized approach minimizes reliance on trust assumptions while safeguarding the network. This fault detection and self-healing “on the fly” is an entirely novel element of D.A.G.G.E.R.'s consensus design.

In summary, both systems are designed to resist Byzantine faults up to a third of the network. Tendermint focuses on maintaining a certain threshold of operators and deterring misbehavior through economic incentives and slashing, with a demand, albeit a bottlenecking demand, for timely communication. In contrast, D.A.G.G.E.R. fosters a trust-minimized environment, leveraging cryptographic evidence to provide indisputable proof of malfeasance. This leads to the automatic expulsion of offenders, ensuring the system's resilience. This resilience is further bolstered by D.A.G.G.E.R.'s unique graph structure and the existence of economic incentive structures ($SHDW) for nodes which are adapted to maintain integrity even amidst unpredictable network conditions.

Scalability

As we shift focus to the critical dimension of scalability, it is clear that this trait is a defining parameter for the viability of any consensus protocol in the ever-scaling landscape of blockchain technology. D.A.G.G.E.R. emerges as a powerful challenger when compared with long-standing designs like Tendermint due to its novel approach toward scaling.

Communication Overhead

Tendermint’s design is rooted in a leader-based model, which becomes increasingly cumbersome as the network scales. This model can lead to a high communication overhead because every block that needs commitment requires a complex series of messages amongst validators for leader proposals, voting, and vote aggregation. In a worst-case scenario, Tendermint could demand a messaging overhead of O(N^2) in the number of validators, a quadratic increase that challenges scalability.

Conversely, D.A.G.G.E.R., serving as the spine for shdwDrive, prioritizes a lightweight, efficient design. Leveraging advances in hashgraph and DAG technologies, nodes within D.A.G.G.E.R.'s network are structured to interact primarily with a small subset of peers, exchanging messages that are structurally fundamental rather than gratuitous. The consensus is achieved not through a cascade of explicit votes but through the interplay of these interactions within the network's graph topology. This approach leads to a more linear increase in bandwidth demands tied directly to actual transaction volume, rather than the number of validators, which is an important distinction when considering efficient scalability.

“Weakly” Synchronous vs Asynchronous Design

Another aspect where these two consensus protocols diverge is in their alignment with either synchronous or asynchronous design principles. Tendermint’s consensus mechanism involves synchronous processes, requiring validators to engage in a carefully orchestrated sequence of events for block proposals and finality.

To be more precise, Tendermint's consensus requires each validator to send a pre-vote and pre-commit message for each block round to all other validators. In a network of N validators, this results in O(N) messages per validator, accumulating to O(N^2) messages across all validators in total for a single round. Tendermint includes mechanisms for gossiping these votes so that not every message needs to be sent directly from every validator to every other validator. Instead, they can be relayed through the network, theoretically reducing the number of overall direct connections required to disseminate all necessary information and thus reducing network load.

While this leads to a measure of predictability and instant finality, it also imposes fundamental restrictions; the speed of consensus is limited by the slowest participant due to the need for precise timing, imposing a scaling bottleneck.
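
The asymptotics are easier to feel with numbers. The toy comparison below contrasts an all-to-all vote exchange with a gossip fanout; the fanout value and rounds-to-saturation formula are illustrative simplifications, not either protocol's actual networking code:

```python
# Rough per-round message counts: all-to-all voting vs. gossip dissemination.
import math

def all_to_all(n: int) -> int:
    return n * (n - 1)  # every validator sends a vote to every other validator

def gossip(n: int, fanout: int = 8) -> int:
    rounds = math.ceil(math.log(n, fanout))  # rough rounds to reach all peers
    return n * fanout * rounds

for n in (10, 100, 1000):
    print(n, all_to_all(n), gossip(n))
# 10        90     160
# 100     9900    2400
# 1000  999000   32000
```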

D.A.G.G.E.R., on the other hand, embraces a fundamentally asynchronous design. Its consensus mechanism does not depend on timing assumptions, allowing consensus to scale with the network's natural bandwidth capacity. The absence of fixed timing requirements means that D.A.G.G.E.R.’s consensus process can be more adaptable and resilient to network latency, making it more suitable for handling larger volumes of transactions without significant performance degradation. Given the shdwDrive application, these larger volumes of transactions come in the form of data uploads, downloads, edits, in addition to the economic transactions associated with these activities. Pushing the engineering envelope in overall throughput was essential in the development of D.A.G.G.E.R., given that any application focusing on data-availability should consider its consensus core as the linchpin of high-capacity operations.

In contrast, Tendermint adopts a general-purpose legacy design that, while versatile across a range of modular applications like Celestia, inherently compromises on throughput. Its consensus mechanism, which relies on a synchronous model with predetermined timing, can become constrained under high transaction volumes and/or high node counts, potentially leading to bottlenecks. As a result, Tendermint will not reach the same level of efficiency in data-intensive environments as a modernized system like D.A.G.G.E.R., which is finely tuned for the demanding throughput requirements of shdwDrive's scalable storage services.

During Testnet phase 1 we conducted tests adding and removing large numbers of nodes to inspect latency, throughput, and synchronization. We were pleased to observe that tripling the node count had only a small impact on latencies (expected), and we did not observe a noticeable negative impact on bundle finalizations. In general, finalizations stayed within the same range and gave us further confidence in the readiness for an expanded Testnet phase 2 in the near future.

In conclusion, balancing the scales between these two protocols showcases the pragmatic benefits of the targeted, streamlined design that D.A.G.G.E.R. brings to the table for scalability. By avoiding the scaling pitfalls that come with traditional designs, D.A.G.G.E.R. targets true usability and scalability.

Data-Availability

Let’s discuss the hot topic of the “data-availability problem” by starting with a simple explanation of what it is: Consider a blockchain as a digital ledger that is maintained by a network of computers (nodes) rather than a single authority. Whenever a new batch of transactions (a block) is added to this ledger, every computer in the network updates their copy to stay in sync. For the blockchain to function correctly, all these batches of transactions need to be visible and verifiable by everyone — this is crucial for maintaining the trust and security of the system (A note on data availability and erasure coding).

Now, here's where we encounter the data availability problem. Sometimes, a participant in the network might act maliciously and try to add a block that either hides some of the transaction data or is outright fraudulent. If this data isn't available for others to check, nobody can verify if the transactions are legitimate, which could lead to inaccuracies in the ledger or even allow fraudulent activities to slip through. Think of it as someone trying to sneak a fake page into a communal accounting book, but they're hiding some of the numbers — if no one can see the full page, they might not catch the fraud.

The crux of the problem is ensuring that all the data making up these new blocks of transactions is completely accessible to anyone who needs to check it. If parts of the data are missing or hidden, it can disrupt the system because:

  • It makes it difficult for others to verify if recent activity on the blockchain is valid.
  • It can stop or delay other participants from adding new blocks, since they might need information from the hidden data to proceed.
  • It challenges the integrity of cross-chain operations, like swaps or trades, which rely on the availability of data.

Data availability is a foundational aspect of blockchain’s functionality and its promise of decentralization and trust. The entire network needs a reliable way to spot and handle situations where data is missing or hidden to keep the shared ledger open, accurate, and secure.

Just as every community relies on the keen eyes of neighborhood watch participants to maintain safety, a blockchain network depends on the vigilance of its nodes to safeguard the integrity of its data. Similarly, fostering a culture of accountability amidst a diverse array of independent actors requires careful attention to the incentives in play.

Data Oversight: Keeping your neighborhood safe

This data-availability problem (or dilemma) commonly faced by blockchain networks can be pictured through the lens of a neighborhood watch program. Consider a neighborhood watch in which the participants earn rewards for reporting alarms, and thus face an incentive dilemma around genuine alarms versus false ones. In this setting, the neighborhood watch participants (akin to nodes) who vigilantly observe and report genuine threats to community safety (akin to unavailable data) must be motivated to act with integrity. However, devising an incentive structure that rewards authentic reporting without encouraging the exploitation of the system for personal gain is a delicate balance. The crux of this quandary is to cultivate a system where the neighborhood's watchers are encouraged to stay honest and alert, ensuring their reports are valid without spawning manipulative behaviors that could ultimately undermine the program’s effectiveness.

Their job is to keep an eye out for any suspicious activity and report it to the community so that everyone can stay safe. In this neighborhood, the dilemma arises when considering how to deal with potential false alarms:

  • False Alarm Penalty: If a resident reports a false alarm and causes unnecessary panic, should they be penalized? If you set a penalty for false alarms, you might discourage residents from reporting any suspicious activity, even when it's real, out of fear of being penalized. This could lead to real threats going unreported.
  • No Consequences: On the other hand, if there are no consequences for false alarms, some residents might report trivial matters, causing disruption and wasting everyone's time. This could desensitize the community to alerts, and they may become less vigilant.
  • Reward for Reports: Should there be a reward for reporting suspicious activity? If residents are rewarded for their vigilance, this could lead to an overzealous watch program where people report every minor incident in the hope of gaining a reward, overwhelming the community's ability to respond.

In each of these scenarios, the neighborhood watch faces a challenging balance. They need to encourage genuine and important reporting without falling prey to false alarms or the creation of a perverse incentive where people report issues just to receive a reward, potentially swamping the system with false reports.

Modernizing the Neighborhood Watch

Now, how does GenesysGo's D.A.G.G.E.R. equate to this analogy? D.A.G.G.E.R. essentially puts in place a decentralized, automated neighborhood watch system, where each resident (node) has a copy of the community's security cameras (data). Because the data is distributed across many residents, a single person withholding footage (data withholding) is futile, as other residents have their own copies.

Moreover, residents are collectively motivated to maintain the watch system's health by a shared reward system—not through individual rewards for reporting. If a watch member tries to exploit the system by withholding footage, not only do they get spotted by the network, but their stake (deposit) in the community watch program is at risk. This discourages disruptive behavior while leveraging the collective group to ensure the watch program runs smoothly and efficiently. There's no dilemma about false reports because the community trust is built into the system—it's in everyone's best interest to keep the watch program honest without fearing penalties or chasing unnecessary rewards.

Solving the Data Availability Dilemma

Next we will explain with more technical depth how D.A.G.G.E.R. proactively addresses this issue to maintain a robust and transparent protocol (a safe neighborhood), but first let’s quickly visit how this is handled by Celestia’s network design through its implementation of CometBFT (a Tendermint fork) in order to draw a comparison.

Celestia's Approach:
  • Modular, Proof-of-Stake Framework: Celestia seeks a modular approach, underpinned by Proof of Stake consensus through celestia-core, with an application framework provided via the celestia-app interface.
  • Reed-Solomon Encoding: Celestia applies Reed-Solomon erasure coding over a block-data structure made up of rows and columns (essentially a table). This matrix is extended with parity data for error correction, enabling reconstruction of the full data from partial network data.
  • Resource-Efficient Data Availability Sampling (DAS): Light nodes in Celestia's network can verify data availability by sampling and validating random data chunks with their corresponding Merkle proofs, eliminating the need to download entire blocks (a minimal proof check is sketched after this list).
  • Network Scalability via Light Nodes: The data availability layer of Celestia scales as the number of light nodes performing data sampling increases, supporting larger blocks without disproportionate resource demands on light nodes.
  • Integrated Fraud Proofs: Celestia's structure allows light nodes to use Fraud Proofs for contesting and rejecting blocks with incorrectly extended data, reducing data requirements for validation.
  • Namespaced Merkle Trees: By utilizing Namespaced Merkle Trees, Celestia partitions data into namespaces, allowing applications to retrieve and validate only the data relevant to them, simplifying data management.
  • ABCI++ and Modular Communication: Celestia implements blockchain interface communication by integrating ABCI++, an updated version of the original ABCI protocol introduced when Tendermint launched, facilitating improved interaction between consensus and application layers.
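
To ground the DAS bullet above, here is a toy Python version of the single-sample core: a light client checks one random chunk against a known Merkle root. Real DAS repeats this over many samples of two-dimensionally extended data; this sketch shows only the proof verification step and invents its own helper names:

```python
# Toy Merkle inclusion check, the primitive behind data availability sampling.
import hashlib, random

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def root_and_proof(chunks: list, idx: int):
    """Return the Merkle root and the sibling path for chunk `idx`."""
    layer, proof = [h(c) for c in chunks], []
    while len(layer) > 1:
        proof.append(layer[idx ^ 1])                 # sibling at this level
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return layer[0], proof

def verify(chunk: bytes, idx: int, proof: list, root: bytes) -> bool:
    node = h(chunk)
    for sibling in proof:
        node = h(node + sibling) if idx % 2 == 0 else h(sibling + node)
        idx //= 2
    return node == root

chunks = [bytes([i]) * 32 for i in range(8)]   # 8 fixed-size data chunks
i = random.randrange(len(chunks))              # sample one chunk at random
root, proof = root_and_proof(chunks, i)
print(verify(chunks[i], i, proof, root))       # True: sampled chunk is available
```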

D.A.G.G.E.R.'s Approach:
  • Graph-Based Topology: D.A.G.G.E.R. utilizes a hashgraph and a Directed Acyclic Graph (DAG) structure, which allows for multiple blocks to be added to the ledger simultaneously. This contrasts with the linear progression of blocks in traditional blockchain systems, and enables parallel asynchronous execution.
  • Implicit Voting: The consensus in DAG-based protocols can be reached through implicit voting mechanisms inherent in the graph structure. This can make the consensus process more efficient, as it potentially reduces the need for explicit communication for voting and agreement on new blocks.
  • Auditor Nodes: D.A.G.G.E.R. incorporates specialized auditor nodes that contribute to network security. These nodes actively participate in verifying and maintaining data availability, increasing the robustness of the network. Auditors verify data integrity and detect fraud by checking proofs issued by Wield nodes, and benefit from an incentive structure for doing so.
  • Advanced Erasure Coding Schemes: D.A.G.G.E.R. utilizes the latest, most advanced erasure coding, with an improved O(n log^2(n)) complexity over characteristic-2 finite fields. This achievement results in significantly less computational overhead, greatly lowering network bandwidth use and improving the execution speed of algorithms.
  • Persistent and Redundant Data Replication: D.A.G.G.E.R.'s directed acyclic gossiping approach ensures that data isn't simply sent through the network but is persistently replicated across numerous nodes. This replication makes it far less likely that a malicious actor can withhold data long enough to game the system, as multiple nodes will already have replicated copies.
  • Direct Accountability: In D.A.G.G.E.R., nodes rely on a consensus mechanism that identifies and penalizes nodes which do not properly replicate and share data. Unlike in the fisherman scenario where responsibility for identifying faults is somewhat diffuse, D.A.G.G.E.R.'s protocol ensures that the responsibility for maintaining data availability is distributed across the network.
  • Real-time Monitoring and Verification: D.A.G.G.E.R. implements a real-time verification system where each node acts as a watchdog, continuously verifying the availability and validity of data. It reduces the singular node dependency for identifying unavailable data and instead implicates the collective network in this responsibility. This means a “neighborhood watch alert” is merely a trigger for immediate verification by multiple nodes.
  • Reduced Incentive to Withhold Data: Since the D.A.G.G.E.R. system heavily relies on widespread and almost instantaneous data distribution, the potential benefit of withholding data diminishes significantly. The design incorporates a cooperative model where the best outcome for every participant is achieved through adherence to the protocol, which includes reliable data replication.
  • Economic Incentives: To prevent the scenario where nodes must operate out of altruism, D.A.G.G.E.R. ensures that all nodes in the network that share the responsibility of maintaining data availability are economically incentivized to do so. Economic incentives in D.A.G.G.E.R. are distributed in such a way that they reward the collective maintenance of the network rather than individual acts that may or may not contribute significantly to the network's health.
  • Data Restoration Capabilities: The utilization of erasure codes embedded in the D.A.G.G.E.R. architecture means even if a node behaves maliciously and withholds certain data, the system could potentially restore the original data as long as there is a critical threshold of it still available in the network.
  • Gossiping Governance: Any data-related disputes within D.A.G.G.E.R. can be resolved through its gossiping graph structure, which rapidly cascades information throughout the network across a ubiquitous immutable graph-topology. This approach fosters a community-driven environment where dishonest behavior is immediately noticed and rectified by the collective action of other nodes.

Both platforms, though distinct in design—D.A.G.G.E.R. fixated on specialized decentralized storage and Celestia on elevating blockchain frameworks—share a fundamental necessity: the assurance of block data availability. D.A.G.G.E.R.'s architecture inherently assures this through its bespoke, cutting-edge DAG-based ledger topology, which seamlessly integrates data availability as a core function. Conversely, Celestia systematically constructs this availability through specific add-on mechanisms to the legacy framework of Tendermint.

Scalability

The data availability problem doesn't end with the mere ability to sample data – it extends to maintaining that ability effectively and efficiently as the scale of data grows. In blockchain networks, as the number of transactions and the associated data increase, simply ensuring that data can be sampled isn't enough. The system must scale to handle larger datasets without compromising performance or security. 

Here's how scaling impacts various aspects of data availability:

  • Network Bandwidth: As more data is transmitted across the network, the bandwidth requirements increase. A scalable data availability solution must ensure that even light clients (nodes with limited resources) can participate in the sampling process without needing to download the entire dataset.
  • Storage Requirements: Increased data leads to higher storage demands on full nodes, which can centralize the network if only entities with large storage capacities can participate. Scalable solutions must address this to avoid centralization.
  • Computational Resources: Larger datasets require more computational power to encode, decode, and validate. Scalable data availability solutions must optimize these processes to maintain network performance.
  • Data Reconstruction: The erasure coding scheme used must be robust enough to reconstruct data seamlessly as the volume grows, and the probability of data loss or unavailability due to nodes going offline should not increase with scale.
  • Security: A scalable data availability solution must maintain security at scale, ensuring that fraudulent data or censorship attacks are detectable and preventable even as the amount of data increases.

Both systems' approaches to the problem of scaling data availability are crucial because the ability to operate efficiently at a small scale does not guarantee the same performance at a larger scale. The effectiveness of these solutions will likely depend on their capacity to handle billions of transactions and the petabytes of data that a global-scale blockchain network may produce. For this reason we want to explore a few key ideas that we believe prepare D.A.G.G.E.R. for scale.

The New Generation of Data-Capable Design: D.A.G.G.E.R.

As we venture into the future of decentralized systems, we are witnessing a transformative approach to data handling and network efficiency. The D.A.G.G.E.R. platform represents a paradigm shift in data-capable design with its array of pioneering features that culminate in a comprehensive solution for storage and accessibility. Reflecting a confluence of improved erasure coding mechanics, seamless integration of cutting-edge data transfer protocols, and a visionary auditor architecture, D.A.G.G.E.R. is reshaping the landscape of blockchain data availability. This section delves into the intricacies of D.A.G.G.E.R.'s progressive design, examining how each of its components play a pivotal role in elevating data management to new heights of efficiency and reliability.

Elevating Data Capabilities with D.A.G.G.E.R.’s Improved Erasure Coding

D.A.G.G.E.R. distinguishes itself in the realm of data availability with its cutting-edge Reed-Solomon erasure coding, streamlined for a sleek O(n log^2(n)) complexity. This innovation not only facilitates extremely fast file and ledger processing speeds, but also strengthens data integrity across the network. It's a shining example of how GenesysGo has intricately woven data resilience into the DNA of D.A.G.G.E.R.'s sophisticated synchronization architecture.

Alongside the core functionality, shdwDrive champions this erasure coding, applying it with finesse on the client-frontend to protect user data even before it touches the network. The strategic segmentation and distribution of data underscore a commitment to efficiency and safeguarding against data loss. This reflects GenesysGo’s championing of purpose-crafted solutions for a new generation of decentralized systems.

D.A.G.G.E.R. propels this ethos further with a novel augmentation to Reed-Solomon erasure coding. Using a bespoke hybrid recursive algorithm, we improve adaptability to the unpredictable file sizes common to high-volume content delivery networks. The innovative nature of this algorithm embodies the flexible, forward-thinking design philosophy intrinsic to D.A.G.G.E.R.

Further enhancing this advanced erasure coding is the integration with QUIC—the modern standard in transport protocols—optimizing data transfer between nodes to the highest standards and staging a future of growth and scale across the network. This combination addresses packet loss with unparalleled precision, a feature not seen in traditional stacks like Tendermint's.

D.A.G.G.E.R.’s dual-pronged application of erasure coding stands out—one scheme tailored for peer-to-peer communication providing packet-loss stability, and the other for fine-tuned segmentation of client-side file data. The synergy between these two enables GenesysGo to deploy decisive, low-overhead computations for tasks such as data repair and file retrieval, which is only augmented further by the inherent efficiencies of the Directed Acyclic Graph (DAG) structure.

By ingeniously coding the very placement of data shards across the network, using a deterministic RNG and metadata nodes, D.A.G.G.E.R. exhibits a masterful command over data distribution. The architecture doesn't just streamline current processes; it anticipates future demands, ensuring that applications like shdwDrive not only perform to today’s standards but are primed to exceed tomorrow's expectations.
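
As a flavor of how deterministic placement can work, the sketch below seeds a PRNG with a shard identifier so that every node independently computes the same operator set with zero coordination messages. The hashing and sampling details are hypothetical stand-ins, not D.A.G.G.E.R.'s actual scheme:

```python
# Illustrative deterministic shard placement via a seeded RNG.
import hashlib
import random

def placement(shard_id: bytes, operators: list, replicas: int = 3) -> list:
    """Every node derives the identical operator set for a given shard."""
    seed = hashlib.sha256(shard_id).digest()
    return random.Random(seed).sample(operators, replicas)

operators = [f"op-{i}" for i in range(10)]
print(placement(b"file42/shard7", operators))   # same answer on every node
```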

It’s clear that the union of D.A.G.G.E.R. and shdwDrive benefits immensely from the most modern iteration of Reed-Solomon erasure coding within the QUIC framework. GenesysGo’s uniquely crafted hybrid erasure coding scheme further amplifies this, allowing front-end applications to achieve remarkable efficiency, dynamic file size optimization, and breakneck speeds. Collectively, these elements forge an ecosystem that not only highlights the data availability principles inherent in D.A.G.G.E.R.'s blueprint but redefines them, cementing D.A.G.G.E.R.'s role as a beacon of innovation in decentralized network technologies.

Enhancing Data Prowess Through Holistic Design

In exploring the realm of data availability, it's essential to consider not just the foundational core networks that support the applications but also the seamless integration and alignment between these applications and their underlying consensus mechanisms. To begin, we look at the April 2023 development roadmap for CometBFT (CometBFT Priorities for 2023 Q2), the consensus engine forked by Celestia from Tendermint Core, which raises valid concerns and pinpoints areas for fundamental improvement within their system regarding bandwidth overhead and fragilities. These updates point out the inherent difficulties that can arise when a generic solution strives to fit the specific needs of an evolving application like Celestia. While these challenges are a natural part of the software lifecycle and are not a mark against the protocol, they do help illuminate the strengths of a modern bespoke solution like D.A.G.G.E.R. Let's contextualize a few takeaways to highlight the advantages of a cohesive, purpose-designed system.

One of the most salient points is D.A.G.G.E.R.'s adoption of the modern QUIC transport protocol. QUIC enhances security through improved encryption, reduces connection establishment time, and enables multiplexing without head-of-line blocking. This protocol represents modern best practices for networking and provides innate benefits for decentralized applications—benefits that, at present, CometBFT’s (Tendermint fork) approach does not fully leverage.

With this QUIC adoption, D.A.G.G.E.R. implicitly acknowledges the crucial role that modern transport protocol technology plays across all facets of decentralized storage solutions. This strategic choice not only minimizes latency and maximizes throughput but also ensures that each layer of the protocol, from consensus to network communication, is aligned for optimal performance.

The CometBFT roadmap points out the need to revisit the custom P2P transport protocol that has been carried over the nine-year legacy of Tendermint—an interesting signal of the risks associated with modularity across applications (Celestia), their interfaces (ABCI), and the consensus core (CometBFT). While modularity offers flexibility, adaptability, and speed-to-market, it inherently introduces dependencies on third-party systems and the potential for disruptions during significant overhauls of any core underlay supporting the application overlays. Celestia's reliance on CometBFT means that it must navigate the changes that come with CometBFT's ongoing evolution, including any unintended repercussions on network performance or application stability.

Beyond the benefits of modular systems, the multi-layered architecture can also pose significant risks associated with third-party dependencies. Any technology stack that integrates various components from different providers to form a cohesive framework inevitably opens itself to potential vulnerabilities and risks stemming from its dependencies. This aspect becomes critical when the underlying consensus mechanism originates from a fork, as with CometBFT from Tendermint, which carries nuanced implications for the entire blockchain apparatus encapsulated by Celestia.

In the intricate Celestia stack, the CometBFT fork of Tendermint provides a vivid example of cascading dependencies. Any core update or feature enhancement in CometBFT necessitates comprehensive evaluation to estimate its ripple effects across the application interface it supports, which, in turn, integrates and impacts multiple SDKs. These dependencies, while empowering Celestia's modularity and capability to spin off new blockchains, also spiral into a complex web of contingent functionalities, where a single change can propagate unforeseen consequences throughout the entire stack. This creates a scenario where the robustness of an upper layer is perpetually at the mercy of the foundational elements' resilience and stability.

Addressing this issue requires Celestia to engage in rigorous version control, dependency management, and backward compatibility checks to ensure that foundational improvements do not compromise the integrity of the upper layers. The very strength of Celestia's modular approach introduces a persistent need for vigilance to safeguard against issues that may cascade from dependencies — issues that could potentially culminate in performance bottlenecks, compromised security, or operational failures. 

The dependency woes are amplified considering that Celestia's goal is to be a launchpad for other blockchains. Each spawned blockchain intertwines its fate with Celestia, further complicating the dependency chain. Changes at the lower levels of Celestia could necessitate time-consuming and often expensive, coordinated updates across all dependent chains, increasing the risk of network fragmentation, version incompatibility, and collective failure. All of this is not to take away from the ambitious goals set forth by Celestia, or to detract from the solutions they have delivered to the legacy blockchain space, but rather to think dispassionately about the technical debt management such an approach brings with it.

In summary, while CometBFT and Celestia’s modular blockchain design provides flexibility, modularity and extensibility, it does so at the cost of introducing a layered topology of dependencies. Each successive stratum not only inherits the features of the layers below it but also their vulnerabilities and the amplified risk of chain reaction failures precipitated by a single point of change. It becomes imperative for blockchain architects to weigh the tradeoffs of this stratified approach against the seamlessness and hermetic stability afforded by a holistic design approach like D.A.G.G.E.R. Here, a fully integrated end-to-end solution bypasses the fragility of interdependencies, streamlining innovation, swiftly incorporating modern technology and change management to provide a robust and consistently harmonious blockchain ecosystem.

For blockchain experts and enthusiasts alike, the modern features of a purpose-built system like D.A.G.G.E.R. powering shdwDrive present compelling solutions to long-standing problems in distributed systems engineering. As we scrutinize the technical evolution of both systems, it becomes apparent that a full-stack, holistic engineering approach like D.A.G.G.E.R.'s offers compelling go-to-market advantages.

Modernizing Data Sampling and Auditing

As previously discussed, D.A.G.G.E.R. employs erasure coding as part of its strategy for data storage and data availability. Erasure codes allow data to be split into shards so that redundancy can assure data durability and reliability while maintaining high space efficiency. The parameters of the erasure coding scheme are tunable to balance system priorities such as responsiveness, performance, durability, and composure. The data availability issue this addresses is ensuring that all parts of the data remain accessible, even if some nodes fail or attempt to withhold data. Not only is the data always accessible, it is also audited by Auditor nodes, which are incentivized to calculate and verify programmatically generated mathematical proofs.

Overview:
  • Erasure coding is used to encode file data into shards with redundancy. This ensures files can be recovered even if some shards are lost or corrupted.
  • Nodes perform periodic audits by requesting shard metadata and data shards from peer nodes and verifying their integrity. This detects corrupted or missing data.
  • When corrupted data is detected via a failed audit, a repair procedure is triggered. Missing shards are efficiently retrieved from peer nodes and reconstructed via Reed-Solomon error correction.
  • Requiring nodes to pass storage and bandwidth proofs prevents nodes from circumventing audits by repairing data on-demand. Rational nodes will simply store the data to avoid bandwidth costs of repairs.
  • Latency requirements on nodes ensure they cannot quickly circumvent audits by retrieving and repairing shards on-demand when audited.
  • Having nodes chosen randomly for audits, and requesting shards from other random peers, prevents collusion to fake audits.
  • The audits and repairs ensure data shards remain consistently available across nodes, mitigating data availability attacks like withholding or deleting shards.
  • Even if some nodes fail audits or go offline, the erasure-coded redundancy allows the original data to be reconstructed from the remaining shards (a toy reconstruction is sketched below).
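
The sketch below is a deliberately minimal stand-in for that recovery property, using a single XOR parity shard (which can rebuild any one lost shard); production systems such as D.A.G.G.E.R.'s use Reed-Solomon codes that survive many simultaneous losses:

```python
# Minimal erasure-coding stand-in: k data shards plus one XOR parity shard.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list:
    """Split data into k equal shards and append one parity shard."""
    assert len(data) % k == 0
    size = len(data) // k
    shards = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = shards[0]
    for shard in shards[1:]:
        parity = xor_bytes(parity, shard)
    return shards + [parity]

def recover(shards: list, missing: int) -> bytes:
    """Rebuild the shard at index `missing` by XOR-ing all the others."""
    rebuilt = None
    for i, shard in enumerate(shards):
        if i != missing:
            rebuilt = shard if rebuilt is None else xor_bytes(rebuilt, shard)
    return rebuilt

shards = encode(b"shdwDrive!!!", 4)        # 4 data shards + 1 parity shard
assert recover(shards, 2) == shards[2]     # a withheld shard is rebuilt
```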

In summary, the auditing and repair procedures, combined with erasure coding and enforced node requirements, ensure transaction data remains consistently available and recoverable in D.A.G.G.E.R.'s decentralized storage system. This provides strong guarantees around data availability and durability. The rationale behind this is twofold:

  • Economic Disincentive for Circumvention: For rational actors (operators), the cost (in terms of bandwidth) of retrieving and repairing shards solely for passing an audit is non-trivial. It is economically more sensible for them to simply store the data correctly than to engage in such behavior. This aspect ties in with the concepts of Rational Proofs of Storage, where it is assumed actors will behave in economically rational ways.
  • Prevention of Quick Circumvention: The system also has latency requirements that prevent operators from quickly repairing the data to pass audits. These requirements facilitate the detection of actors who might attempt to game the audit process by only storing data upon request. This is enforced through the mandatory bandwidth and latency capabilities of nodes, ensuring that the network remains responsive to genuine requests for data.
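
To make the audit-and-latency idea concrete, here is a hypothetical challenge exchange in Python; the message flow, deadline value, and hash check are illustrative inventions rather than the protocol's actual proof system:

```python
# Toy storage audit: verify a random shard's hash and the response latency,
# so a node cannot fetch-and-repair data on demand just to pass the check.
import hashlib, random, time

def audit(node_shards: dict, known_hashes: dict, deadline_s: float = 0.2) -> bool:
    shard_id = random.choice(list(known_hashes))       # random challenge
    start = time.monotonic()
    data = node_shards.get(shard_id)                   # node answers
    elapsed = time.monotonic() - start
    if data is None or elapsed > deadline_s:
        return False                                   # missing or too slow
    return hashlib.sha256(data).hexdigest() == known_hashes[shard_id]

shards = {"s1": b"alpha", "s2": b"beta"}
known = {k: hashlib.sha256(v).hexdigest() for k, v in shards.items()}
print(audit(shards, known))   # True for an honest, responsive node
```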

In light of these innovative measures, we are excited to look toward the future, particularly regarding the development and integration of mobile auditor nodes—a concept poised to dramatically scale out audit efficiency. Mobile auditing represents the next frontier in ensuring data integrity, harnessing the power of ubiquitous devices to further decentralize and distribute the auditing process. This will not only maximize efficiency and resilience but will also contribute to a more robust and fault-tolerant network. With mobile auditor nodes, the shdwDrive protocol embraces a future where everyone can partake in maintaining the network's integrity, essentially turning every smartphone into a guardian of data. This initiative is a testament to our commitment to staying at the leading edge of storage technology, continuously evolving to meet the demands of a decentralized world.

D.A.G.G.E.R. - A New Standard for Data Availability

As we come to the end of our exploratory match-up, we've navigated the intricacies of both D.A.G.G.E.R. and Tendermint in this friendly tech tussle. From one corner of the blockchain world to the other, innovation thrives, and today, we tilt our hats in respect to the contributions Tendermint has made to the decentralized ecosystem over its tenure.

Now, let's circle back to the essence of this comparison—a triumph of specialization and a nod to the agility of D.A.G.G.E.R. paired with shdwDrive. Together they raise their corner of the sky a little higher, heralding a fresh perspective on what it means to truly have data at your fingertips. This isn't just about being able to retrieve data—it's about redefining the term 'data availability' and setting a new, elevated standard that aims to contend for the title belt in the blockchain arena.

D.A.G.G.E.R. excels with its specialty-built nature, focusing its energy like a seasoned athlete on one goal: unparalleled data storage and accessibility. It demonstrates a level of resilience and integrity that is meticulously engineered, offering not just a platform but a new way forward for today's needs and tomorrow's aspirations.

Making use of QUIC, D.A.G.G.E.R. leads with a network handshake that's both firm and fast, ensuring data exchanges are smooth and steadfast. Its advanced Reed-Solomon erasure coding, hybrid recursive sharding algorithm, low bandwidth overhead and asynchronous dynamics are some of the cornerstones—not just of its operations—but of a bigger vision that emphasizes intelligent design in the face of decentralized data scenarios. It's a purpose-driven standard that not just competes but seeks to redefine our expectations of the blockchain space.

Conclusion

D.A.G.G.E.R., through its dedicated design, provides a strategically crafted solution that speaks to the future of data availability. It emerges as a prime example of how targeted, forward-thinking approaches can enrich the blockchain landscape.

We move forward with a new understanding, an awareness that data availability can be more than just a feature—it's a multidimensional standard, redefined by the likes of D.A.G.G.E.R. and shdwDrive. Here, efficiency, usability, and resilience are not just aspirations but tangible realities. As we conclude this showdown, it's clear that, while the spirit of competition remains friendly, the bar is being set. The challenge now? To meet—and exceed—the standard that D.A.G.G.E.R. and its shdwDrive application will usher in as we deliver decentralized technology to meet the demands of our data-rich future.

Stay tuned for our next article in the “D.A.G.G.E.R. versus” series, when we continue our toe-to-toe comparisons by taking on the popular heavyweight - Filecoin.

References:
  1. The Latest Gossip on BFT Consensus
  2. Tendermint: Consensus without Mining
  3. Analyzing the Performance of the Inter-Blockchain Communication Protocol
  4. A paper review: The design, architecture, and performance of the Tendermint Blockchain Network
  5. A note on data availability and erasure coding
  6. CometBFT Priorities for 2023 Q2
  7. D.A.G.G.E.R. Litepaper