Testnet 2: Facts, Fixes, and Future

Welcome to an exciting update from GenesysGo on our journey through Testnet 2. We've been gathering loads of data and making important strides toward creating a rock-solid platform. Here's a closer look at what we've accomplished, what we're focusing on, and a few key aspects of our network that are setting the stage for what's to come.

By the Numbers: Significant Growth
  • Over 600 participants: Hundreds of individuals and teams are collaborating towards a stronger network.
  • Over 1 PB of testable storage space: Imagine storing about 500 billion pages of standard typed text - that's how much space we’ve strung together for testing.
  • Representation from over 15 countries: Our community spans the globe, from North America to Asia and beyond.
  • Over 27 million SHDW staked across 2700+ wallets: A clear indicator of trust and investment in the future of shdwDrive.

These numbers underscore the scale and engagement we've seen in Testnet 2, painting a picture of a diverse, active ecosystem that’s growing daily.

Learnings
  • Corrected a flaw in the uptime tracking system that inflated reported uptime metrics, streamlining the data we use to measure network reliability.
  • Resolved a snag in which nodes appeared to be fully in sync yet failed at the final step of reaching consensus.
  • Tackled a rare but troublesome thread deadlock that previously took machines offline during critical update windows known as epoch boundaries.
  • Implemented a fix for a condition that allowed nodes stuck in a loop during network restarts to remain erroneously active and to continue feeding into the uptime metrics tracker.
  • Laid the foundational work for enhanced logging capabilities in future updates, aiming to make our system's logs more detailed, user-friendly, and insightful.
  • Observed acute synchronization bottlenecks, which have been scoped for fixes and placed on the roadmap.
  • Studied the bandwidth budget and behavior of individual shdwNodes under high stress, leading us to key discoveries that are now planned enhancements.

And we won’t stop there – we have additional scope planned as we progress through testnets.
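Two of the fixes above concern the same class of bug: a node that looks alive because it keeps emitting heartbeats, but is no longer making progress (for example, one stuck in a restart loop that still feeds the uptime tracker). A minimal sketch of the defensive idea, with all names and mechanics our own illustrative assumptions rather than the actual shdwDrive implementation, is to credit uptime only when a node's reported consensus round has advanced:

```python
from dataclasses import dataclass, field

@dataclass
class UptimeTracker:
    """Hypothetical sketch: credit uptime only to nodes whose
    consensus round advances between heartbeats. A node stuck in a
    restart loop keeps reporting the same round and earns no credit."""
    last_round: dict = field(default_factory=dict)
    uptime_credits: dict = field(default_factory=dict)

    def heartbeat(self, node_id: str, round_num: int) -> bool:
        prev = self.last_round.get(node_id)
        self.last_round[node_id] = round_num
        if prev is not None and round_num <= prev:
            return False  # alive but not progressing: no credit
        self.uptime_credits[node_id] = self.uptime_credits.get(node_id, 0) + 1
        return True

tracker = UptimeTracker()
tracker.heartbeat("node-a", 1)  # credited
tracker.heartbeat("node-a", 2)  # credited
tracker.heartbeat("node-a", 2)  # stuck at the same round: not credited
```

The point of the sketch is the filter, not the data structure: tying uptime credit to forward progress is one way the "erroneously active" nodes described above stop polluting the metrics.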

Refinements Underway & Enhancements on the Horizon
  • Enhanced Transport Layer Security (TLS): Underway - we are moving to s2n-tls to fortify our cryptographic posture and streamline handshake protocols for stronger, faster secure connections.
  • Persistent Connections: Underway - By implementing connection resumption, we're reducing the latency in repeated TLS negotiations, making secure communication within our network both swifter and more robust.
  • Modular Codebase: Re-organizing the codebase into more distinct modules is set to propel our development efficiencies, paving the way for swifter iterative advancements and clearer codebase navigation as we accelerate toward mainnet.
  • S3 Compatibility and Refinement: Perfecting the integration of S3 multipart object operations will bring enhanced cross-compatibility and operational agility to our storage ecosystem.
  • Upgrading Peer-Event Synchronization: Introducing a novel peer event sync architecture primed with an event broadcast system, liberating nodes from the onus of manual sync requests and embracing a higher level of network synergy.
  • Scalable Event Production: To accommodate network growth, we're evolving the shdwDrive consensus architecture to dynamically scale event processing into deterministic sets of event producers (groups of shdwNodes that work together), thus reducing the overall number of gossip events required to form consensus. This smooths network spikes on a per-node basis and improves bandwidth efficiency.
  • RPC Methodology and Documentation: Formalizing the RPC interface for user interaction that is both comprehensive and user-oriented to ensure a seamless shdwDrive experience.
  • shdwWallet: Assembling an intuitive and feature-enhanced wallet that orchestrates SHDW staking, node uptime tracking, and reward claims within a cohesive, intuitive user interface. This sets the stage to support an abundance of user and developer experiences.
  • Bandwidth Governance: Enacting shdwNode-specific bandwidth tracking architecture, laying the groundwork for precision in operational metrics, egress tracking, and equitable reward distribution reflective of cloud revenue models. Your token, your cloud.
  • Database Architecture Refinement: We're advancing our database systems to manage dual needs efficiently: the high-speed transaction processing required by our ledger and the sophisticated snapshot and compression demands for our historical data. The goal is a seamless data flow for current transactions while ensuring that past network states are preserved with precision and efficiency.
  • Dedicated Snapshot Port: We're setting up a special port just for snapshots, making sure that saving and restoring our network's history doesn't slow down the day-to-day traffic.
  • Expedited Node Catch-Up Mechanisms: Building gossip mechanisms to enable nodes that lag beyond one epoch to swiftly recalibrate and synchronize, improving network stability.
  • Metric Aggregation: Refining metrics collection by condensing them into aggregated logs, reducing network telemetry overhead to a coherent and optimized stream.
  • shdwNode Command-Line Interface (CLI): Crafting a CLI to consolidate node operations, from system configurations to real-time status insights, into a singular, potent toolset. Phasing out config files in favor of direct CLI parameters, promoting an uncluttered and deterministic environment for node configuration.
  • Automated Binary Deployment: Implementing GitHub Actions to automate the build process of new network binaries, ensuring node operators can deploy upgrades without interruption.
  • shdwDrive Lexicon: Instituting a consistent nomenclature throughout the shdwNode codebase, potentially underpinned by a comprehensive glossary for clarity.
  • shdwAuditing Framework: Introducing a robust audit system to validate data integrity proofs across nodes, elevating trust through verification that no bytes have been altered from the originals.
  • Data Repair Protocols: Once foundational network improvements land, deploying a well-defined data repair mechanism to address aberrations in data storage, fortifying network resilience and data integrity.
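To make the "Scalable Event Production" idea above concrete: if every node can derive the same producer grouping from shared state (such as the epoch number), no extra gossip rounds are needed to agree on who produces events. The sketch below is a hedged illustration of that general technique - the function name, seed format, and grouping rule are our assumptions, not the actual shdwDrive design:

```python
import hashlib

def producer_group(node_id: str, epoch: int, num_groups: int) -> int:
    """Deterministically map a node to an event-producer group for an epoch.

    Every node computes the same assignment locally from shared state,
    so the grouping requires no coordination messages at all.
    """
    digest = hashlib.sha256(f"{node_id}:{epoch}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_groups

# All nodes derive an identical grouping without exchanging messages:
nodes = [f"node-{i}" for i in range(10)]
groups = {n: producer_group(n, epoch=42, num_groups=3) for n in nodes}
```

Because the mapping is a pure function of public inputs, any node can also verify which group a peer claims to belong to, which is what lets a scheme like this cut down on per-consensus gossip.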

These targeted improvements reflect our commitment to elevating the network's quality and robustness. By methodically addressing these areas, we're not just patching up the present; we’re paving the way for an even more powerful and reliable decentralized storage solution.
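The shdwAuditing framework described above comes down to verifiable integrity proofs: confirming that no bytes have been altered without shipping whole objects around. One common building block for this is per-chunk content hashing, sketched below; the chunk size, function names, and challenge flow are illustrative assumptions, and the actual proof system shdwDrive uses may differ substantially:

```python
import hashlib

def chunk_digests(data: bytes, chunk_size: int = 4096) -> list:
    """Split stored data into fixed-size chunks and hash each one.

    An auditor holding these digests can challenge a node for any
    single chunk and verify it without retrieving the whole object.
    """
    return [
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    ]

def verify_chunk(expected_digest: str, chunk: bytes) -> bool:
    """Check one returned chunk against its recorded digest."""
    return hashlib.sha256(chunk).hexdigest() == expected_digest

original = b"x" * 10000
digests = chunk_digests(original)
# An honest node's chunk passes; a single tampered byte fails the check.
assert verify_chunk(digests[0], original[:4096])
assert not verify_chunk(digests[0], b"y" + original[1:4096])
```

Per-chunk digests keep audit bandwidth small: a challenge touches one chunk, not the full file, which is what makes continuous verification across many nodes practical.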

Learnings From Network Scale

Our network is operating with impressive scope and vitality for a testnet. Our review of many pivotal elements of the consensus core reveals substantial stability in key operational areas, which fills us with confidence. The strategic adjustments set for our gossip systems, synchronization processes, and bundle propagation are earmarked for early refinement, signifying a sequence of forthcoming version updates and system restarts.

Given the significant scale achieved by the testnet, we stand poised to:

  • Simulate mainnet conditions with greater efficacy, providing a rich and accurate rehearsal for the shift to full-scale operations.
  • Identify additional edge cases more frequently, an advantage tied to our expanded network, allowing us to optimize against potential issues preemptively.
  • Create and navigate real-world chaos and network disturbances with increased capability, a vital step in stress-testing our system's resilience.
  • Understand the nuances of geographic distribution on performance with deeper clarity, ensuring our network thrives across diverse environments.
  • Accelerate our learning around reward management structures and shdwOperator payouts, ensuring a fair and motivating system for those running our network.
  • Enhance network economics more swiftly, moving towards a more value-driven ecosystem for all stakeholders.

The expanded scale of Testnet 2 significantly boosts our insights and capabilities. As we progress, we're not just staying on course but also improving our ability to learn and iterate quickly.

Navigating Challenges and Forging Ahead

Amidst the continuous improvements and rigorous testing within Testnet 2, it's important to clarify the unique strengths and areas for growth that we've observed in the shdwDrive core consensus.

D.A.G.G.E.R. processes thousands of transactions every second, showcasing the network's capacity to handle heavy traffic efficiently and adapt to changes without delays. While this paints a picture of a system purring along, it's also true that we've encountered some challenges – expected hiccups on the road towards optimizing the technology.

However, the beauty of a testnet is that it serves precisely to unearth these issues. By proactively stressing the network and probing its limits with controlled adversarial tests and scaling operations, we're uncovering and resolving the edge cases that lead to disruptions. This proactive approach ensures that the 'critical systems,' or the essential components of our network that ensure its stability and reliability, remain solid. While we actively pursue these tests to improve resilience, we've noticed that the underlying network has robust baseline stability and runs without major issues when left to its own devices.

The consistent operation, despite intentional stressors, speaks volumes about the resilience built into the shdwDrive v2 consensus protocol. This robust core becomes particularly apparent when considering properties like an asynchronous consensus, leaderless coordination, and a gossip-based communication system that doesn't buckle under pressure.

So, while it's accurate that our journey through Testnet 2 has been marked by various challenges, each one has been an invaluable lesson. Each iteration, each fix applied, and each hiccup navigated brings us closer to the ultimate goal: a mainnet capable of delivering the reliability, scalability, and efficiency expected from a groundbreaking decentralized platform.

Thank You shdwOperators

Our journey wouldn’t be what it is without the incredible participation of our shdwOperators. Your efforts yield valuable information and experiences shaping the future of shdwDrive. We're grateful for your patience and are excited about the additional improvements to come as we continue to fine-tune the network together.

Conclusion

Stay connected with our blog for upcoming developments and announcements. Follow us on Twitter/X or hop into Discord to stay in the loop on the latest updates.

Just as Bitcoin was created to decentralize the financial infrastructure monopolized by big banks, SHDW was created to decentralize the cloud infrastructure monopolized by big tech. This fundamental principle marks the cornerstone of our ethos at shdwDrive. We are unwavering in our commitment to ensure that the power to store, manage, and monetize data remains in your hands—the hands of the users, not hidden away in the silos of a few dominant entities.

On this journey, we all play a role in establishing a digital domain that's both empowering and equitable—a place where the 'Your Token, Your Cloud' philosophy resonates not merely as an ideal but as a tangible reality for everyone.