Starting Full-Time Work on the Causevest Protocol!

Hi everyone. The transaction simulator has been running for some time, and we’re glad we built it, because it has uncovered bug after bug :sweat_smile:. Once the system is stable and running smoothly we want to launch the CLI testnet. We’re hoping that will be around the start of May or June.

Looking forward to sharing it with you!


A bit of a belated update, but work on the transaction generator and the issues it has uncovered is continuing. It’s taken longer than we had hoped, but we hope the CLI testnet is only a few weeks away. A lot of issues have been fixed, and the system is a lot more stable than when we started. It’s coming soon, everyone!


It has been a summer of hard work here, and a lot has gotten done. The transaction generator uncovered even more issues, but after a lot of fixing, the system appears stable. Fast sync has been upgraded to handle bigger loads and should be capable of quickly syncing to a mainnet-sized chain. Nominee history has been streamlined to take up a lot less space and to show only the data most relevant to users. A state cache has been implemented that makes full syncing and normal operation much easier on your disk’s I/O.
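
For anyone curious what that last item means in practice, here is a minimal sketch of the idea behind a state cache (illustrative only - the names and types are hypothetical, not our actual code): a size-bounded, in-memory layer in front of the disk-backed state store, so repeated state lookups during a sync don’t each hit the disk.

```go
package statecache

import "container/list"

// Store is a stand-in for whatever persists state entries on disk.
type Store interface {
	Get(key string) ([]byte, error)
	Put(key string, value []byte) error
}

type entry struct {
	key   string
	value []byte
}

// Cache keeps recently used state entries in memory so repeated
// lookups during syncing and normal operation avoid extra disk I/O.
type Cache struct {
	store Store
	max   int
	order *list.List               // most recently used at the front
	items map[string]*list.Element // key -> element in order
}

func New(store Store, max int) *Cache {
	return &Cache{store: store, max: max, order: list.New(), items: make(map[string]*list.Element)}
}

// Get returns a cached value if present, otherwise reads from disk
// and caches the result.
func (c *Cache) Get(key string) ([]byte, error) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*entry).value, nil
	}
	val, err := c.store.Get(key)
	if err != nil {
		return nil, err
	}
	c.add(key, val)
	return val, nil
}

// Put writes through to disk and updates the cache.
func (c *Cache) Put(key string, value []byte) error {
	if err := c.store.Put(key, value); err != nil {
		return err
	}
	c.add(key, value)
	return nil
}

func (c *Cache) add(key string, value []byte) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&entry{key, value})
	if c.order.Len() > c.max {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}
```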

Public testnet is on its way soon. We’re excited to be able to share our hard work with you!


Hello Everyone,

The road to testnet has been rougher than we anticipated, but it is nearing completion. The transaction generator has continued to uncover a lot of rough edge cases that have required smoothing. However, we are mostly focused on optimization now and hope to have a testnet sometime in the next month.


Hello again everyone!

A lot of work has been going on behind the scenes here. We’ve been getting closer and closer to testnet, but the team wants to make sure the system is as stable as possible before we bring more people into the network. We want to make sure the testnet gets valuable data and doesn’t just flounder, so we’ve been working on stability improvements over the last few months. We hope to have the testnet early next year.


Another Update:

A lot of work has been done on improving node stability, with a focus on syncing stability. The transaction generator has been running and we’ve made a lot of backend improvements. There are still a couple of issues that need to be taken care of, but we’re well on our way to the first CLI testnet - it looks like it should be ready by mid-February. We’re really excited to get the community involved and have something people can touch and play with.


Hello Everyone,

It’s been a couple of months of hard work but we’re getting closer and closer to the testnet launch - we’re almost there!

These past couple of months have been focused on backend stability - in particular fixing a longstanding issue that could cause all of the nodes in the network to get stuck if a single node broadcast a false header height that was very high.

To go into more detail: to do a successful fast sync, the node must first download all of the block headers for the chain, then use the litesync algorithm to verify that those headers have been finalized by two-thirds of the finality generator set.

However, the node was doing these two tasks sequentially. This meant that if a peer advertised that it had a very large number of headers, the node would try to download them all first, which could tie it up for an arbitrarily long time.

These headers would never be able to pass verification, but that is not much help if the node never reaches that step. Obviously the node should alternate between downloading new headers and verifying them; however, this change required a major rewrite of the lite sync verification logic to allow the restarts needed to verify the chain in pieces instead of all at once.
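
To make that concrete, here is a rough sketch of the interleaved approach (simplified, with hypothetical names - not our actual implementation): the node pulls a bounded batch of headers, runs the two-thirds finality check on that batch, and rejects the peer as soon as a batch fails, so a peer advertising an enormous header height can only waste one batch worth of work.

```go
package fastsync

import "errors"

// Header, FinalityProof, and Peer are stand-ins for the real types.
type Header struct{ Height uint64 }
type FinalityProof struct{ Signers, SetSize int }

type Peer interface {
	// FetchHeaders returns up to max headers starting at height `from`,
	// together with the finality proofs covering them.
	FetchHeaders(from uint64, max int) ([]Header, []FinalityProof, error)
}

const batchSize = 512 // bound on how much work one peer can make us do per round

// finalized reports whether a proof carries signatures from more than
// two-thirds of the finality generator set.
func finalized(p FinalityProof) bool {
	return 3*p.Signers > 2*p.SetSize
}

// SyncHeaders interleaves downloading and verification: each batch is
// verified before the next one is requested, so an attacker advertising
// an absurdly high header height cannot tie the node up indefinitely.
func SyncHeaders(peer Peer, from, target uint64) ([]Header, error) {
	var chain []Header
	for height := from; height <= target; {
		headers, proofs, err := peer.FetchHeaders(height, batchSize)
		if err != nil {
			return nil, err
		}
		if len(headers) == 0 {
			return nil, errors.New("peer returned no headers below its advertised height")
		}
		for _, p := range proofs {
			if !finalized(p) {
				// Reject the peer immediately instead of downloading the rest.
				return nil, errors.New("batch not finalized by two-thirds of the generator set")
			}
		}
		chain = append(chain, headers...)
		height = headers[len(headers)-1].Height + 1
	}
	return chain, nil
}
```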

We’re just finishing up with this change and should be ready for testnet soon!


Hey Everyone,

Work has been continuing at a strong pace. The issue with headers-first fast sync has been fixed and tested. The node now alternates between downloading headers and verifying proofs that those headers are correct, which should prevent an attacker node from causing issues in the network by advertising a very large block height.

Now we are finishing up having fast sync warp to the highest finalized checkpoint instead of the chain tip. The backend logic of this change is complicated, but we should have it done in a few days. This will close the final known attack against the fast sync algorithm: since there was no way to verify the chain state above the highest finalized checkpoint, an attacker node could have sent a bad chain to the fast-syncing peer.
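
As a rough illustration of the difference (hypothetical types and names, not our actual code), the warp target is chosen from the finalized checkpoints rather than from whatever tip a peer advertises, because state above the highest finalized checkpoint cannot be verified yet.

```go
package warptarget

import "errors"

// Checkpoint is a stand-in for the real checkpoint record.
type Checkpoint struct {
	Height    uint64
	Finalized bool
}

// WarpTarget picks where fast sync should warp to: the highest
// finalized checkpoint, never the (unverifiable) chain tip a peer
// advertises. Everything above the target is then synced and
// verified normally.
func WarpTarget(checkpoints []Checkpoint) (Checkpoint, error) {
	var best Checkpoint
	found := false
	for _, cp := range checkpoints {
		if cp.Finalized && cp.Height >= best.Height {
			best = cp
			found = true
		}
	}
	if !found {
		return Checkpoint{}, errors.New("no finalized checkpoint available yet")
	}
	return best, nil
}
```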

After that update, we will be reviewing the economic variables in preparation for testnet. Hope to see you there!

Everyone,

A lot of work has been done, but much of it is invisible. The node now fast syncs to the highest finalized checkpoint. We’re focusing on polishing things up for testnet and making sure fast sync works every time without issues.

Changelog:

  • Updated mempool scanning and rebroadcasting to fix issues with invalid transactions getting stuck in the mempool and with valid transactions not being properly spread around the network (see the sketch after this list)
  • Fixed issues with the litesync algorithm and restricted new validators from being created during the first two finalized checkpoints to prevent edge cases
  • Added reorg logic to the new merge-request-and-confirm header chain system. The litesync algorithm can now handle a justified checkpoint being reorged without getting into an invalid state.
  • Fixed an issue with header merge requests that could lead to the request hanging indefinitely. Header requests should now be more robust: even if the node’s database is slow or the node is busy with other work, it should still return a result before the request times out
  • Significant work is being done to iron out the final issues with fast sync to the highest finalized checkpoint. To be able to quickly generate the state leaves for a past state without forcing the node to completely revert, extra data needs to be stored. To keep this from taking up too much space, that extra data needs to be pruned once it is no longer relevant. However, the current implementation appears to create some cases where data that is still needed is being pruned. We are working to fix this next week.
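
For the mempool item above, here is a minimal sketch of the idea (illustrative only, with hypothetical names): periodically re-validate everything in the mempool, drop transactions that have become invalid, and rebroadcast the ones that are still valid so they keep propagating through the network.

```go
package mempool

// Tx, Validator, and Broadcaster are stand-ins for the real types.
type Tx struct{ ID string }

type Validator interface {
	// Valid reports whether the transaction is still valid against current state.
	Valid(tx Tx) bool
}

type Broadcaster interface {
	Broadcast(tx Tx)
}

// Rescan walks the mempool, evicting transactions that are no longer
// valid and rebroadcasting the ones that are, so valid transactions
// keep spreading and stale ones don't linger.
func Rescan(pool map[string]Tx, v Validator, b Broadcaster) {
	for id, tx := range pool {
		if !v.Valid(tx) {
			delete(pool, id) // deleting during range is safe in Go
			continue
		}
		b.Broadcast(tx)
	}
}
```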