Rick Dudley Discusses Laconic Network on The Interop

02.09.2023 | By Michael Gushansky
  • Insights
  • Product

Laconic cofounder Rick Dudley appeared on a special livestream of The Interop with host Sebastien Couture to discuss the Laconic Stack, the blockchain data problems that Laconic solves, Laconic’s novel governance structure, and how Laconic can index and verify data faster, more efficiently, and at lower cost. 

Below is a distilled transcript of Rick’s responses during the discussion.

The Future is App Chains

I think there will be millions of chains, and we'll be using a combination of rollups and meshes: not a straight linear L1, L2, L3, but meshes of rollups and attestation-publishing bridges, etc. And although we may have millions of chains, we won't have millions of massive chains. A large chain may have 100 members, and there may be one or two chains out there with 4,000 validators. But you only need a few of those in the world.

I think everything becomes an app chain. I think mainnet Ethereum ultimately becomes an app chain whose application is settling rollups, very similar to Cosmos Hub, frankly. Polkadot, Ethereum 2.0, and Cosmos Hub are all actually very similar in terms of the endgame state in the final thesis. And I don't think there will necessarily be a winner per se. I think they will each end up with curiously different properties.

Why Laconic?

The ultimate goal of Laconic is to get all of the data that a user is concerned about in the hands of that user. Not in a cloud-hosted environment, not in Microsoft, not in AWS, but in users’ actual custody. And to enable them to do all the verification themselves.

Right now, it's very difficult to extract the parts of Ethereum Mainnet data that are relevant to a Dapp's needs. It's almost impossible to synchronize a Geth node in a reasonable amount of time.

There are multiple light client protocols that have come along to help alleviate this problem, but they still don't go all the way. The Laconic Network goes the whole way: from source code to what is in the user's eyeballs, with everything verifiable. If you see a message that came to you through the Laconic Network, you can say, "I want to know which blockchain or blockchains this came from. I want to know what code generated this result. I want to know who wrote that code." We provide all of that in the Laconic Network.

Three Major Components of Laconic

There is the Laconic LLC itself, which is in the Cayman Islands. There is the Laconic Stack, which is the standalone software that anyone can run today to generate this data and the evidence that they need. And then there's the Laconic Network, which facilitates the buying and selling of data. It facilitates running these services, discovering the services, paying for services, and then making sure all of that is verifiable.

Those three components are an evolution. We've iterated on the Stack many times over at this point. Last I checked, which was recently, MakerDAO is still using an early version of that stack to this day.

If you were an intrepid developer, you could go into the Stack Orchestrator code, run it yourself, and put it into production right now. The problem is that generating this evidence is very expensive.

It's computationally very expensive, and disk I/O operations in particular are very expensive. So as a Dapp developer, when you have very few users, you can run this reasonably on a laptop. But as your app grows, or if you want to see all of the Uniswap V3 pool data, a laptop isn't necessarily going to be able to process that in a timely manner. I mean, laptops are pretty powerful, so some of them can, but maybe not all of them. At that point, you need hardware. And when you need hardware, you have this problem of, "Okay, am I going to buy hardware and rack it in a data center?" That's probably not a viable answer.

Am I going to go to AWS? Well, AWS is centralized, and there are all sorts of problems with that, censorship for instance. AWS may choose to comply with a law that I'm not legally obligated to comply with. We've seen this issue with Alchemy and Infura: these services comply with the laws in their own jurisdiction, but the Dapp developer is in a different jurisdiction.

So then you end up with this situation: "Okay, if I want to have multiple service providers actually serving this data to users, they need to be in multiple jurisdictions." And that's what the Laconic LLC solves. It's a Cayman Islands LLC. We have members in different jurisdictions, and those members contract with the end users and comply with their local laws that way.

Laconic and Cosmos

Laconic team members were also core contributors to the Cosmos SDK; we did a bunch of work on it. The data structures in Ethereum, in Cosmos, and in many other blockchains were designed to facilitate consensus, not to facilitate reading the data back out. So in those architectures, there's utility in taking the techniques we've applied to Ethereum and applying them to those other chains.

There is value and utility in taking those techniques and applying them to Cosmos SDK chains; Osmosis is an example of where it would be useful. For example, you can't have a block explorer that works across Cosmos Hub upgrades. No one's ever bothered to build one that works that way. If you built the block explorer on top of Laconic instead of directly on top of the chain, you would actually be able to provide that continuity.

Every time a Cosmos chain upgrades, it does a regenesis and restarts the chain. When you start that new genesis, people, just as a matter of convenience, don't preserve the old data, and you don't have a way of representing the irregular state change that happens during the upgrade. In the Laconic system, we have a means of doing all of those things: we can link any two arbitrary chains together, and we can represent these arbitrary state changes. We can provide that continuity as a service.

Incentive Alignment 

Because we're IPLD based, we can relatively easily take our archive and push it into Filecoin, where there can then be a clear monetization strategy for storing the data. Because we monetize the transmission of the data, which is a much easier problem to solve than the verifiable storage problem Filecoin tackles, we're providing an incentive for why someone would do that. Think about it: there are different incentives throughout the process. There's an incentive for including the transaction, and that incentive is very clear, but there's not really any incentive in any blockchain I'm aware of for why I should then send that data. Why should I satisfy a read from a user? A user asks for a read, and why do I care?

That's what Laconic is trying to solve: we're incentivizing the reading of that data. Incentivizing the reading of that data is step number two. Now we can talk about the incentives of step number three, which is the long-term persistent storage of that data. Because if you have only the incentives of Filecoin and Ethereum, you have this gap in the middle: why would I take the Ethereum data, transform it, and publish it to Filecoin? There's not really an incentive for me to do that.

Whereas with Laconic, there starts to be more of an incentive to do that, because I need to support my own read infrastructure. People will know to come to a single place to get their historic reads as well as their more recent reads, and so you'll be incentivized to charge them. There will already be an ecosystem in place where people are accustomed to paying for data. When they want old data or new data, they'll come to the same place and buy it, and that will incentivize archival storage. Right now we don't have a very good model for why archival storage persists, and it is a real mechanism design issue, actually.

Laconic and IPLD

InterPlanetary Linked Data (IPLD) is the core of our system. The first thing we do is take the Ethereum data, which could come from any blockchain or any hash-linked data structure, and convert it into an IPLD object. We then index it in that context. We store the RLP-encoded bytes, but we also store the CID (Content Identifier), the multiformat address of that object. That's how we're able to generate evidence.
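
As a rough sketch of that conversion step (illustrative only, not Laconic's actual pipeline), the snippet below takes the RLP-encoded bytes of an Ethereum block header, hashes them with Keccak-256, and wraps the digest in a CIDv1 using the eth-block multicodec; the raw bytes would then be stored keyed by that CID.

```python
# Illustrative sketch only: how RLP-encoded Ethereum bytes become a CID.
# Codec/hash codes come from the public multicodec table; helper names are
# invented for this example.
import base64
from eth_utils import keccak  # Keccak-256, the hash Ethereum already uses

ETH_BLOCK_CODEC = 0x90  # multicodec "eth-block": an RLP-encoded block header
KECCAK_256 = 0x1B       # multihash code for keccak-256

def varint(n: int) -> bytes:
    """Unsigned LEB128 varint, as used throughout the multiformats specs."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def eth_block_cid(header_rlp: bytes) -> str:
    """CIDv1 = varint(version) + varint(codec) + multihash(keccak256(bytes))."""
    digest = keccak(header_rlp)
    multihash = varint(KECCAK_256) + varint(len(digest)) + digest
    cid_bytes = varint(1) + varint(ETH_BLOCK_CODEC) + multihash
    # Text form: multibase prefix "b" = base32 lowercase, no padding
    return "b" + base64.b32encode(cid_bytes).decode().lower().rstrip("=")

# header_rlp would be the raw header bytes fetched from an Ethereum client;
# the IPLD store keeps {cid: header_rlp}, so the content address doubles as
# the evidence linking the stored bytes back to the chain.
```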

On Ethereum, you have transaction receipts and you have event messages. An event message does not prove all the way back up to the root. So when you have a set of events, which is what The Graph consumes, the way you prove an event is correct is to find the block that the event was in, rerun that whole block, and at the end see whether you get the same event you started with. Whereas if I have an account balance on Ethereum, I have a block number and I can get a proof. I don't have to recompute the whole block to figure out the account balance in that block; I just get the proof from the Ethereum client about that account balance at that block, using eth_getProof, and I can present that proof and the balance to the user.
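
For the account-balance case, that proof comes from the standard eth_getProof method (EIP-1186): the node returns the balance together with the Merkle-Patricia trie nodes linking it to the block's state root. A minimal call (the endpoint URL and account address below are placeholders) might look like this:

```python
# Minimal eth_getProof call; endpoint URL and account address are placeholders.
import requests

RPC_URL = "http://localhost:8545"

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getProof",
    "params": [
        "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045",  # account to prove
        [],        # storage slots to prove (none; we only want the balance)
        "latest",  # block the proof is anchored to (a hex block number also works)
    ],
}

result = requests.post(RPC_URL, json=payload).json()["result"]
# result["balance"] is the balance at that block, and result["accountProof"] is
# the list of trie nodes that hash up to the block's stateRoot, so a client can
# check the balance without re-executing anything.
print(result["balance"], len(result["accountProof"]))
```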

But the actual logs in Ethereum are not provable in this way. This is why The Graph isn't provable, and there are a lot of consequences of that. Because we use IPLD, we can create those hash links. Where the link was missing in the original Ethereum protocol, we can augment the protocol and generate a proof from the Ethereum data plus our additional links, which is relatively easy. It's not some weird, crazy, different format; it's a format very similar to the existing Ethereum formats, and it proves that this log actually came from this block.
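
Conceptually, the augmented proof is just a chain of hash links from a trusted block header down to the object you care about: a verifier hashes each object and checks that it matches the link held by its parent. A toy sketch of that walk (the structure and names here are illustrative, not Laconic's actual format):

```python
# Illustrative only: verify a path of hash-linked objects, ordered from the
# block header down toward the receipt that contains the log.
from typing import Callable, List
from eth_utils import keccak

def verify_log_path(objects: List[bytes],
                    links: List[Callable[[bytes], bytes]],
                    trusted_header_hash: bytes) -> bool:
    """
    objects: encoded objects, header -> receipts-trie node(s) -> receipt.
    links:   links[i](objects[i]) extracts the 32-byte hash objects[i] claims
             for objects[i + 1] (e.g. the header's receiptsRoot, or a branch
             node's child pointer).
    """
    if keccak(objects[0]) != trusted_header_hash:
        return False
    for i, extract_link in enumerate(links):
        if extract_link(objects[i]) != keccak(objects[i + 1]):
            return False
    return True
```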

Laconic Member Validators

Laconic L2 has seven Founding Members right now. These seven Members validate, ingest the blocks, and make commitments to the state of those blocks. They then share that information with a paying customer. 

There are plans to increase the validator set. From a customer perspective, our customer might be a Dapp developer saying, "Right now I have to use Infura, Alchemy, and Blocknative to assert that my data is correct, because one of them could go down for whatever reason. That's three providers right there."

That sounds like a pain in the ass. With Laconic, you integrate one protocol and you get seven Member Validators instead of three, and you get an assertion from us, which you can verify yourself, that we're actually physically located in different places. Alchemy and Infura both run in AWS, I presume. If AWS goes down, you just lost two out of your three, if not all three. Seven is a low number, but it's incredibly high compared to what people have right now: they think they have four and they really have one, whereas we're positively asserting that you have seven.

RPC Services and Laconic

On the path to building Watchers, we realized we had to build extremely performant RPC endpoints and build out a deployment system, and we realized that that was actually what people wanted to buy from us. Most Dapps don't want to bother with Watchers right now. What they want to see is an immediate savings on the RPC endpoint side. From there, once our foot is in the door, we can say, "Well, we can give you even more savings. You are using that RPC endpoint to build your own indexer. We have a whole library of tools that will auto-generate indexers for you. And we have a marketplace where you can get other people to run that indexer for you when you don't want to scale it."

Currently, RPC endpoints are subsidized by VCs. Dapp developers never experience the true cost of running an indexing service or an RPC endpoint; they're not exposed to that in a free market way. There's this actor, the venture capitalist, who is giving away free samples at massive scale. The challenge for us is how we compete with that. There's also a challenge in that our customers are depending on this centralized service and don't realize it.
