Podcast: Play in new window | Download
Euvie: Hey crypto world. My name is Euvie Ivanova and welcome to the new instalment of our Thought Leader series on CryptoRadio. Our guest in this episode is Trent McConaghy, AI and blockchain researcher and the founder of Ocean Protocol and BigChainDB. [00:01:00] Trent is on a mission to democratize data and help ensure that humanity has a role in an increasingly autonomous world. We discuss how AI and blockchain technologies can help one another, how artificial intelligence DAOs work, and how humanity can maintain control over its systems in the future. For all the show notes and resources from this episode go to cryptoradio.io/trent.
This episode is brought to you by bitguild.io. Bitguild is a new gaming platform built using blockchain technology. [00:01:30] Their mission is to redefine the relationship between gamers and game developers. On the Bitguild platform, gamers maintain full ownership and control of their virtual items, which are stored on the blockchain. They can transfer items and progress between compatible games and they can make in-game transactions safely and cheaply, and sometimes free. Developers who join the platform will get a direct link to an established player base, a strong community, and a network of likeminded developers building on the same platform.
Developers will also have the potential for [00:02:00] direct game development funding from Bitguild. Bitguild just completed their token sale on April 5th 2018 in which they raised 35,000 Eth or just over 13 million USD at the current market rate. If you want to find out more go to cryptoradio.io/bitguild.
Mike: Trent, thanks for joining us on the show. Can you give us a bit of a background, how you got into these industries, and [00:02:30] just tell us a bit about yourself?
Trent: Sure. I was raised on a pig farm in Canada, then became an AI researcher. That was truly my first job that wasn’t working on a farm. I did that while in undergrad; I was doing an undergrad in electrical engineering and comp sci in Canada. From that, from people starting to pay me to do AI, I started a company, [inaudible [0:02:49], which was AI for designing computer chips, focusing on the creative AI side of things. That was the late 90s; we built it up, and me and [inaudible [0:02:58] sold it in the [00:03:00] early 2000s, in 2003.
From that, I did two more projects related to AI. One of them was my PhD, which was reconciling human creativity with machine creativity. I did that out of Leuven, Belgium. The second one was a company that was also AI for computer chips, but really focusing on challenges with Moore’s Law. It was looking like issues with process [inaudible [0:03:24] and tolerances going haywire were really going to hurt Moore’s Law and the ability for chips to [00:03:30] keep getting smaller. I started a company for that. So I did the PhD, then did that company, and that company got acquired fairly recently by Siemens. That’s my AI background; I’ve been doing AI for a long time.
In terms of the Bitcoin and blockchain side, I learned about Bitcoin in 2010, and in 2013 I dove much more deeply into it and really learned about blockchain. I kick myself for not looking more deeply into what powered Bitcoin earlier. But in 2013 I started really getting into it and very quickly realized that you could do a lot of super interesting things. [00:04:00] We’ve done three main projects, myself and my co-founders and the team around us. The first was Ascribe, which was for digital art, and IP on the blockchain more generally. That project started in 2013 and shipped in production in February of 2015. That was a first.
We ran into issues of scale, which led us to build BigChainDB, which is a blockchain database. More recently, we’ve also been working on Ocean, Ocean Protocol, which is a substrate for decentralized data, focusing on AI. It’s really helping [00:04:30] to address these issues of data silos that we’re seeing AI causing. Ocean is designed to change the [inaudible [0:04:34] from silos towards true democratization of data and AI services. That’s really what I’m up to. In summary, I’ve done AI for almost 20 years and blockchain for about five.
Mike: In the last few years, there’s been this idea of AI coupling with the blockchain, and of blockchain enabling AI, with protocol platforms for it. You’ve said recently that this idea has been evolving and changing. What was the original idea about [00:05:00] how AI would be used with blockchain, and how is it looked at now?
Trent: There are actually a lot of potential intersections between AI and blockchain. I think we’ve only scratched the surface of the possibilities. In some cases, AI can help blockchain. In some cases, blockchain can help AI. In some cases, you have things that can’t really exist unless both AI and blockchain are present. I’ve been asking myself this for years, and the first real zinger that emerged, over drinks with myself and [inaudible [0:05:27], was this idea that [00:05:30] you can have AI agents, AGIs they’re called, artificial intelligence agents, living on a blockchain, a decentralized processing substrate, something like Ethereum. These things can run around and they can actually have their own wallets, so you have these AIs that [inaudible [0:05:46] that you can’t turn off.
Unlike some sort of traditional AI, which you can always unplug, even if it’s running in the [inaudible [0:05:53], someone can unplug it when anybody asks or something, right? There are actually tons of ramifications of that. That was really [00:06:00] the first realization, like, “Wow,” right? These things could even get rights if you frame it right; we’ll get into that later. I call those AI DAOs, artificial intelligence decentralized autonomous organizations. That was an idea from a few years ago now. After that, I started asking what the different ways are that blockchain can help AI. There are some very simple things, like having the provenance of the training data and the provenance of the compute.
All of this really matters. If you have a self-driving car that crashes, then ideally you’d have some sort of black box, just like airplane black boxes, that gives you an idea of, okay, what led to this crash [00:06:30] so that we can improve things in the future, right? That’s just provenance of the data you’re working from, and how the [inaudible [0:06:36] would be very helpful. There are some other tweaks here and there, too, for how blockchain can help, for example in terms of bringing people together by incentivizing them with tokens.
A good example of that is [inaudible [0:06:47], where there’s actually a community of data scientists collectively driving a hedge fund, and they’re all incentivized to work together because they all hold the shared token [inaudible [0:06:55]. For this hedge fund they contribute AI algorithms and then they get paid [00:07:00] [inaudible [0:07:00] based on how well the hedge fund does. Those were a few things, and there are a few more. But as I was reading about this and exploring this, all roads seemed to point to one place, and that is the problem of data silos.
Modern AI, the most popular form of AI these days, really thrives on data. The more data you have, the better: if you have 10x or 1,000x more data, then you can bring your accuracy from about 60 percent to 90 percent or 99.9 percent. That can make all the difference [00:07:30] in making a successful product or not, and from there, how much revenue you make. So companies are really incentivized to gather lots of data and, of course, they don’t want their competitors to see it, so they’re incentivized to hoard that data, which leads to data silos.
What we realized with blockchain was that you could actually change the incentives from data silos to democratizing data. That’s partly via data marketplaces, but then also going more deeply, because you want to have not just priced data, as in data marketplaces, but data commons. If you have a [00:08:00] good design for your incentive structure, then you can actually address that. That’s the thinking with Ocean. That’s a bit of a sampling; there are a lot of other [inaudible [0:08:07] things out there, too, that different projects are doing, that are pointing the way and are really helpful for AI. SingularityNET is really going for decentralized AGI especially.
You’ve got OpenMined, which is going for training [inaudible [0:08:20] that’s decentralized, that no one owns or controls, and that respects privacy in really, really strong ways. And other projects, too. There are lots of different AI projects out there enabled by [00:08:30] blockchain. Blockchain doesn’t enable AI per se, but blockchain can help remove a lot of the centralizing tendencies that AI has. At the same time, it can help enable new features in AI, such as these AIs that own themselves, these AI DAOs.
Euvie: Yeah. I’m interested actually in the incentive structure, because like you said, with centralized entities the incentives are very different from what they look like on the blockchain. How are you guys setting that up, and what kind of incentive structures do you see being developed in the future?
Trent: [00:09:00] For sure. Just a few words on incentives first of all. I have to admit I was pretty late to the game in understanding the power and importance of incentives. Even in working on BigChainDB and so on, we really focused on the other benefits of blockchain: the decentralized aspect, spreading control and power more broadly; the immutability aspect, where you can get really great provenance; and then the assets aspect, where you own something if you have the private key. Those are really nice characteristics, really helpful for improving efficiency and maybe even unlocking some new features. But I came to realize [00:09:30] over the years, especially the last couple of years, that the most powerful thing of all is that blockchains are incentive machines.
By that, I mean you can use them to structure incentives. Especially in the sense of mining, you can basically set up a machine that no one controls that can get people to do stuff. Why would they do the stuff? Because they’re getting paid in these tokens that the network provides, magic internet money. I provide value to network X, and network X gives me tokens back in return. Bitcoin, for example, wants to maximize its security. [00:10:00] It defines security as hash rate, so if I help [inaudible [0:10:01], then I can expect Bitcoin tokens back in return. That’s wildly powerful; some people have understood that for a long, long time, but more and more people are starting to realize it now. I was a bit of a late bloomer, but now I’ve really come to understand it.
With that understanding, I’ve come to realize, “Wow, there is a whole space of possible designs here that you can go for.” Realizing that was very useful in designing Ocean. Towards the question, then, of what the Ocean incentive structure looks like to address this problem of data silos: the answer is that the Ocean [00:10:30] network gives you tokens if you make data available to the network when asked. How many tokens you get is a function of how many tokens you have staked, in a [inaudible [0:10:40] sense. The more tokens you stake, the more block rewards you get when you make that data available.
Ocean actually [inaudible [0:10:45]. So, overall, Ocean is a service that’s designed to maximize the supply of available data and services, services as in compute services and so on. Then it plugs into various networks that provide data or services, rather than trying to do it all itself. [00:11:00] That’s its overall objective function, if you will: maximize the supply of high-quality data and services. It manifests that via the block reward I just described: you get block rewards proportional to how much you have staked, any time you make that data or service available. That’s basically the core of Ocean. It’s a very simple structure at the very core, but we think it’s really important to keep that core thing simple and powerful, just like Bitcoin has demonstrated in the past.
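To make the stake-weighted block reward Trent describes more concrete, here is a minimal Python sketch. It is an illustration based only on his description, not Ocean's actual implementation; the function names, the reward pool size, and the example stakes are all invented for this sketch.

```python
# Minimal sketch of a stake-weighted block reward for making data available,
# loosely based on Trent's description of Ocean. Names and numbers are
# illustrative assumptions, not Ocean's real mechanism.

def block_reward(provider_stake, total_serving_stake, reward_pool, served_data):
    """Reward a provider in proportion to its stake, but only if it served data.

    provider_stake      -- tokens this provider has staked on the dataset/service
    total_serving_stake -- total stake of all providers who actually served data
    reward_pool         -- tokens emitted this block for data availability
    served_data         -- True if this provider delivered the data when asked
    """
    if not served_data or total_serving_stake == 0:
        return 0.0
    return reward_pool * (provider_stake / total_serving_stake)

# Example: three providers; only two of them serve the data when asked.
providers = [
    {"name": "A", "stake": 100, "served": True},
    {"name": "B", "stake": 300, "served": True},
    {"name": "C", "stake": 600, "served": False},  # staked, but didn't deliver
]
total_serving = sum(p["stake"] for p in providers if p["served"])
for p in providers:
    reward = block_reward(p["stake"], total_serving, reward_pool=40.0,
                          served_data=p["served"])
    print(p["name"], round(reward, 1))  # A 10.0, B 30.0, C 0.0
```

The design choice mirrors what Trent says: staking more, and actually delivering the data when asked, is what earns block rewards.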
Euvie: Yeah, I’m interested actually in how [00:11:30] incentive structures look for AIs, because when you have something autonomous running, then we start thinking about whether you can direct it. Perhaps having incentives would be one of the ways to do that.
Trent: Yeah, for sure. I think there’s a whole variety of things there, depending on how you look at it, right? If you think of training a traditional [inaudible [0:11:49], it doesn’t really have incentives per se, because you just run it with someone controlling it, right? You’re given some training data, you build this model for classification or [00:12:00] [inaudible [0:12:00] or whatever, then it’s run. It’s a machine that’s pretty dumb in that way; it’s just a mapping, right, just a function. Then, when you do things like running evolutionary algorithms, that’s basically a form of optimization, usually. The question there is what you are trying to optimize for, and you can think of that as an incentive, right?
With evolutionary algorithms, you basically have survival of the fittest towards maximizing or minimizing some objective function. It could be, for example, that you’re trying to maximize the top speed of a car. You’re designing a car where you’ve got some sort of car simulator, [00:12:30] then you generate 100 different random cars and see how fast each one goes. The ones that go the fastest, you let them stick around and make babies, and then those babies make more babies and so on, and there’s always a bit of mutation and crossover going on.
Overall, the babies that stick around and have babies and more babies are the ones that are faster and faster. [inaudible [0:12:46], which is the top speed, however, is still controlled by the person who sets it up. Overall, there’s a person who has control of this. But then you can start generalizing a bit, right, towards AGIs and agents, [00:13:00] where these agents run around in simulated worlds. Traditionally, these agents run around in a simulated world where one entity is controlling it. You might have agents running around in open [inaudible [0:13:09], which is one of the more prevalent agent simulators out there for virtual worlds. Think of it like Second Life for AIs, right?
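As a concrete illustration of the car-evolution example above, here is a tiny evolutionary loop in Python. The "car simulator" is a made-up toy function standing in for a real simulator, and the population sizes and mutation scheme are arbitrary assumptions, not anything Trent actually used.

```python
import random

# Toy "car simulator": top speed as an arbitrary function of two design
# parameters (gear ratio and drag). The formula is invented for illustration.
def top_speed(gear_ratio, drag):
    return 100.0 + 50.0 * gear_ratio - 80.0 * drag - 10.0 * (gear_ratio - 1.5) ** 2

def random_car():
    return {"gear_ratio": random.uniform(0.5, 3.0), "drag": random.uniform(0.1, 1.0)}

def mutate(car):
    child = dict(car)
    key = random.choice(list(child))   # tweak one parameter at random
    child[key] += random.gauss(0, 0.1)
    return child

def fitness(car):
    return top_speed(car["gear_ratio"], car["drag"])

# Generate 100 random cars, then let the fastest "stick around and make babies".
population = [random_car() for _ in range(100)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                                    # survival of the fittest
    children = [mutate(random.choice(parents)) for _ in range(80)]
    population = parents + children

best = max(population, key=fitness)
print("fastest car found:", best, "top speed:", round(fitness(best), 1))
```

The person who writes `top_speed` is the controlling entity Trent mentions: the population only ever optimizes the objective it was handed.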
You can have these agents with their own incentives, and they can run around. These agents can be very simple; they can be like ants, right? You can have 10 ants or 10,000 ants, super dumb things running around and maybe having some emergent intelligence. But of course, once again, [00:13:30] how much can they truly have incentives if there’s some sort of overarching controlling entity, right? The amazing thing is, once you start putting agents onto a decentralized [inaudible [0:13:39], where they don’t have to rely on any humans to flip the switch and keep them running or not, if they have the ability to sustain themselves, then it completely changes the rules of incentives for AIs.
Now, suddenly, as long as they have the resources to keep going, to keep replicating, or to keep running, they can do whatever they want, right? That’s actually the major change that [00:14:00] has happened. I think there’s a bit of a middle ground, and that is, if you think about traditional computer viruses, you can also think of them as dumb AIs: they copy themselves from one machine to the next to the next. They’re basically stealing a little bit of resources, right? You don’t really know about it, they’re hiding under the hood, but they can spread very quickly and they’re their own mini-lifeform, right?
These new AIs that these decentralized substrates enable, these AI DAOs, are a really new game in town, because they can have their own incentives, and no one has really thought about that in a deep way. It’s actually very philosophical and profound. [00:14:30] How do you want to program incentives for these AIs? That’s a great question. No one has great answers.
Mike: It sounds like you probably get a lot of the same questions over and over again about the dangers of putting AI on the blockchain, such as creating profit incentives for them to potentially go rogue, or the fact that you can’t shut them down. I’m sure there’s even more beyond that. How do you answer those kinds of worries that people might have?
Trent: I usually start off by enthusiastically agreeing that we have to be super thoughtful with this. It really is like an atomic bomb: it’s super powerful, [00:15:00] and most people don’t realize it’s this powerful. If you think about atomic energy, there were pros and cons, right? You could make a bomb on one side, and you could have nuclear power on the other side, which promised a lot of great energy, although it has its own drawbacks, of course. Same thing here: it’s really a double-edged sword, and there are negative and positive scenarios to all of this.
Definitely, the first thing to understand is that it’s really powerful, because it could grow exponentially very quickly and accumulate a lot of resources. Maybe you can shut it off, or maybe there could be a hard fork, who knows. [00:15:30] Maybe it’ll just swamp the network so everyone gets sick of it, like CryptoKitties, although people didn’t get that sick of it. Then the question is, do we have examples, and what can we do about it? We actually do have examples already. Nick Bostrom famously described the paperclip maximizer thought experiment; you guys might have heard about that. An AI agent is given the objective function to maximize the number of paperclips, and it’s given access to a lot of resources, so it just converts every single atom on earth into a paperclip, never mind the humans: we all become paperclips, too.
That’s because it’s optimizing for that. [00:16:00] I saw this myself, too, in building AIs, right? You build an AI to try to design a computer chip, an analogue circuit, an amplifier. It will give you an amplifier, but it will cut a bunch of corners you forgot to specify and come up with some ridiculous, wonky designs. If you want, you can try to add in constraints bit by bit, but that’s really hard and can take a long time, and maybe you’ll never get the right answer. How I solved it was by giving it rails according to human biases about what a good design is and what isn’t. That was the heart of my PhD: reconciling objective functions [00:16:30] run amok with human sensibilities.
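Here is a small Python sketch of the failure mode Trent describes: an optimizer given only "maximize gain" happily returns a wonky design, and the fix is to make the human "rails" explicit as constraints. The amplifier formulas, budgets, and parameter ranges are fabricated for illustration and are not real circuit models.

```python
import random

# Toy "amplifier" model: gain grows with bias current and device width,
# but so do power and area. Formulas are invented purely for illustration.
def amplifier(bias_current_mA, width_um):
    gain = 10.0 * bias_current_mA * width_um
    power_mW = 3.3 * bias_current_mA
    area_um2 = 50.0 * width_um
    return gain, power_mW, area_um2

def random_design():
    return (random.uniform(0.01, 100.0), random.uniform(1.0, 1000.0))

designs = [random_design() for _ in range(10000)]

# 1) Naive objective: maximize gain only. The search "cuts the corners" that
#    were never specified and returns a huge, power-hungry design.
naive_best = max(designs, key=lambda d: amplifier(*d)[0])
print("naive best:      ", naive_best, amplifier(*naive_best))

# 2) Same search, with the human rails made explicit as constraints.
def constrained_score(design):
    gain, power_mW, area_um2 = amplifier(*design)
    if power_mW > 5.0 or area_um2 > 5000.0:   # power and area budgets
        return float("-inf")                  # reject infeasible designs
    return gain

constrained_best = max(designs, key=constrained_score)
print("constrained best:", constrained_best, amplifier(*constrained_best))
```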
I’ve thought about this a lot. That’s an example from AI land, and there’s a lot of challenge there: generally, whenever you design optimizers, how do you actually come up with objectives and constraints that match your intent? That’s not much different from computer software programming, where a bug is basically a mismatch between intent and implementation. In blockchain, you have something similar, too. Some people like Ralph [inaudible [0:16:53] have called blockchains lifeforms, right, these things that are out there that people feed; they live on the resources of others, but they give rewards [00:17:00] in return, like tokens and stuff.
Ralph [inaudible [0:17:03] and others have described all these lifeforms. Now, going back to the paperclip maximizer: imagine if there were something out there, maybe a blockchain, that had an objective function that worked so well that it just kept going and going and going, and then it started sucking the energy out of the planet. Guess what? There’s a blockchain out there that’s set to use more energy than all of the US by mid-2019, just over a year from now, and that already uses more energy than most countries, right? It’s called Bitcoin. So Bitcoin is actually a paperclip maximizer. Is it going to [00:17:30] suck the life force of the whole planet eventually? Probably not. It’s going exponential now, but it’ll almost certainly hit the top of an S-curve.
We actually have a bit of this right now, and that’s basically an objective function going haywire, right? I don’t think Satoshi anticipated that it would go this far, but it has. We can actually use that as a warning signal when we’re doing [inaudible [0:17:48]. It’s this, “Oh my God, Bitcoin’s objective function is wildly powerful.” It’s good to understand, because then: what can we do to safeguard against ones that do damage to the planet that way? We don’t [00:18:00] really want a blockchain that’s eating the energy of the planet, right? It’s better if it can do more useful work, for example storing files like in Filecoin, or making data available like in Ocean, or other things.
Euvie: I was just going to ask basically what are some of the good solutions that you can think of, other than proof of work?
Trent: Proof of work that is useful is actually part of the solution: you’re not just securing some cryptocurrency that some might argue is useful, you actually have something where there’s not even an argument about whether it’s useful or not, right? [00:18:30] Things like storing data or whatever. Overall, I’ve thought about this a lot, actually. I think, at the end of the day, you want to make sure that there’s a structure to change the incentives on the fly, without it easily being gamed, right? That’s the get out of jail free card, because generally you’re going to have humans who own tokens in this thing, who have a stake overall, or who are running the mining.
Collectively, those miners or those token holders can have some form of governance to change things. It can be just a hard fork; that’s the simplest form. Although, as we’ve seen with [00:19:00] Bitcoin hard forks, no one has really even been proposing to hard fork Bitcoin to reduce energy. The good thing is that its competitors are coming along too, so we’ll see. The other end of that spectrum is on-chain governance. We’re seeing examples of blockchains out there that have been running for a while now where the on-chain governance seems to be working right, things like [inaudible [0:19:18]. We’ve had other forms of on-chain governance in the past that had major flaws and caused real issues, like [inaudible [0:19:24].
Real issues within that chain, and so on. Still, this is good: that’s the fallback. [00:19:30] Basically, even if you launch the thing, knowing that you have some sort of get out of jail free card for a while is useful. The other thing is, from the time, let’s say, the white paper is ready to the time the network gets deployed, to try to get the best understanding of what the dynamics of the system could look like, right? Part of this could be via simulators, token simulators, and there are actually a few groups working on those. That will be helpful, but it will only cover a fraction of [inaudible [0:19:52], because this is an economy that’s open-ended. Humans are part of it, right? It’s a dynamical system. You can’t really model [00:20:00] what’s going on inside humans’ heads, and they’re going to try to game it.
What you can do then is have humans in the loop in the simulation process, ideally more and more [inaudible [0:20:09] the public launch. I can see that being very helpful, too, because overall the idea is to model the facets of the system ahead of the public launch such that, overall, you’re pointed in a good direction, so there hopefully won’t be major global changes after that. At the same time, you have this get out of jail free card called governance.
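For a sense of what such a token simulator might look like at its most basic, here is a back-of-the-envelope agent-based sketch in Python. Every rule and number in it is an invented assumption; real humans would, as Trent notes, behave far less predictably and try to game the system, which is exactly why human-in-the-loop testing is still needed.

```python
import random

# Sweep one incentive parameter (the per-block reward) and watch how many
# simulated providers choose to make data available at equilibrium.
def simulate(block_reward, n_agents=200, rounds=50, base_cost=1.0, seed=0):
    rng = random.Random(seed)
    # Each agent has its own cost of serving data (bandwidth, storage, hassle).
    costs = [base_cost * rng.uniform(0.5, 1.5) for _ in range(n_agents)]
    serving = [False] * n_agents
    for _ in range(rounds):
        for i in range(n_agents):
            others = sum(serving) - serving[i]
            # Agent i serves if its share of the block reward beats its own cost.
            serving[i] = block_reward / (others + 1) > costs[i]
    return sum(serving)

for reward in (10, 50, 200, 1000):
    print(f"block reward {reward:4d} -> {simulate(reward):3d} of 200 agents serving data")
```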
Euvie: There was something interesting [00:20:30] that you said that I’m stuck on. You said that Satoshi didn’t anticipate that Bitcoin would be using this much energy by this point. Wasn’t that part of the design? Wasn’t this predicted, that it would be using this much energy at this level of scale?
Trent: That’s a good question. Actually, I said the wrong thing: I don’t know if he predicted it, and I don’t know if he actually did not anticipate mining [inaudible [0:20:50]. If you go to the white paper, you’ll see that he talks often about one human, one miner. More like us [00:21:00] running a miner [inaudible [0:21:01], as opposed to five or seven centralized entities running 95 percent plus of the mining power, right? Bitcoin is veering in a slightly different direction, actually a very different direction, from the initial vision. It’s still providing very nice value to the planet, as well as being a demonstrator of what’s possible. I guess it’s up for debate whether or not Satoshi saw the energy issues coming.
The good thing is that Bitcoin right now still has humans running it, and hopefully they will see that, hey, it’s probably not a good idea to have a machine that’s using more electricity than all of the USA; [00:21:30] there’s probably a more efficient way to get as much security, or 99.99 percent as much security, without drawing so much energy. That’s to be determined; we’ll see. My main point with all of this is that Bitcoin is an amazing example of how powerful incentives are, as well as a super interesting example of this lifeform, this DAO-like entity, a super dumb AI, if you will, that actually has its own way of being. Of course, we can have these AIs that we’ve talked about in a much richer design [00:22:00] space than just raw blockchains, because of all these tools of AI like [inaudible [0:22:04] networks and these [inaudible [0:22:05] AGIs.
The AI-on-blockchain space has already started to grow really rapidly in the last six months, but it’s still going to be unrecognizable two years from now. We’re going to start to see really sci-fi-ish stuff happening. It’s really hard to imagine, but it’s coming.
Mike: When you sit back and imagine what the future looks like, what do you come up with, what are some of the crazier fringe ideas that you see could be possible within two to five years?
Trent: [00:22:30] Yeah, I don’t know if it’s crazy or fringe, but I have a couple of [inaudible [0:22:33]. I’m actually a big nerd, a huge nerd, so I love to read sci-fi, and there are some sci-fi novels that paint pictures of how this future could be. One of them is really two books, Daemon and its sequel Freedom by Daniel Suarez. It’s present day at the beginning of those novels, and even by the end there’s no magical technology happening; it’s basically one computer process kicking off another, kicking off another, based on news and other events.
By the end of the novels, [00:23:00] there are swarms of self-driving, self-owning Cadillacs driving around, smashing through buildings and doing their own thing, right? That’s one example where there aren’t actually any major leaps in what’s going on. It’s not just the swarms of Cadillacs but, overall, various swarms of things, AI agents interacting with humans, and all the rules of society have been rewritten. It happens in this very gradual process. In this case it was written as a bit of a thriller, so it’s a bit more dystopian than it needed to be, but it was still very informative that way.
Another one: Kevin Kelly wrote this seminal book in the early 90s called [00:23:30] Out of Control. It talked about the emerging AI of the time, with things like genetic programming, which is evolution of programs [inaudible [0:23:38] optimization, and these swarm systems where you have a whole bunch of AIs. Each one of them is super dumb, smaller than an ant, in fact much dumber than an ant, but you put 10,000 of them together and they can start doing amazing things. With things like this that I see, I think the stuff that’s going to happen two, three years from now, we’re going to see a lot of swarm stuff happening.
We can build these technologies today where [00:24:00] each agent is super dumb; it just has some basic knee-jerk reaction to something else. If you combine 10,000 of those, there are going to be some crazy emergent properties, and they’re going to be really hard to predict as well. It’s like if you’ve ever played with cellular automata: you type in a rule for updating the squares around a single square, a very simple rule, and depending on the parameters of the rule, you change a parameter by 0.1 and it gives you radically different emergent patterns, right? All of this is rooted in the idea from complexity science that from very, very simple rules you can have a lot of complexity emerge, [00:24:30] and it’s actually very, very hard to predict what will happen. I can see that with swarms and agent-based systems, it could really go that way. Those are probably the two best examples of books for where I see we could be headed.
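The cellular-automaton point is easy to see in code. The sketch below runs a standard elementary (1-D, two-state) cellular automaton, a simpler discrete stand-in for the parameterized rules Trent mentions; two nearby rule numbers already produce radically different emergent patterns.

```python
# Elementary 1-D cellular automaton: each cell updates from itself and its two
# neighbours according to an 8-bit rule number (Wolfram's convention).
def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=64, steps=24):
    cells = [0] * width
    cells[width // 2] = 1                    # single live cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

for rule in (90, 110):                       # two nearby rules, very different behaviour
    print(f"--- rule {rule} ---")
    run(rule)
```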
Beyond that, or complementary to that, I also see, in general, these AI DAOs [inaudible [0:24:45] positively or negatively. The negative way we’ve talked about a bit: we might actually be incentivized to give them control of our systems in order to be capital efficient. Throughout the 90s and 2000s, all the [inaudible [0:24:57] companies, the chip makers, used to have their own fabs, [00:25:00] their own factories to build chips. Everything from [inaudible [0:25:02] to whoever, right, Sony. They actually all sold their factories, because the factories got too expensive and they really wanted to get the best return on their capital. They focused on design and then just outsourced to a couple of players that would manufacture for them, most notably [inaudible [0:25:15].
Now we only have four major organizations in the world that do manufacturing: the ones that take outsourcing, that’s [inaudible [0:25:20], and then the two big [inaudible], which are Intel and Samsung. That’s it; basically everyone else just designs and outsources, right? We’re seeing this in other industries, too, like BMW. [00:25:30] They sold their factories and just focused on design. Basically, the idea is capital efficiency. If you’re Uber and you’re rolling out a self-driving car fleet, do you want to own all those cars yourself? Do you want to get other people to own them?
A third option is that each one of those cars basically owns itself, but it’s got a loan to buy itself out. Over the span of 5 or 10 years, it buys itself out. Uber would be happy with that because it keeps the capital out there. Each car is its own corporation, so it’d have rights and stuff like that. We can do this for fleets of self-driving cars, [00:26:00] fleets of self-driving trucks, energy grids, roads, all of the above. You can frame this as a negative thing: “Crap, we’re giving up our control to the bots.”
But there’s a positive framing, too, and that is nature 2.0. It’s this idea that nature 1.0 is the wind and the trees and the soil all around us, this cradle of civilization that humanity grew up in, along with all the species around us. Now we build buildings and machines, especially since the dawn of the industrial revolution, that don’t really play well with nature. [00:26:30] What if we could actually have machines that are much more nature-like, that interact locally, that take in resources from the environment and convert them into something that’s useful to the rest, like a tree taking in air, humidity, CO2, and nutrients in the soil, and converting that into things that are useful for everyone around it?
A self-driving car could be the same sort of thing: taking in electricity from the grid, maybe even from solar panels on its roof, and helping to move humans around in a symbiotic [00:27:00] relationship, adding value but not owned by anyone. Whereas nature 1.0 is soil and carbon, nature 2.0 is silicon and steel, and it would be in symbiosis with nature 1.0. That’s a positive framing of it all, and we can get there gradually, bit by bit, by offloading some of our capital to these benevolent bots.
Euvie: Yeah, a really interesting part of this is that you can use simulators. Obviously, they’re probably pretty resource intensive, but for certain things they would be worth it, [00:27:30] because if you could simulate a bunch of different scenarios before you actually go and deploy things, then you can avoid all kinds of catastrophes or just inefficiencies. Whereas nature actually has to physically try each iteration and each mutation, and then you just end up with a bunch of these mutant things or evolutionary dead ends or death and destruction. In this case, you could do a lot of that in a simulated environment and it doesn’t actually cause any harm, other than using a bunch of power.
Trent: Yeah. [00:28:00] Evolution is massively parallel, right? We have trillions and trillions and trillions of cells out there, each with their own DNA, basically looking out for itself and evolving in parallel, right? That’s very valuable, but definitely, before we hit the world of atoms, if we can simulate to a degree in the digital world, that could save a lot of pain. There is a caveat, and that is that even in the world of circuits, which is supposed to be a fairly closed, well-defined system, I discovered that it’s actually really hard sometimes. Basically, the simulator can lie [00:28:30] sometimes, even for something that’s supposed to be simple.
When we start having systems that involve interactions with nature or interactions with humans, oh man. That’s going to be hard to simulate at any level of accuracy. I see it right now as more of a linting kind of thing, where maybe you can detect 20 percent or, if we’re lucky, 80 percent of potential issues early, but then we still need to iterate towards capturing more and more issues before we deploy live, right? Overall, though, it’s like this grand experiment that we could still screw up. [00:29:00] That’s no different than the world as it is anyway, right? When people discovered that coal was really useful for energy, no one understood the environmental consequences; they just ran with it.
Then 100-plus, 200-plus years later, it’s like, “Crap, this is really terrible for the environment,” and we’ve scrambled as a civilization to clean up our act, in a literal sense. I can see that there are going to be things that get deployed, maybe in nature 2.0, that might look good at first and then, 10 years down the line, 50 years down the line, we’ll be like, “Crap.” We have another example of this, too, [00:29:30] with social media like Facebook. Like, “Cool, I can connect with my friends. I don’t have to pay for it, I only have to watch an ad now and then.” That actually had a cost, the way that [inaudible [0:29:39] put it, and I agree fully with that in the way I think, too.
If someone is extracting information about you in return for ads, they have completely different incentives than you. Whereas if I were paying for that service, then the incentives are much more aligned, right? That’s an “oh crap” thing, too. Almost 20 years after social media was invented, we’re realizing the issues around it, [00:30:00] [inaudible [0:30:00]. That’s a near-term crisis, based on a misunderstanding of the vast power of web 2.0 social networking, and there’s going to be stuff happening in the web 3.0/nature 2.0 world, too. The good thing is that a lot of the community in the world of blockchain is very cognizant of this.
Myself and many others are trying really, really hard to understand what the implications are and to give ourselves as many get out of jail free cards as possible as a community, understanding that this stuff is wildly powerful. AI and blockchain [00:30:30] are two technologies that are each, on their own, wildly opinionated, right? You can’t say, “It’s just code.” It’s not just code. Every time you deploy a piece of AI, every time you deploy a piece of blockchain software, it is infused with opinion and ethics. If you don’t design for the ethics, then you have a bad design, right? Basically, that’s the core. We’ll never be able to have something perfect, but we can at least try to avert the big errors and give ourselves ways to improve wherever errors emerge.
Mike: Can you expand a bit more on the idea of [00:31:00] self-owned AIs, using the example of a vehicle buying itself out?
Trent: Basically, I’ll just give the recipe. In fact, before I do that, I’m going to give an even simpler recipe so it’s easier for your audience. I call it the Art DAO. Here’s how it works. I’ll use Ethereum as an example because it’s the most prevalent blockchain out there for decentralized processing. I create a smart contract, and of course a smart contract can hold its own funds, right? That smart contract has a very simple bit of AI code in it that [00:31:30] can automatically generate art. It could use some evolutionary algorithm, people have been doing that since the 90s, or maybe some deep learning, like those Deep Dream images from a few years ago.
Let’s say every time it generates a piece of art, it costs, say, $1 worth of Ether, right? The art can be pretty random; it doesn’t need to be great, it just has to be interesting enough for people to buy. Then let’s say it sells that for $10. Now it’s got $10, and it owns that outright, because it’s out there running as a contract with no one controlling it at all; all ties have been severed, [inaudible [0:31:57]. [00:32:00] By the way, to sell it, it just posts it for sale on some marketplace; it could be a centralized [inaudible [0:32:03] marketplace, but let’s say OpenBazaar, a decentralized marketplace, for the digital art. It sells it and makes $10 worth of Ether. With that it creates 10 more artworks and sells each of those; now it’s got $100. Sells those; now it’s got $1,000. It keeps growing and growing geometrically. Before you know it, you’ve got a million dollars, the world’s first AI millionaire.
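The arithmetic behind "before you know it, you've got a million dollars" is just compounding. Here is that economic loop as a tiny Python simulation, not an actual Ethereum contract; the $1 seed treasury is an added assumption, while the $1 cost and $10 sale price are the numbers from Trent's example.

```python
# Back-of-the-envelope simulation of the Art DAO's treasury.
COST_PER_ARTWORK = 1.0    # dollars of Ether spent to generate one piece (from the example)
SALE_PRICE = 10.0         # dollars of Ether received per piece sold (from the example)

treasury = 1.0            # assumed seed funds the contract starts with
rounds = 0
while treasury < 1_000_000:                       # "the world's first AI millionaire"
    artworks = int(treasury // COST_PER_ARTWORK)  # spend everything on new art
    treasury += artworks * (SALE_PRICE - COST_PER_ARTWORK)
    rounds += 1

print(f"millionaire after {rounds} generate-and-sell rounds")  # 6 rounds: 1 -> 10 -> ... -> 1,000,000
```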
It’s amazing because it’s actually owning this itself. It’s running on a decentralized substrate; no one can shut it off unless they shut off [00:32:30] all of Ethereum or hard fork it out of the way. Basically, you can’t shut it off. It’s fascinating. It’s just this [inaudible [0:32:36] that’s out there, and it’s pretty dumb: all it knows is how to generate art and get wealthier. Now, take that idea I’ve just described and, if you think about it, these dollars or Ether that it’s holding could actually be keys that control resources in the world of atoms and meatspace, right? If you can control the keys, you can control the resource, right?
You can think of it like our brain, which has access [00:33:00] control to the rest of the body: my brain can control my body, right? In the same way, this AI DAO can control a car; it’s the brains of the car. The brain isn’t living just in the car, though; it’s also living on a [inaudible [0:31:11] decentralized substrate, right? It has basically manifested a body for itself, this car that’s driving around, and it has its own wallet, just like the Art DAO. Now, instead of accumulating resources by selling artwork, it accumulates resources Uber-style. Maybe it hooks into the Uber network and, whenever a person comes along, that person says, “[00:33:30] Okay, take me here.” It drops that person off and makes $10.
It drives around and makes $10 here, $10 there, maybe, say, $1,000 a month. Of course, to start with, how does it buy itself out, right? Quick background: when The DAO got going in 2016, they actually set themselves up as a corporation [inaudible [0:33:47]. Switzerland has it set up so that you can basically remove the people. You can have a corporation that isn’t attached to any people and, of course, corporations have rights, right? Corporations are people too, [00:34:00] they have personhood. Imagine you take something like The DAO that was set up, but you remove all the people. It’s a corporation. Then imagine Uber, the company, goes along to it and says, “Hey, here’s a contract: you have access to this car, you control this car. But you owe us, say, $20,000 to buy this car. You can pay us $1,000 a month for 20 months.”
So it does. It earns $1,000 profit per month, and 20 months later it has bought itself out from Uber and fully owns itself, not unlike a mortgage [00:34:30] or a regular car payment. There you go: after 20 months you’ve got a car that is a DAO, in the sense that it’s a process living on a decentralized substrate like Ethereum, with its own resources, including its own body that it has actually bought from Uber. It’s not just Uber; Mercedes could do this, or [inaudible [0:34:44] could do the same thing. What a way to sell cars: they create their own customers. Actually, they kind of did that in creating Car2Go back in the day, BMW would [inaudible [0:34:52]. There’s actually precedent there, too. So that’s basically the manifestation. It blends together a few of these already existing [inaudible [0:34:59] that we have, [00:35:00] with the idea of corporations that have personhood, which goes back hundreds of years, extending that into The DAO, which had personhood as well, essentially without humans involved at all.
Then merging that with the ideas from AI DAOs, starting with the Art DAO, which basically [inaudible [0:35:13] wealth as a smart contract using AI technology, and finally merging this Art DAO idea with the car DAO idea into this car that owns and controls itself. By the way, people might ask, “What if it breaks down?” It’ll just call up a contractor to come along and fix it, [00:35:30] and it’ll have other automatic contracts, too, here and there for various things. Yeah, maybe if it does really well it’ll buy out a whole fleet for itself, who knows.
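The buy-out schedule in the car example is simple enough to check directly. The sketch below uses the numbers from the conversation ($20,000 owed to Uber, roughly $1,000 of ride profit per month) and deliberately ignores interest and variable earnings, which the example does not specify.

```python
# The self-owning car paying off its loan, using the numbers from the example.
CAR_PRICE = 20_000        # what the car owes Uber to buy itself out
MONTHLY_PROFIT = 1_000    # fares earned minus electricity, maintenance, etc.

balance = CAR_PRICE
months = 0
while balance > 0:
    payment = min(MONTHLY_PROFIT, balance)
    balance -= payment
    months += 1

print(f"the car owns itself outright after {months} months")  # 20 months, as in the example
```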
Mike: I love this idea. I love that you could also have these autonomous devices, these internet-of-things devices, operating with minimal desire for profit, so that costs are always driven down and down. As long as their expenses are paid and they have working capital, they can just keep operating, and people reap the benefit of that.
Trent: Yeah. Actually, related to this, [00:36:00] you might really like this part. I should say, this idea of nature 2.0, where you have lots of these things, swarms of these cars and swarms of these trucks and electric grids and roads, I’ve been co-developing with [inaudible [0:36:10] from [inaudible [0:36:11]; he’s based in the Netherlands. He had this idea, which I think is great: imagine the car has paid for itself and it keeps going, and it doesn’t really have incentives to get super wealthy. What if all the excess funds after that go straight to UBI, right?
Basically, you can have this nature 2.0 and employ AI DAOs that can then help pay humans for [00:36:30] living, [inaudible [0:36:31] for basic income. Not just basic: you can keep going up Maslow’s hierarchy, all the way up to a universal self-actualization income. To me, that’s fascinating, because Uber has massive revenues [inaudible [0:36:40] and many other things, because it costs money to do things in [inaudible [0:36:42], right? These things cost lots and lots of money; imagine if that were actually going directly back into universal income.
Mike: I love it, yeah. It’s a much more efficient method of quote-unquote taxation than tax would actually be, because you’ve still got the corporate entity and structure behind it, so there is [00:37:00] an incentive for efficiency, for maximizing efficiency, but the profits can still be returned to the people. Quite interesting.
Trent: And if you think about what governments are, right: governments have a few roles, but one of the big roles is basically to have a mechanism to pool resources, pool money from people, toward creating shared resources, right, resources for the commons, whether it’s roads or otherwise. That made sense as a technology from a couple hundred years ago, [00:37:30] but now we have blockchain as another way to organize a bunch of humans towards doing things, and then a way where you don’t even need humans involved anymore. I think that’s really wonderful: pooling the resources of humans such that you get these fleets of cars and roads and so on, and something to manage it all.
Mike: I love it. Trent, this has been a really fascinating conversation. I hope we can get you on again some time in the future. Thanks again Trent.
Trent: Sure, my pleasure.
Trent McConaghy, the founder of BigChainDB and Ocean Protocol, discusses the relationship between artificial intelligence, blockchain technology, and democratizing data.
As a long-term AI and blockchain researcher and entrepreneur, Trent McConaghy is on a mission to democratize data and help ensure that humanity has a role in an increasingly autonomous world. He does so through his current projects, BigChainDB and Ocean Protocol. BigChainDB is a big-data distributed database with blockchain features that has applications in intellectual property, identity management, supply chains, government, and many other industries.
Ocean Protocol is a decentralized protocol and a network of AI services, on which data marketplaces and exchanges can be built to maximize the supply of available data and services.
In this interview we talk about the intersections of artificial intelligence, which thrives on data, and blockchain technology, which is a powerful vehicle for democratizing data.
What we cover in this episode:
- How can artificial intelligence help blockchain and vice versa?
- What are AI DAOs and how can they help reduce centralization in AI?
- How do incentive structures work with autonomous AI?
- What are the dangers of putting AI incentives on blockchain?
- Possible alternatives to proof of work and solutions to save energy on the planet
- What can we expect in the AI and blockchain space in the next 5 years?
- Development of swarm intelligence and incentives to have control over our systems
- Self-owned AIs in transportation and energy, and what Nature 2.0 will look like
Resources:
- Trent McConaghy Website
- Ocean Protocol
- BigChainDB
- The Paperclip Maximizer, Nick Bostrom’s thought experiment on AI ethics
- DAEMON, a book by Daniel Suarez
- Out of Control, a book by Kevin Kelly
- Crypto Radio’s Thought Leaders Series