The central dogma of AI traces back to the field's origins: it was built on the motivation of providing a statistical advantage at war. Even today, we benchmark AI systems against humans, juxtaposing the machine against ourselves. The problem is that if those machines set their own goals and are authoritarian by nature, this clashes with our values and with democracy. It also poses an existential risk to mankind. Today's guest, George Zarkadakis, is challenging this central dogma of AI by trying to find ways to embed AI systems in a social contract so that they are not the "other" but our partners. George is a science communicator, an artificial intelligence engineer, a futurist, and a digital innovation professional. He holds a Ph.D. in artificial intelligence in medicine from City, University of London and is the author of two books on AI. In this episode, George shares his perspective on the intersection of AI and society and how AI can be used for the advancement of humanity. Tuning in, you'll hear an introduction to the challenges AI poses to society and the need for regulation. Discover how we can create governance around artificial intelligence that benefits everybody in society, how the politics governing AI play out differently in the Chinese political system than in the West, and the importance of dialogue, inclusion, and having the right policies in place. To hear expert advice and opportunities for legal professionals in the area of AI, tune in today!
Key Points From This Episode:
An introduction to George Zarkadakis and his impressive career.
How George has seen AI grow and infiltrate society in the time frame of his career.
The two main schools of thought around AI.
How the second school of thought advanced through solving the problems of pattern recognition.
An interesting use case of the second school of thought in medicine.
Why AI is sometimes referred to as general-purpose technology and how it is being used to fuel the fourth industrial revolution.
George’s perspective on the intersection of AI and society and how it can be used for the advancement of humanity.
The biggest problem George has with AI: the central dogma of AI and how we juxtapose the machine against ourselves.
The first ramification of this problem: if those machines set their own goals and are authoritarian by nature, this clashes with our values and democracy.
The second ramification of this problem: existential risk.
Why George is trying to find ways to embed AI systems in a social contract so that they are not the "other" but our partners.
The potential benefit of AI to society, as opposed to the existential risk it poses.
The enormous opportunity to use these new technologies to create abundance.
Thoughts on how we can create governance around artificial intelligence that benefits everybody in society.
Thoughts on politics and AI, and how this plays out differently in the Chinese political system than in the US and the West.
The importance of dialogue, inclusion, and having the right policies in place.
How data nourishes algorithms, and its role in the digital economy.
How siloed data is held by companies that don't share its value with the data providers.
Why George believes the current problem is too little innovation rather than too much.
Why the impact on innovation should be considered in a merger.
The concept of a data trust: how we can create an organization that acts as an intermediary between data providers and data consumers.
How a data trust may be similar to a pension trust.
The potential benefits of a data trust for quality of life and human happiness.
How knowledge is increasingly overtaking capital and labor as the key element needed to create value.
Why we should look into how we can become part of this new economy instead of resisting it.
Advice and opportunities for legal professionals in the area of AI.
The need for more dialogue between lawyers and technologists, entrepreneurs, and business people.
Tweetables:
“I’d like to challenge the central dogma of AI, and that’s what I’m doing in my book, and trying to find ways whereby AI systems are embedded in a social contract so that they are not the ‘other’ but are our partners and we are the ones that set the goals.” — @zarkadakis [0:17:09]
“I too think that there is an enormous opportunity that is presenting itself to us as societies to really take these new technologies and create abundance.” — @zarkadakis [0:19:32]
“The big question there is how do we create some governance around this fantastic thing called artificial intelligence that can benefit everybody in society and not just a few?” — @zarkadakis [0:20:14]
“Rather than fighting the capitalists — we should look into how can we become part of this new economy which is all about knowledge and knowledge is translated in data and algorithms.” — @zarkadakis [0:38:34]
“As technologists, as entrepreneurs, as business people, we can’t imagine the future without understanding that society should be governed by law and that law reflects how people want to transact with each other.” — @zarkadakis [0:41:14]