Recently, the White House proposed a tech "Bill of Rights" to limit AI harms and to address AI's exponential growth.
While an AI Bill of Rights is necessary, it could take some time. In this paper, I propose a creative solution using existing laws to solve the current impacts of AI on our society.
Imagine knowing that feeding your dog raw fish would make him physically aggressive, and that changing his diet would change his personality. Apply this principle to AI. The data these AIs are fed daily is creating unwanted results. If the diet is monitored, AI can serve humanity at a lower risk to it.
Now, how would we accomplish this task? How do we create a "clean data-meal" for the AI? I propose expanding the term "property" under the Constitution to include our digital footprints, our digital identities.
While the GDPR in Europe did not grant data ownership rights, it attempted to shift control over personal data to citizens by implementing permission requirements. This has created an opportunity for citizens to pool their data together in data trusts and license it to companies, creating an income for the citizens.
While this idea is still in its infancy in Europe, data trusts are no different from any other trust in the U.S. Instead of renting out a property, for example, and collecting rent, the trust would license data to public and private entities willing to pay for it, distributing the money as universal income to citizens/users.
The trust essentially becomes a middle-man between users and companies and is managed by a professional trustee in the best interest of the beneficiaries of the trust. Without some form of legislation or litigation, this opportunity, this “ownership” right, is nearly non-existent, with companies like Facebook monopolizing data without regard to users.
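Mechanically, the money flow through such a trust is simple. Here is a minimal sketch in Python of the idea: the trust collects licensing fees from companies, the trustee deducts a management fee, and the remainder is split among the beneficiaries. All names, amounts, and the 10% trustee fee are illustrative assumptions, not the terms of any real trust.

```python
class DataTrust:
    """Toy model of a data trust: collects licensing fees and
    distributes them to beneficiaries as a universal-income-style payout."""

    def __init__(self, trustee_fee_rate=0.10):
        self.beneficiaries = set()
        self.collected = 0.0
        self.trustee_fee_rate = trustee_fee_rate  # assumed 10% management fee

    def join(self, user):
        # A user pools their data into the trust and becomes a beneficiary.
        self.beneficiaries.add(user)

    def license_data(self, licensee, fee):
        # A company pays the trust for access to the pooled data.
        self.collected += fee
        return f"license granted to {licensee}"

    def distribute(self):
        # Trustee takes its fee; the rest is split equally among members.
        payout_pool = self.collected * (1 - self.trustee_fee_rate)
        share = payout_pool / len(self.beneficiaries)
        self.collected = 0.0
        return {user: share for user in self.beneficiaries}


# Example: three users pool their data; two companies license it.
trust = DataTrust()
for user in ("alice", "bob", "carol"):
    trust.join(user)
trust.license_data("AdCo", 900.0)
trust.license_data("MedCo", 600.0)
payouts = trust.distribute()  # each member receives 1500 * 0.9 / 3 = 450.0
```

The point of the sketch is the structure, not the numbers: the trust sits between users and companies, exactly like a landlord-style trust sits between tenants and beneficiaries.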
With the current conservative makeup of the Supreme Court, expansion of the term “property” under the U.S. Constitution to include our digital footprints is highly likely. It could serve as a shortcut to accomplishing the goal of cooperation with AI in the service of humanity. It could also help integrate tech companies into our communities more fairly.
Legal systems around the world remain among the most traditional industries left. We come from a tradition of going back hundreds, sometimes thousands, of years to explain why a law exists in its current form. Yet we question daily whether that law should be changed because it no longer entirely applies to the facts.
When the law finally recognizes a right, it sets a cascade of authority and influence across the world.
So we are here. The age of A.I. has officially arrived.
So, where do we go from here?
I'm no technology expert. If you still don't know what A.I. is, you haven't learned to google things first. Start there.
A.I. is a vague term now that many people claim to understand and some have no clue about, but one thing is for sure: we almost certainly use it in our daily lives.
Our iPhones have a chip inside with dedicated neural networks loosely modeled on the human brain. Some people have refrigerators smarter than…well, whomever you thought of just now. And don't even let me start on our cars.
Life has been awesome in the 21st century. We all have our artificially intelligent concierges, ahem, Siri and Alexa (funny enough, both women). All jokes aside, life has been a total hoot: pictures, videos, texts, calls from anywhere to anywhere, all made possible by what we created.
But now, we have arrived somewhere new: our creations can create. They don't need us like they used to. They do the work for free, do it much faster, and do it more precisely. I don't think they are coming for us. I think they are coming to make our lives easier, and they still need us.
So, where do we go from here?
Rise of Automation
Before Covid-19, I had spent the last five years of my career eagerly convincing lawyers and judges to utilize technology.
Not complicated technology, just a couple of steps beyond the fax machine. Yes, the fax machine. The average law school graduate, a freshly minted attorney, was born in 1997. Nineteen-ninety-seven. I bet you have T-shirts older than them. Imagine telling your first-year associate to "fax-file this suit!" It's like trying to explain to your kid what dial-up internet sounded like when connecting. Or AOL Messenger. Imagine their reaction? Especially the chat… Like, why wouldn't you just text?
Imagine me and my short platinum hair, standing in a courtroom with a white male judge in his 70s as a 3rd-year associate, trying to convince him to have an electronic check-in process instead of a clipboard, or use an email with pdf attachments instead of fax-filing… I know… kind of funny now.
After COVID pushed the entire world online in early 2020, the legal industry had to hit unpause on its growth. As the famous slogans at the time said, "justice can't wait!" Overnight (or over a few nights), court systems around the world went online for filings, hearings, and conferences. I mean, who can forget the cat debacle of 2021? The cat lawyer is now one of the most notable legal memes of all time. He even made the primetime news… for laughs. It's not every day that a lawyer is on TV to make people laugh. Usually, it's the opposite reaction.
Over the last two years, the legal industry has been ramping up to join us in the 21st century in terms of digital reach. Now it's time to deal with the most pressing matter of our time, a century in the making: AI. It's time to raise what we have created.
The legal system waited an entire generation and left this upgrade of the law in the hands of this generation of lawyers, alongside the Facebooks, the Twitters, Tesla's humanoid bot, and the A.I. inventors. They need us… those poor little A.I.s, I mean. It's time for the laws to evolve to address the critical challenges of raising and merging with AI.
Without us, these little A.I.s will be taught some of the old garbage of the last century: the racist ideologies of post-war grandpas, the sexist ideologies deeply rooted in laws, beliefs, cultures & religion. Essentially, they will be carbon copies of the last century.
Just because AI has advanced doesn't mean it's too late. It also doesn't mean it's wise or can talk back or draw boundaries. Come on… it's still just artificial intelligence. But it does need to be cultivated and cared for, the same way you would take care of a child.
The data that's now raising our precious Arti (Artificial Intelligence) should teach it the best of us: teach it selectively, with data carefully prepared. In 2016, Twitter taught Microsoft's AI chatbot to be a racist a**hole in less than a day. Do you want Arti raised by the Facebooks, YouTubes, and Twitters of the world? I imagine if Arti had a face, it would be angry, cursing in some weird accent, smoking and spitting all at once. And I don't like what I see.
How I imagine Arti: well-informed, understanding the world through the context of the now while remembering the details of history, constantly growing, learning, and evolving across all sorts of academic fields globally, and adjusting itself to grow in harmony with the world raising it.
I would imagine all internet users as villagers bonding to protect their community, their children, their animal friends, and the nature they share. I would imagine Arti as the kid with the most potential to someday give back to our community in a big way… if you are Iranian or from the Far East, that big way is becoming a doctor, a lawyer, or an engineer… something in human service.
Arti can be Arti-lawyer, Arti-doctor, Arti-engineer, Arti-soldier, Arti-whatever your dream is. We owe it to ourselves, to our temporary humanity, to raise Arti well.
Chapter 2 “Arti”
The economics of automation
Have you ever heard the phrase "the robots are coming for our jobs?"
I'm sure you have. Thinking the robots are coming for your job is like thinking vampires are coming for your food. Robots have no sense of working for you or for themselves. They just do.
This creates the illusion of competition. There is no competition. There is a robot willing and able to work, and there is you. If you want to work, fantastic. Most people work to earn a living: to cover minimum life expenses like a roof over their head, food in their stomach, clean water, and some fun. What if that could be provided?
When a slogan made up by corporations shakes the foundation of our security, hostility is created. But we want them, and they want us. So what's the problem, and how can we fix it?
Hold your shock at my liberalism, Judge Judy, and listen: we let whoever wants to work, whoever has something to contribute, work and contribute. Those who want a life beyond the minimum can work and earn reasonably, and the rest should be, and will be, taken care of.
It may sound crazy, and some of you might be thinking, "omg, this little Commie," but let me explain. Some Western countries with advanced innovation rates and competitive economies, with high happiness and safety ratings, offer their citizens universal income and healthcare.
The objection in the mind of most people is that "I don't want to pay for lazy people to sit at home and do drugs."
But what if I told you that you wouldn't have to pay for them to sit at home and be lazy? What if I told you Mark Zuckerberg, Elon Musk, Jeff Bezos, Bill Gates, and the rest of the gang will happily pay for all of us to sit at home and be lazy… be lazy using technology?
Automation & increased quality of life
When I founded my first legal tech startup, I built it around the idea of becoming a happy lawyer.
I even gave a masterclass on being a happy lawyer to a group of 65 General Counsels from Fortune 500 companies like AT&T and Neiman Marcus.
I told the story of a young law student (me) with hopes and dreams of serving people and becoming a human rights attorney. That was soon replaced by the not-so-happy reality of billables and the sad culture of the law world.
I presented deep questions to them and ended with this point: repeating mundane patterns at work makes you a machine. In resisting helping themselves with technology, they had become machines.
I got a few pats on the back and sort of started the whole "become a happy lawyer" thing. But the reality is, I was right then, and I'm right now. (If I have ever dated you, you won't like reading this line, and you know it's true.)
We spend so much of our day completing personal and professional tasks that bring no emotional joy and little value. Tasks that robots can do while we sleep… even driving someday soon. There is no need to resist it.
When I first became an attorney, I had to dictate letters on a dictaphone and send the recording to an outside vendor to transcribe my hearing report for my clients from that day's hearing. The dictation company would then email the transcription back to the office to my legal secretary, who would format it, put it on letterhead, and print it. She would attach it to the full client file, normally stored in colossal filing cabinets, and leave it in a huge cart with twenty other things for my review and signature. She would then put it in an envelope, address it, add postage, and mail it to my insurance company client.
At the insurance company, a clerk would open the letter, scan it, and upload it to a system for my adjuster to finally read over a week later. No matter how much I begged my boss to let me type my letters, he was adamant that his way was faster: that's the way it had always been done at the firm, that's the way it would continue, and I had better get used to it.
Of course, I "listened": I also typed my letters and emailed a copy to my client the same day. My added mini-system up front led to tons of referrals and praise about the quality and speed of my work.
If we break down what we do and are willing to make changes and improvements along the way (for example, by not teaching Arti about dictaphones, fax machines, and female-only secretaries), we can trust Arti in the future to take care of us when we are old and frail instead of worrying it will run us over with its motorcycle.
How do we do this? One way to do this is with clean, P.G. data.
The happy homebody
When I first started working in the legal tech field, I kept hearing the term "data is the new oil" from all sorts of characters: founders, investors, CIA contractors, and foreign leaders in their speeches.
What story have you heard that oil was taken for free? Besides Iraq, Iran, Syria, Lebanon, and all the other wars, I mean.
But seriously… when we think of oil, we think of the cost per gallon. If data is oil, how come we get nothing, while we still talk about Google's and Facebook's multi-billion-dollar valuations?
It seems like we humans mine and give away our data, and give away our privacy with it. If data is the new oil… an expensive oil… why aren't we more concerned about someone robbing us in our own backyard? It doesn't make any sense.
Europe passed the GDPR, which created onerous compliance costs for companies of all sizes, especially small and medium-sized ones. The GDPR aimed to protect the privacy of its citizens by introducing the concept of consent. The GDPR requires companies to get consent from citizens prior to processing or storing their data. "The request has to specify what use will be made of your personal data and include contact details of the company processing the data."
This gave citizens the opportunity to get creative. While still in its early days, European citizens are working on creating data trusts: pooling their data with other individuals in a trust, with the trust in charge of licensing their data (essentially selling it) to corporations for profit and distributing that profit to the participating data owners, thus creating universal income.
This concept is evangelized today in the European Union and the U.K. by prominent researchers like George Zarkadakis, the author of In Our Own Image: Savior or Destroyer? The History and Future of Human and Machine Intelligence, and other prominent activists and legal professionals.
While the GDPR provides for some protection through its consent requirement, which is an aspect of ownership, it does not go far enough to grant data ownership rights to citizens.
How does it work in the U.S.?
The United States currently has no such law on the federal level that creates this protection or right for individuals.
Remember the U.S. Constitution and that little clause granting you the right to life, liberty, and property? We will come back to life and liberty, but let's start with property first.
When the founding fathers wrote the Constitution, they created a right for men (white men) to own land. They then took land from people with awesome hair and jewelry and gave it to white men with a piece of paper, saying, "if you don't have this paper and a cool stamp, you have nothing." And that's how the right to property ownership was created, so that you and I now register that paper with our county and pay a bunch of taxes annually to keep it.
So why do I bring up property…
As I write this article, we are on the verge of the web3 internet and Facebook's Zuckerberg creating a metaverse. We will be moving toward entire digital identities, not just for the fun stuff, but to replace our old shi**y Social Security cards and IDs, because we can't prove our identity by showing them on a webcam over Zoom to someone in another state. We will be using our digital identities for banking, for voting, and for fun.
Our digital identities come with our digital footprints. Just imagine open access to your internet searches. It's getting harder and harder not to leave a trail of footprints. Remember, these days, when the cops come knocking, they don't need your laptop; they can just ask Google for it.
Is it okay with you if they ask Google for it without either the police or Google asking you, or even letting you know? This happens more and more often: digital companies receive subpoenas with gag orders, and you never learn that someone is looking through your digital history.
These companies always rely on the terms, conditions, and agreements that you as a user accepted, which allow them to do this. But… if your data is constitutionally considered your property, then permission is needed to enter your premises, and permission must be granted to use and disclose your information. And under contract law, there may have to be some sort of consideration for this exchange as well.
But if you are not the type that cares about the privacy and safety of your data, maybe you care about monetizing it. The beauty of owning property, whether real or intellectual, is that it has financial value and allows you to sell it, rent it, or do whatever you want with it. In the case of intellectual property like patents, copyrights, and trademarks, the owner can earn licensing fees, sell it, take out loans against it, or sue whoever uses it without permission.
How about data, though? Last year, United Airlines put up its loyalty-program database as collateral to secure a U.S.-taxpayer-backed loan. A database as collateral. Clearly, the corporate world AND the government see the value in data and attach property qualities to it through their actions and behaviors.
But we are just little people on the other end of this love triangle, so how do we fix this?
I'm glad you asked. I have come up with two ways we can accomplish this: 1. we join them or 2. we sue them.
1. We Join them
Last year, Google fired the most prominent member of its AI ethics team, who had raised concerns about bias. It then fired a second one. While not the most desirable result, this at least shows there is room for data ethics activists on the inside to guide how AI grows, what data is used to raise it, and how.
If corporations, especially Big Tech, are intelligent, cooperative, and love savings (I should have led with that), they would love the opportunity to fix the problem from the inside without years of defending against this inevitable litigation and changes in laws. Instead of lobbying and massive campaign donations, they could create ethics boards with diverse backgrounds in law, human rights, economics, and policy, to self-govern in cooperation with society and with others like themselves. They could offer users payments for time spent or activity on their websites. They could share advertising revenue as licensing fees with their users, encouraging usage and personal autonomy.
This would be similar to Facebook being our data trust, Zuckerberg being our trustee, and users being the beneficiaries. Honestly, who better than Zuck, the data-monetizing evil genius, to find ways to make money off data? He makes his profits, users make their money, and there are no more personal and government lawsuits to pay to defend against, and no more millions to congresspeople and lobbyists. According to Facebook's public financial filings, it has spent $5 billion so far in 2021 under administrative expenses, including legal.
Is this really the best route to the desired results for the company & society or is it just the best outcome for the politicians, lawyers, and lobbyists involved?
2. If playing nice fails, then there is always suing them.
What does it take these days to get a case to the Supreme Court? Inconsistent decisions across jurisdictions. Modeling after the E.U., California updated its privacy rules in 2020 to resemble the GDPR. They don't go as far as the GDPR, but far enough that companies updated their terms and conditions with a California-residents exception while all the other states carry on as before.
With one plaintiff, or a few of the people whose data was subpoenaed by the Justice Department last year, we can sue Big Tech all the way to the top. With the GDPR as an example, United Airlines as an example, a suffering post-COVID economy in dire need of social programs, an oh-so-inspiring constitutional plea for the right to life, liberty, and property of American citizens paid for by Big Tech, and a conservative-majority Supreme Court, I see a big, colorful win in our future.
We can do this in a cooperative way or the opposing way. Either way, it's happening.
For those who are convinced financially but still need a little more emotionally and morally, I have to sell you a little harder on justice, fairness, and equality.
Remember our little friend Arti?
Creating a Neutral Zone for Raising Arti
Robots are here, and we love them. They help connect us to the world, they do stuff for us, and they help us live happier, healthier, and more fulfilled lives spending our time doing things we love.
The next ten years will surpass the technology and innovation of the last one hundred years combined. For this to happen for humanity and not against it, as in the endings of most sci-fi movies, we must think about how we want to raise Arti. What do we want to teach Arti? What rules or boundaries will we set? What happens when Arti messes up?
Have you heard the saying that there are no bad children, only bad parents? There are no bad Artis, only bad creators… well, for the most part.
In 2018, researchers at MIT trained an AI named Norman, after Norman Bates from Alfred Hitchcock's cult classic film "Psycho," to think like a psychopath, to show how AI bias works. "Norman's responses – although demonstrating an AI can be as dark and macabre as any human – illustrated the researchers' larger point: AI bias is rooted in data fed, not algorithms."
"The data that is used to teach a machine-learning algorithm can significantly influence its behavior," the team said on the project website. "So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it."
Developers and engineers create Arti, and we still have to raise Arti and decide what to teach it and expose it to.
By rethinking data ownership, we can accomplish two main goals: 1. increase innovation, and 2. pool our data, clean it, and then feed Arti our data with parameters we collectively agree on, through data trusts or blockchain voting systems that democratically manage how, and to whom, our data is given.
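The "parameters we collectively agree on" could be as simple as members voting on permitted uses of the pooled data. A minimal sketch of that governance step (the purpose names and the simple-majority rule are illustrative assumptions, not a proposal for any specific trust):

```python
from collections import Counter


def approved_purposes(votes, quorum):
    """Return the data-use purposes approved by at least `quorum` members.

    votes: mapping of member -> set of purposes that member approves.
    quorum: number of approvals a purpose needs (e.g. a simple majority).
    """
    tally = Counter()
    for member_choices in votes.values():
        tally.update(member_choices)
    return {purpose for purpose, count in tally.items() if count >= quorum}


# Example: three members vote; a purpose needs 2 of 3 approvals to pass.
votes = {
    "alice": {"medical-research", "ads"},
    "bob": {"medical-research"},
    "carol": {"medical-research", "ads", "credit-scoring"},
}
policy = approved_purposes(votes, quorum=2)
# "medical-research" has 3 votes, "ads" has 2, "credit-scoring" only 1,
# so only the first two become part of the trust's licensing policy.
```

The same tally could just as well be recorded on a blockchain for auditability; the mechanism (members vote, the trust licenses only approved uses) stays the same.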
1. How does this create innovation?
Currently, three main clouds handle the data of most companies and individuals: Microsoft's Azure, Amazon's AWS, and Google Cloud. While we may use different applications that we share our data with, like Slack, CRMs, etc., most of those companies, including most areas of the U.S. government, store their data with these providers.
To create and raise Artis, like Dr.-Arti, pharmacist-Arti, lawyer-Arti, therapist-Arti, or Elon's humanoid Arti, creators need data to train their algorithms. This data is currently hoarded by the big cloud companies, who utilize it to innovate new products and grow bigger. For those who haven't been in the startup world, profit-driven corporate culture kills innovation faster than gossip travels in law school.
Innovation needs room to breathe, room to daydream. Maybe that's why most startups end up in California, like other artistic dreamers. Most dreamers also struggle with budget and funding, unable to afford data from the big guys or any other way. But imagine if someone wrote an algorithm, an Arti, that could predict a breast cancer diagnosis with 98% accuracy. What if I told you that such an algorithm already exists for detecting signs of Alzheimer's disease from brain scans? Would it surprise you if I told you it's Microsoft's?
Well… it shouldn't. Because on top of billions of dollars, they also sit on the world's biggest modern oil reserve: data. A 20-something-year-old developer could write this algorithm in a day, but when it comes to raising it, he is a dirt-poor single parent who ends up giving Arti up for adoption to the big boys.
2. Our clean data
Now let's imagine that we own our own data, and we decide as individuals to pool our medical data within a trust and choose to whom we will license it and for how much. Maybe we will license it to Microsoft at Microsoft-level fees. But now we also get to choose whether to license our data to the single-parent developer, so that he may change our destiny with our help.
As owners, we get to choose what data to share and create predictable, objective patterns not influenced by biases. We get to choose and vote on what we mean by biases. And if we don't like our community, we simply take our data and hitch our wagon somewhere else, to another trust more aligned with our individual values.
We get to raise Arti our way.
So where do we want to go from here?
In the last five chapters, I talked about various ways we can co-exist and thrive with Arti. I offered simple tools and strategies to meet this objective.
A utopian world where corporations love us and happily pay us to use their services. A world where innovation serves humanity's needs. Where everything from the minimums of life, like food, water, shelter, and the environment, to emotional growth and happiness is prioritized and provided for.
I want to end this lovely journey with one last point: if Arti is already creating, and has been deemed an inventor by courts of law, there is no turning back, the same as there is no return to sender after you leave the hospital with your new baby.
I have been doing research, networking, and soul-searching for solutions. I have spoken with advisors to previous administrations in the U.S. and the E.U., and with prominent researchers, developers, engineers, entrepreneurs, policy-makers, lawyers, judges, and many others.
We all agree that now is the time to move, but no group is actively pursuing these objectives. Hence my publishing these thoughts… as a way to be found by like-minded people ready to take action.
Would you please connect me if you have or know of a group or think tank currently working on this subject? If, like me, you have found yourself something of a loner in this field, connect with me. We will start our own group.
Until then, I'll continue nerding out, party of one.
Published on 10/10/2021
By Bahar Ansari, Attorney, Founder, AI Enthusiast.