
Harnessing the Positive Power of AI as a Lawyer: Get Amplified, per Former Google CEO and Executive Chair, Eric Schmidt

September 25, 2023

I was blessed to catch a CNN interview with Former Google CEO and Executive Chair, Eric Schmidt, earlier this year.

While most folks, including the CNN reporter, seem focused on the dangers of AI, Schmidt sees its benefits.

Schmidt’s take is an important one for lawyers and law firms, who, along with the people they serve and society at large, have so much to gain through AI.

No question Schmidt sees the risks of AI without the proper guardrails.

Here’s the lion’s share of what Schmidt had to say (emphasis added):

The benefits of AI include an AI tutor, an AI doctor, helping everyone get smarter, all available globally, lifting people from poverty, solving climate change, accelerating drug discovery.

The power and innovation that has been released in this lifetime is phenomenal.

At the same time we are concerned about potential risks. As these models get larger they take on properties that are concerning.

The ability to change from one medium to another, such as voice cloning, is just one concern.

Imagine a situation where you could do cyberattacks. It has learned how to do this, not because it was programmed, but because it has encountered the information.

What the industry does today is put guardrails on the system. We’re very concerned that the guardrails need to get set right and that those guardrails need to be applied everywhere.

Work changes? Some of my friends [lawyers] write documents and they say “edit this” and GPT-4 produces a better version. That’s AI acting as an amplifier.

In most cases, AI will make you quicker, more efficient, smarter, a better communicator, whatever it is that you are doing. You are amplified.

The same will be true for physicists, chemists, teachers, poets and so forth. That’s all good.

The issue is that the “same person” gets amplification to spread evil.

Look at what happened in 2016 when the Russians used a series of people to create fake identities to “flood the zones.”

We are going to have a heck of a year in 2024. What the Russians did in 2016 with false identities could be done by a single person in 2024, flooding social networks with fake identities.

Today, the tools are available to one bad person, but the new stack could flood an entire system with false information that looks legitimate and spreads through social media.

Social media is largely not regulated, and we see the consequences. There is agreement between industry and government that we should not make the same mistake with AI.

The US government is beginning to think about it. It’s a good process.

This technology will be regulated in some way, because of its potential danger.

Just don’t regulate it out of existence, in which case we won’t realize the benefits.

How bad? This technology could attack a whole country with a cyberattack. “Do it until everybody is dead.”

Imagine you want to kill a million people, “show me a biological path to do it.”

These are the dangers that we have to make sure do not happen. We need to put on guardrails and limits.

People are working on these problems, but we don’t understand the solutions yet.

Scary stuff? Very.

But recognize the good that can be done with the effective use of AI.

Many of us went to law school to change the world for the better. To provide greater access to justice. To provide greater access to legal services. To hold those doing wrong accountable.

By learning how to use AI to achieve these goals, our work will be amplified, as Schmidt says. We’ll have a fighting chance, something we’d never have had otherwise.

That’s the good in AI. You get amplified.