Shady lawyer dupes federal judge using artificial intelligence

Steven S. is a personal injury lawyer whose client was suing Colombia-based Avianca Airlines.

The client claimed that he was hit by a metal serving cart onboard a flight and suffered injuries.

During the court case, Steven cited six other cases as precedent for the claims, including Shaboon v. EgyptAir.

Trouble is, none of the six cases referenced by Steven ever existed.

The filings did include text purporting to come from those cases.

But they were bogus judicial decisions with made-up quotes and citations.

You see, the attorney representing the airline asked for information on the cases Steven cited.

So, the federal judge reviewed the citations, determined that the cases were fake, and said the court faced an “unprecedented circumstance.”

Once the court realized none of the cases even existed, it came out that a member of Steven’s law firm had used ChatGPT to conduct the research.

The artificial intelligence tool claimed that the court cases were legitimate.

After the mistake came to light, Steven submitted an affidavit to the court admitting he had used artificial intelligence.

Steven argued that he shouldn’t be sanctioned because he didn’t intend to deceive the judge and didn’t act in bad faith.

While the artificial intelligence software ChatGPT has soared in popularity, like any form of artificial intelligence, it’s far from perfect.

The scary thing is that governments around the world are looking for ways to replace human beings with artificial intelligence.

But this means that these “super-intelligent machines” could end up competing – and warring – against each other.

So, here are a few ways that wars could be fought using artificial intelligence and how it could affect you.

Miscalculations:

It’s no secret that militaries around the world make mistakes.

Everything from misguided bombs to inaccurate intel.

But as militaries increase their use of AI, there could be bigger miscalculations, with devastating consequences.

These mistakes could involve nuclear weapons and be more costly than anyone could imagine.

This is why it’s important to be prepared for all types of disasters, including nuclear war.

While the chances might be slim, the consequences of being caught flat-footed are devastating.

Which is why I always recommend preparing your home for nuclear incidents.

Data:

The way artificial intelligence works is by analyzing a ton of data.

For militaries to use AI they need access to data, including a lot of secret and classified information.

For example, militaries would need to train their AI to recognize weapons.

Not just weapons used by the U.S., but every weapon imaginable.

Additionally, your personal information could be collected by militaries.

Everything from where you work to where you live could be valuable from a strategic standpoint.

This is why it’s so important to protect your data.

The less information you share, the less of it can be used to train artificial intelligence.

Explaining decisions:

Artificial intelligence is trained to reach conclusions.

But often there is no way to explain how the AI reached a conclusion; it just does.

Militaries, however, are often required to explain their decisions and actions.

If a country attacks and harms citizens, those actions will be scrutinized.

Artificial intelligence would have a hard time explaining its actions.

And even if you aren’t in the military, you should always be prepared to explain your defensive actions.

For example, if a madman attacks you on the street, you need to be able to explain that you feared for your safety and why you reacted the way you did.

Which is why it’s important to always have a personal and home defense plan in place.
