The Problem With AI is...

The hype about AI is real. It will change the world but it is not perfect, so here is my take on the main problem.

Jonathan Worsley

2/2/2025 · 1 min read


Hallucinations

Anyone working with AI knows that unexpected behaviour is always possible.

A pocket calculator will always add up correctly.

A conventional software program will always do its job, and if it can't, it will crash.

Real AI can change its mind, do something different, and return an erroneous result, and you may never know. These events are called "AI hallucinations."

This is because probability is "baked in" from inception: the model's output is sampled from a probability distribution rather than computed deterministically.
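That "baked-in" probability can be sketched in a few lines. This is a toy illustration, not any particular model's code: the function name and logits are made up, but the mechanism (softmax over scores, then a random draw) is how language models pick their next token.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Draw a token index from a softmax over raw scores (logits).

    Toy sketch: unlike a calculator, the answer is a random draw from a
    probability distribution, so the same input can yield different outputs.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]              # softmax probabilities
    return rng.choices(range(len(probs)), weights=probs)[0]

# Three candidate "tokens" with made-up scores: the most likely one
# usually wins, but the others can and do get picked sometimes.
logits = [2.0, 1.5, 0.5]
picks = [sample_next_token(logits, rng=random.Random(seed)) for seed in range(20)]
```

Lowering `temperature` concentrates probability on the top score and makes output more repeatable, but nothing short of picking the maximum outright removes the randomness entirely.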

Is the Risk Manageable?

The big question, from small AI projects to global AI alignment, is: "Can we manage the risk?"

Is the AI tool in alignment with my needs / my business needs / the survival of the human race?

Given the vast complexity of the AI systems in play, the answer can only be one of hope, not fact.

This is the big problem with AI today, and forevermore.

Managing Risk

AI Risk Assessment will be the future.

My guess is that the regulation of AI will follow the same recipe as GDPR: self-assessment.

If, and when, you launch an AI project, I would advise commissioning a robust, third-party risk assessment.

It should demonstrate that you have carefully considered and mitigated:

  • What could happen

  • What damage could result from that event

  • The response to a misalignment

Consider this good practice: ethical, sensible, a*se-covering advice.
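Those three bullet points map neatly onto a risk register, the standard artefact a third-party assessor would produce. A minimal sketch, with entirely hypothetical field names and an illustrative likelihood-times-impact score (a common convention in risk frameworks, not a mandated one):

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a hypothetical AI risk register."""
    event: str        # what could happen
    damage: str       # what damage could result from that event
    response: str     # the response to a misalignment
    likelihood: int   # 1 (rare) .. 5 (frequent)
    impact: int       # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring to rank risks
        return self.likelihood * self.impact

# An illustrative entry, not real assessment data
register = [
    RiskEntry(
        event="Model hallucinates a fact in customer-facing output",
        damage="Reputational harm, possible liability",
        response="Human review of high-stakes outputs; published correction process",
        likelihood=4,
        impact=3,
    ),
]

# Tackle the highest-scoring risks first
highest_first = sorted(register, key=lambda r: r.score, reverse=True)
```

Even a spreadsheet version of this is enough to show a regulator that the three questions above were asked and answered before launch.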

Be Compliant

The government has always lagged behind the private sector; that is not a criticism, just a fact of life.

When the government doesn't fully understand the complexity of the problem, or when the industry is moving too fast to lock down useful legislation, the only tool in the bag is "self-assessment".