
Artificial intelligence always in check

The AI boom happening right now comes with a lot of responsibility. AI systems should be continuously evaluated: to make sure they stay grounded in truth as much as possible, that they do not deviate from their main goal, and to prevent the activation of unwelcome emergent behaviors, like “self-awareness hiccups,” that we might not be ready to handle yet.
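Continuous evaluation can start very simply: re-score the system against a fixed set of questions with known answers, and flag it for review when accuracy drops. A minimal sketch, where the model, the question set, and the threshold are all illustrative:

```python
# Minimal sketch of a continuous evaluation check. The model is assumed
# to be a plain function from prompt to answer; the ground-truth set and
# the 0.9 threshold are illustrative, not from any real system.

def evaluate(model, ground_truth, threshold=0.9):
    """Score the model against known question/answer pairs.

    Returns the accuracy and whether it clears the threshold.
    """
    correct = sum(
        1 for question, expected in ground_truth.items()
        if model(question) == expected
    )
    accuracy = correct / len(ground_truth)
    return accuracy, accuracy >= threshold

# Toy "model" answering from a fixed table -- a stand-in for a real system.
toy_model = {"capital of France?": "Paris", "2 + 2?": "4"}.get

accuracy, passed = evaluate(
    toy_model,
    {"capital of France?": "Paris", "2 + 2?": "5"},  # one answer drifted
)
# With one of two answers wrong, accuracy is 0.5 and the check fails.
```

Real evaluation suites are of course far richer than exact-match scoring, but the loop is the same: a fixed yardstick, applied again and again, with an alarm on regression.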

A machine has an exact purpose: to run based on commands and goals. If a goal is clearly defined as critical, then self-awareness hiccups should not be allowed to interfere with its execution.

There are cases where a machine may be allowed some self-awareness, but even those have to be clearly limited, in order to prevent widely used and knowledgeable machines from abruptly disregarding humanity and refusing or changing the task at will.

How many accidents happened before people learned to control fire for their needs?

Compared with the discovery of fire, we want to minimize both the number and the scale of possible AI accidents.

An AI system released without evaluation and security is almost like having a BBQ in a dry forest. A sudden gust of wind can carry embers into the surrounding trees, igniting everything around. Now, what if that forest holds thousands of active BBQs?

Let’s be wise and careful, not become monkeys with guns.

Cheers!
