
AI Experts Start to Fear Their Creations

As anyone still waiting for the metaverse knows, innovation in other technological fields can feel sluggish. AI, however, is advancing rapidly, and that rapid progress draws ever more investment and computing power from the companies pouring resources into it.

We have reached the point where powerful AI systems are legitimately frightening to interact with.

Experts keep designing machines that are more capable and more general, and some companies are explicitly aiming for artificial general intelligence (AGI): systems that can do everything humans can.

Creating machines that are smarter than us and capable of lying to and deceiving us is a foolish plan. We need to design systems whose inner workings we understand, so that we can give them safe goals. Right now, though, we do not understand the computers we're building well enough to ensure they are safe before it's too late.

The current state of AI safety research is abysmal compared with the rapid pace of AI development. Unless something changes soon, we may find ourselves unable to control the very machines we have created.

In 1958, Frank Rosenblatt demonstrated a proof of concept: the perceptron, a model that emulated simplified brain functions to recognize patterns. With this evidence, he argued, "It would be possible to mass produce self-aware brains."
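
To get a feel for what that 1958 model amounted to, here is a minimal sketch of a perceptron in Python: a weighted sum, a hard threshold, and an error-driven update rule. The toy OR-gate data and the learning-rate and epoch settings are illustrative assumptions, not anything from Rosenblatt's work.

```python
# Minimal perceptron sketch: a weighted sum passed through a hard
# threshold, trained with the classic error-driven update rule.

def predict(weights, bias, x):
    # Output 1 if the weighted sum clears the threshold, else 0.
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s + bias > 0 else 0

def train(data, epochs=20, lr=0.1):
    n = len(data[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in data:
            error = label - predict(weights, bias, x)
            # Nudge weights and bias in the direction that reduces the error.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy, linearly separable task (hypothetical example): logical OR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # expected: [0, 1, 1, 1]
```

Modern deep learning stacks many such units with smoother nonlinearities and gradient-based training, but the basic ingredients are recognizably the same.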

This technique, now known as deep learning, has begun decisively outperforming other approaches: neural network-based systems have surpassed every rival technique in category after category, from computer vision to language translation to prediction. The shift was about as subtle as the asteroid that wiped out the dinosaurs.

We've made some progress with intelligent systems, but ultimately we just keep making them bigger instead of smarter.

AI is scary. But it is not the only technology with risks: biotechnology and nuclear weapons carry their own dangers. So what makes AI different?

With those technologies, the destructive capability remains within our control. If they cause mayhem, it is because we humans chose to use them that way, or carelessly allowed access to people who mean harm.

AI, however, becomes dangerous at the point where we can no longer control it: when the power shifts from human beings to machines.

For many years, AI safety was treated as a research field for a problem too far in the future to worry about, so only a handful of researchers dedicated their time to making AI safe.

Now we suddenly have the opposite problem: the challenge is here, and it is doubtful whether we can solve it before its effects are upon us.

Articles related to the topic:

AI and intelligent technology will take over some jobs, but that will free up workers to do more challenging and important work.

From armed robot dogs to target-seeking drones, the use of artificial intelligence in warfare presents ethical dilemmas that urgently need addressing.

Artificial intelligence is poised to eliminate millions of current jobs — and create millions of new ones.