Google’s new ‘Robot Constitution’ to protect humans from robots

Google has written a ‘robot constitution’, one of several measures intended to limit the harm robots can cause.

The company hopes its DeepMind robotics division will one day produce a personal-assistant robot able to follow everyday instructions — for example, being asked to clean the house or cook a good meal.

But even such seemingly simple instructions can be beyond a robot’s understanding, and getting them wrong can be genuinely dangerous. A robot may not realise, for instance, that cleaning the house too vigorously could harm its owner.

The company has now introduced a set of new developments, which it hopes will make it easier to build robots that can help with such tasks without harming humans.

It said: ‘These systems aim to help robots make decisions faster and better understand and navigate their environment to do so.’

Among the new developments is a system called AutoRT, which uses artificial intelligence to understand human intentions. It does so using large models, including a large language model (LLM) of the kind that powers ChatGPT.

It works by feeding data from cameras mounted on the robot into a visual language model, or VLM, which understands the environment and the objects in it by describing them in words. That description is then passed to an LLM, which generates a list of tasks that would be possible with those objects and decides which of them the robot should carry out.
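The camera-to-VLM-to-LLM flow described above can be sketched as a simple pipeline. This is a minimal illustration only: the `vlm_describe`, `llm_propose_tasks` and `llm_choose_task` functions below are hypothetical stand-ins for the real models, and the keyword matching is an assumption made for the sake of a runnable example.

```python
# Illustrative sketch of an AutoRT-style pipeline:
# camera frame -> VLM scene description -> LLM task proposals -> chosen task.
# All model calls are stubbed; this is not DeepMind's implementation.

def vlm_describe(frame):
    """Stand-in for a visual language model: describe the scene in words."""
    return "a kitchen counter with a sponge, a cup and a mug of coffee"

def llm_propose_tasks(scene_description):
    """Stand-in for an LLM: list tasks possible with the objects described."""
    tasks = []
    if "sponge" in scene_description:
        tasks.append("wipe the counter with the sponge")
    if "cup" in scene_description:
        tasks.append("move the cup to the sink")
    return tasks

def llm_choose_task(tasks):
    """Stand-in for the LLM's final decision: pick the first viable task."""
    return tasks[0] if tasks else None

def autort_step(frame):
    scene = vlm_describe(frame)
    tasks = llm_propose_tasks(scene)
    return llm_choose_task(tasks)

print(autort_step(frame=None))  # -> wipe the counter with the sponge
```

In the real system each stage is a learned model rather than keyword matching, but the shape of the loop — describe, propose, decide — is the same.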


But Google also said that before such robots can be integrated into our daily lives, people need to trust that they will behave safely. To that end, the LLM that makes decisions within the AutoRT system is guided by what Google calls a ‘robot constitution’.

Google says it’s a set of ‘safety-based guidelines when choosing tasks for robots’.

According to Google, the rules are inspired by Isaac Asimov’s three laws of robotics, the first of which is that a robot may not injure a human. Further safety rules require that no robot attempt tasks involving humans, animals, sharp objects or electrical equipment.

The system can then use these rules to guide its behavior and avoid any risky activity.

This is similar to the way ChatGPT can be instructed not to assist people with illegal activities.
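The constitution’s role in task selection can be pictured as a filter over the LLM’s candidate tasks. The rule list and keyword matching below are illustrative assumptions, not DeepMind’s actual mechanism — in AutoRT the rules are expressed as prompts to the LLM rather than hard-coded checks.

```python
# Illustrative sketch: rejecting candidate tasks that touch categories the
# 'robot constitution' forbids (humans, animals, sharp objects, electrics).
# The keyword list and matching are assumptions for the sake of the example.

FORBIDDEN_KEYWORDS = ("human", "animal", "knife", "scissors", "socket", "plug")

def violates_constitution(task: str) -> bool:
    """A task is rejected if it mentions any forbidden category."""
    lowered = task.lower()
    return any(word in lowered for word in FORBIDDEN_KEYWORDS)

def filter_tasks(tasks):
    """Keep only the tasks that pass the safety rules."""
    return [t for t in tasks if not violates_constitution(t)]

candidates = [
    "move the cup to the sink",
    "pick up the knife",
    "hand the scissors to the human",
]
print(filter_tasks(candidates))  # -> ['move the cup to the sink']
```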

But Google also cautioned that these large models cannot be relied upon to be completely safe on their own. It therefore still had to incorporate more traditional safety systems borrowed from classical robotics, including one that stops a robot from applying too much force and a physical switch that lets a human supervisor shut it down.

Join Independent Urdu’s WhatsApp channel for authentic news and current-affairs analysis.


2024-08-31 02:24:22

