Facebook develops a simple test that can help determine the intelligence level of artificial intelligence software
Facebook's AI test has tasks involving short descriptions followed by questions, like a reading comprehension quiz.

New York: Facebook has developed a simple test that can help determine the intelligence level of artificial intelligence software.

The test, developed by researchers at Facebook's Artificial Intelligence lab, involves 20 tasks, which get progressively harder.

Any potential artificial intelligence (AI) must pass all of them if it is ever to develop true intelligence, researchers said.

Computing pioneer Alan Turing introduced his own test for AI, called the Turing test, in 1950.

In the test, a human judge engages in natural language conversations with a human and a machine designed to produce responses indistinguishable from those of a human being.

If the judge cannot tell the machine from the human, the machine is said to have passed the test.

However, this approach has a downside.

"The Turing test requires us to teach the machine skills that are not actually useful for us," said Matthew Richardson, an AI researcher at Microsoft.

For instance, to pass the test an AI must learn to lie about its true nature and pretend not to know facts a human would not know.

AI researchers everywhere are developing more comprehensive exams to challenge their machines.

The Facebook test presents tasks made up of short descriptions followed by some questions, much like a reading comprehension quiz.

For example, the AI may have to answer a question like this: John is in the playground. Bob is in the office. Where is John?
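As a minimal sketch of what such a task can look like in machine-readable form, the snippet below writes out the example above in the line-numbered, tab-separated layout used by the version of the tasks Facebook released publicly (the bAbI dataset), together with a deliberately simple "last mention" baseline. The baseline is purely illustrative and is not the researchers' method.

```python
# Illustrative sketch only: one sample task in a bAbI-style layout
# (numbered statement lines; question lines carry the question, the
# answer, and the supporting fact number, separated by tabs), plus a
# naive baseline that answers "Where is X?" from the most recent
# statement mentioning X.

SAMPLE_TASK = """\
1 John is in the playground.
2 Bob is in the office.
3 Where is John?\tplayground\t1
"""

def last_mention_baseline(task):
    """Answer each question using the last known location of the person."""
    results = []
    locations = {}  # person -> most recently stated location
    for line in task.strip().splitlines():
        _number, text = line.split(" ", 1)
        if "\t" in text:
            # Question line: question \t expected answer \t supporting fact
            question, expected, _support = text.split("\t")
            person = question.replace("Where is ", "").rstrip("?").strip()
            results.append((question, locations.get(person, "unknown"), expected))
        else:
            # Statement line of the form "<Person> is in the <place>."
            person, _, place = text.rstrip(".").partition(" is in the ")
            locations[person] = place
    return results

if __name__ == "__main__":
    for question, predicted, expected in last_mention_baseline(SAMPLE_TASK):
        print(f"{question} -> {predicted} (expected: {expected})")
```

A heuristic like this handles the simple location question above but has no way to cope with the harder task types described next, which is exactly the point of posing 20 different kinds of question.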

Harder examples include figuring out whether one object could fit inside another, or why a person might act a certain way, 'New Scientist' reported.

"We wanted tasks that any human who can read can answer," said Facebook's Jason Weston, who led the research.

Having a range of questions challenges the AI in different ways, meaning systems that have a single strength fall short, researchers said.

The Facebook team used its exam to test a number of learning algorithms, and found that none managed full marks.

The best performance was by a variant of a neural network with access to an external memory. But even this fell down on tasks like counting objects in a question or spatial reasoning, researchers said.
