Atari master

Atari is a new AI which, as the name suggests, is related to arcade games. Its creators claim it is 10 times faster than Google DeepMind's AI. What sets it apart is its ability to solve problems in environments where the actions needed to reach a goal are not immediately obvious.

It uses reinforcement learning, a technique in which the agent earns points for actions that bring it closer to the goal and loses points otherwise. For example, it rewards actions such as ‘climb the ladder’ or ‘jump over that pit’.
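The point-based reward loop described above can be sketched with standard tabular Q-learning, a generic reinforcement learning algorithm (not necessarily the one this system uses). The toy environment, state space and hyperparameters below are all illustrative assumptions: an agent on a five-state strip earns a point only when it reaches the goal state, and gradually learns that moving right is the rewarding action.

```python
import random

# Minimal tabular Q-learning on a toy 1-D "platformer":
# states 0..4, goal at state 4. A reward of +1 arrives only on
# reaching the goal, mirroring the article's point-based rewards.
# All names and constants here are illustrative, not from the paper.

N_STATES = 5
ACTIONS = [-1, +1]                  # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Apply an action; reaching the last state yields reward 1 and ends the episode."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore at random.
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, act)] for act in ACTIONS)
            # Standard Q-learning update toward reward + discounted future value.
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
# After training, the greedy policy in every non-goal state should be "move right".
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
```

Running this, the learned greedy policy chooses the rightward action in every state, because discounted future reward propagates back from the goal.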

Associate Professor Fabio Zambetta from RMIT University said: “We’ve shown that the right kind of algorithms can improve results using a smarter approach rather than purely brute forcing a problem end-to-end on very powerful computers. Our results show how much closer we’re getting to autonomous AI and could be a key line of inquiry if we want to keep making substantial progress in this field.”


However, their findings are not new in the domain, as similar methods already exist. For example, Unity’s ML-Agents toolkit uses reinforcement learning across multiple types of games and tasks, such as balancing objects on flat surfaces, playing tennis, controlling double-jointed arms, learning to walk, and navigating a labyrinth to complete a task. The method is well documented and can be found on GitHub.

Details of the technology and algorithms behind Atari’s performance have not yet been made public. An oral presentation at the 33rd AAAI Conference on Artificial Intelligence in Honolulu will hopefully shed light on these implementation details.

Source (Atari master: New AI smashes Google DeepMind in video game challenge, RMIT Australia, 31.01.2019)

Original paper: Dann, M., Zambetta, F. and Thangarajah, J., 2019, July. Deriving subgoals autonomously to accelerate learning in sparse reward domains. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 881-889).

Google Lens can now recognise over a billion items

Just over a year after its release, Google Lens can now identify more than one billion items – four times the number it recognised at launch, according to Aparna Chennapragada, vice president of Google Lens and augmented reality at Google. Lens is a Google technology that can recognise people, objects, places and other things captured in photos. It can also recognise text in images snapped with a smartphone camera.


Credit: AFP Relaxnews

Google Lens relies on machine learning – a branch of artificial intelligence – which allows the system to learn and improve based on accumulated experience. The more people use Google Lens, the more the service improves. However, that doesn’t mean that Google Lens is immune to teething problems, such as confusing several breeds of dogs or different models of lamps.

Google Lens can be used to identify items or text in photos. It is available in Google Photos, to collect information about one or more items in a picture, via Google Assistant, and is directly integrated into the Camera app on certain handsets, such as the Pixel smartphone range.

Source (Google Lens can now recognise over a billion items, The Star, 23.12.2018)

Machine learning without negative data

A research team from the RIKEN Center for Advanced Intelligence Project (AIP) has successfully developed a new method for machine learning that allows an AI to make classifications without what is known as “negative data,” a finding which could lead to wider application to a variety of classification tasks.


Credit: RIKEN

According to lead author Takashi Ishida from RIKEN AIP, “Previous classification methods could not cope with the situation where negative data were not available, but we have made it possible for computers to learn with only positive data, as long as we have a confidence score for our positive data, constructed from information such as buying intention or the active rate of app users. Using our new method, we can let computers learn a classifier only from positive data equipped with confidence.”

Ishida, together with researcher Gang Niu from his group and team leader Masashi Sugiyama, proposed letting computers learn by adding a confidence score, which mathematically corresponds to the probability that a data point belongs to the positive class. They succeeded in developing a method that lets computers learn a classification boundary for binary classification problems using only positive data and its confidence (positive reliability).

To see how well the system functioned, they applied it to a dataset of photos of fashion items. For example, they chose “T-shirt” as the positive class and one other item, e.g. “sandal,” as the negative class, then attached a confidence score to the “T-shirt” photos. They found that, without ever accessing the negative data (the “sandal” photos), their method was in some cases just as good as one trained on both positive and negative data.
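The idea of learning from positive data plus confidence can be sketched numerically. The toy below is not the authors’ experimental setup: the 1-D Gaussian data, linear model, and learning-rate choices are illustrative assumptions. It minimises the logistic-loss form of the positive-confidence risk using only positive samples and their confidences, then checks accuracy on held-out negatives the model never trained on.

```python
import math
import random

# Sketch of positive-confidence ("Pconf") learning: train a binary
# classifier from positive samples ONLY, each tagged with a confidence
# r(x) = p(y = +1 | x). Data and model below are toy illustrations.

random.seed(0)

# Toy data: positives ~ N(+2, 1), negatives ~ N(-2, 1), equal priors.
# For this generative model the true posterior is sigmoid(4x), which
# serves as the confidence score attached to each positive sample.
pos = [random.gauss(+2.0, 1.0) for _ in range(500)]
neg = [random.gauss(-2.0, 1.0) for _ in range(500)]   # held out, never trained on
conf = [1.0 / (1.0 + math.exp(-4.0 * x)) for x in pos]

# Linear model f(x) = a*x + b, trained by gradient descent on the
# positive-confidence risk:  E_pos[ loss(f(x)) + (1-r)/r * loss(-f(x)) ],
# where loss is the logistic loss. Only positive samples appear below.
a, b, lr = 0.0, 0.0, 0.05
for _ in range(300):
    ga = gb = 0.0
    for x, r in zip(pos, conf):
        f = a * x + b
        s = 1.0 / (1.0 + math.exp(-f))            # sigmoid(f)
        # d/df log(1+e^-f) = s - 1 ;  d/df log(1+e^f) = s
        g = (s - 1.0) + ((1.0 - r) / r) * s
        ga += g * x
        gb += g
    a -= lr * ga / len(pos)
    b -= lr * gb / len(pos)

# Evaluate on both classes, including negatives the model never saw.
acc = (sum(a * x + b > 0 for x in pos) +
       sum(a * x + b < 0 for x in neg)) / (len(pos) + len(neg))
```

Even though no negative sample enters training, the confidence weights push the decision boundary toward the region of low-confidence positives, and the classifier separates the two Gaussians well.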

According to Ishida, “This discovery could expand the range of applications where classification technology can be used. Even in fields where machine learning has been actively used, our classification technology could be used in new situations where only positive data can be gathered due to data regulation or business constraints. In the near future, we hope to put our technology to use in various research fields, such as natural language processing, computer vision, robotics, and bioinformatics.”

Read more here (RIKEN. “Smarter AI: Machine learning without negative data.” ScienceDaily. ScienceDaily, 26 November 2018.)

AI-written screenplay made into a film

With a dark, ominous atmosphere and gibberish script, short film Sunspring was penned by a computer and stars Silicon Valley’s Thomas Middleditch.


Artificial intelligence has recently been trying its hand at various human creative endeavours, from cooking to art and from poetry to board games, but nothing is quite as surreal as a robot writing the script for a science fiction movie – until now. The script and movie are the product of director Oscar Sharp and Ross Goodwin, an AI researcher at New York University. A recurrent neural network, which named itself Benjamin, was fed the scripts of dozens of science fiction movies, including classics such as Highlander: Endgame, Ghostbusters, Interstellar and The Fifth Element.

From there it was asked to create a screenplay, including stage directions, using a set of prompts required by the Sci-Fi London film festival’s 48-hour challenge. The resulting screenplay and pop song were then handed to the cast – including Thomas Middleditch of Silicon Valley, Elisabeth Gray and Humphrey Ker – to interpret and turn into a film. The actors were randomly assigned their parts and set to work. The result is a weirdly entertaining, strangely moving dark sci-fi story of love and despair. The sentences make sense in isolation, even if the dialogue doesn’t quite hang together as a whole; but if you were half-watching while doing something else, you would definitely get the feeling that something had just happened.
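Benjamin itself was a recurrent (LSTM) neural network; a full LSTM is too long to sketch here, so the toy below substitutes a character-level Markov chain to illustrate the same core loop behind the screenplay generation: model the probability of the next character given the most recent characters, then sample from that model repeatedly. The training corpus and model order are illustrative assumptions, not Benjamin's actual training data.

```python
import random
from collections import Counter, defaultdict

# Character-level next-token sampling, the core idea behind text
# generators like Benjamin, shown with a Markov chain instead of an
# LSTM. The tiny corpus below is purely illustrative.

corpus = ("in a future with mass unemployment young people are forced "
          "to sell blood that is the first thing i can do")

ORDER = 3  # condition on the previous 3 characters

# Count which character follows each 3-character context.
counts = defaultdict(Counter)
for i in range(len(corpus) - ORDER):
    context = corpus[i:i + ORDER]
    counts[context][corpus[i + ORDER]] += 1

def generate(seed_text, length, rng):
    """Extend seed_text by sampling next characters from the counts."""
    out = list(seed_text)
    for _ in range(length):
        context = "".join(out[-ORDER:])
        options = counts.get(context)
        if not options:                 # unseen context: stop early
            break
        chars = list(options.keys())
        weights = list(options.values())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

sample = generate("in ", 80, random.Random(0))
```

The output is locally plausible but globally incoherent, which is exactly the quality of Sunspring's dialogue: each continuation is statistically likely given its immediate context, with no long-range plan.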

Source (HAL90210, “This is what happens when an AI-written screenplay is made into a film”, TheGuardian, 10.06.2016)