Just over a year after its release, Google Lens can now identify more than one billion items – four times the number it recognised at launch, according to Aparna Chennapragada, vice president of Google Lens and augmented reality at Google. Lens is a Google technology that can recognise people, objects, places and other things captured in photos. It can also recognise text in images snapped with a smartphone camera.
Google Lens relies on “machine learning” – a branch of artificial intelligence, not a rival to it – which allows the system to learn and improve based on accumulated experience: the more people use Google Lens, the more the service improves. That doesn’t mean Google Lens is immune to teething problems, however, such as confusing similar breeds of dog or different models of lamp.
Google Lens can be used to identify items or text in photos. It is available in Google Photos, where it collects information about one or more items in a picture, through Google Assistant, and directly within the Camera app on certain handsets, such as the Pixel smartphone range.
As an embryo develops, tissues bend into complex three-dimensional shapes that give rise to organs. Epithelial cells are the building blocks of this process: they form, for example, the outer layer of skin, and they line the blood vessels and organs of all animals. These cells pack together tightly. To accommodate the curving that occurs during embryonic development, epithelial cells were assumed to adopt either columnar or bottle-like shapes.
However, a group of scientists dug deeper into this phenomenon and discovered a new geometric shape in the process. They found that, during tissue bending, epithelial cells adopt a previously undescribed shape that enables them to minimize energy use and maximize packing stability. The team’s results will be published in Nature Communications in a paper titled “Scutoids are a geometrical solution to three-dimensional packing of epithelia.” The study is the result of a United States-European Union collaboration between the teams of Luis M. Escudero (Seville University, Spain) and Javier Buceta (Lehigh University, USA). Pedro Gomez-Galvez and Pablo Vicente-Munuera are the first authors of the work, which also includes scientists from the Andalusian Center of Developmental Biology and the Severo Ochoa Center of Molecular Biology, among others.
A research team from the RIKEN Center for Advanced Intelligence Project (AIP) has developed a new machine-learning method that allows an AI to make classifications without what is known as “negative data” – a finding that could widen the range of tasks to which classification can be applied.
According to lead author Takashi Ishida from RIKEN AIP, “Previous classification methods could not cope with the situation where negative data were not available, but we have made it possible for computers to learn with only positive data, as long as we have a confidence score for our positive data, constructed from information such as buying intention or the active rate of app users. Using our new method, we can let computers learn a classifier only from positive data equipped with confidence.”
Ishida proposed, together with researcher Gang Niu from his group and team leader Masashi Sugiyama, to let computers learn by adding a confidence score, which mathematically corresponds to the probability that a data point belongs to the positive class. For binary classification problems, which divide data into positive and negative classes, they developed a method that lets a computer learn a classification boundary from positive data alone, together with information on its confidence (positive reliability).
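The confidence-weighted idea above can be sketched in a few lines. This is not RIKEN's released code, only a minimal illustration of positive-confidence learning under one common formulation: each positive example x with confidence r contributes a loss of ℓ(g(x)) + ((1−r)/r)·ℓ(−g(x)), where ℓ is the logistic loss, so the classifier is penalised for both sides of the boundary even though it never sees a negative sample. On synthetic Gaussian data (where the true confidence is known), a linear classifier trained this way recovers a sensible boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: positives ~ N(mu_p, I), negatives ~ N(mu_n, I), equal priors.
mu_p, mu_n = np.array([2.0, 2.0]), np.array([-2.0, -2.0])

def confidence(x):
    """True posterior p(y=+1 | x) under the known Gaussian model."""
    dp = np.exp(-0.5 * np.sum((x - mu_p) ** 2, axis=1))
    dn = np.exp(-0.5 * np.sum((x - mu_n) ** 2, axis=1))
    return dp / (dp + dn)

# Training data: ONLY positive samples, each paired with a confidence score.
X = rng.normal(mu_p, 1.0, size=(500, 2))
r = np.clip(confidence(X), 1e-6, 1 - 1e-6)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

# Linear model g(x) = w.x + b, trained by gradient descent on the
# confidence-weighted (Pconf) risk with logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    g = X @ w + b
    # d/dg of [softplus(-g) + ((1-r)/r) * softplus(g)]
    grad_g = -sigmoid(-g) + ((1 - r) / r) * sigmoid(g)
    w -= 0.1 * (grad_g @ X) / len(X)
    b -= 0.1 * grad_g.mean()

# Evaluate on fresh samples from BOTH classes (never used in training).
Xp = rng.normal(mu_p, 1.0, size=(1000, 2))
Xn = rng.normal(mu_n, 1.0, size=(1000, 2))
acc = 0.5 * ((Xp @ w + b > 0).mean() + (Xn @ w + b < 0).mean())
print(f"test accuracy: {acc:.3f}")
```

The key point mirrors the article: the negative class enters the training objective only through the weight (1−r)/r derived from each positive example's confidence, never through actual negative samples.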
To see how well the method worked, they tested it on a labelled dataset of photos of fashion items. For example, they chose “T-shirt” as the positive class and one other item, e.g. “sandal,” as the negative class, then attached a confidence score to the “T-shirt” photos. They found that, without accessing the negative data (e.g. the “sandal” photos), their method was in some cases just as good as one trained on both positive and negative data.
According to Ishida, “This discovery could expand the range of applications where classification technology can be used. Even in fields where machine learning has been actively used, our classification technology could be used in new situations where only positive data can be gathered due to data regulation or business constraints. In the near future, we hope to put our technology to use in various research fields, such as natural language processing, computer vision, robotics, and bioinformatics.”
Japanese computer scientists have succeeded in developing a special purpose computer that can project high-quality three-dimensional (3D) holography as a video. The research team led by Tomoyoshi Ito, who is a professor at the Institute for Global Prominent Research, Chiba University, has been working to increase the speed of the holographic projections by developing new hardware.
Ito, who is an astronomer and a computer scientist, began working on specially designed computers for holography, called HORN, in 1992. The HORN-8, which adopts a calculation method called the “amplitude type” for adjusting the intensity of light, was recognized as the world’s fastest computer for holography in a publication in the international science journal Nature Electronics on April 17, 2018.
The newly developed “phase type” HORN-8 implements a calculation method that adjusts the phase of light, and with it the researchers succeeded in projecting holographic data as high-quality 3D video. This research was published in Optics Express on September 28, 2018.
In the latest phase-type HORN-8, eight chips are mounted on an FPGA (Field Programmable Gate Array) board. The calculation is arranged so that the chips never need to communicate with each other, which removes the inter-chip communication bottleneck that would otherwise limit processing speed. As a result, HORN-8’s computing speed scales in proportion to the number of chips, allowing it to project video holography more clearly.
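The kind of calculation HORN-8 accelerates in hardware can be illustrated in software. This is a hedged sketch, not the HORN-8 pipeline: in a common point-cloud formulation of computer-generated holography, each hologram pixel sums the spherical-wave contribution a·exp(i·k·r) from every object point, and a phase-type device keeps only the argument of that sum (an amplitude-type device would keep the intensity instead). The grid size, pixel pitch, wavelength, and object points below are illustrative values:

```python
import numpy as np

WAVELENGTH = 532e-9          # green laser, metres (illustrative value)
PIXEL = 8e-6                 # hologram pixel pitch, metres
N = 128                      # hologram resolution (N x N)
k = 2 * np.pi / WAVELENGTH   # wavenumber

# A tiny "object": a few luminous points (x, y, z, amplitude), z = depth.
points = [
    (0.0,     0.0,    0.050, 1.0),
    (2e-4,    1e-4,   0.060, 0.8),
    (-1.5e-4, -2e-4,  0.055, 0.6),
]

# Hologram-plane pixel coordinates, centred on the optical axis.
coords = (np.arange(N) - N / 2) * PIXEL
X, Y = np.meshgrid(coords, coords)

# Superpose the spherical wave from each object point at every pixel.
field = np.zeros((N, N), dtype=complex)
for (px, py, pz, amp) in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += amp * np.exp(1j * k * r)

# Phase-type hologram: keep only the phase of the total field.
phase = np.angle(field)      # values in [-pi, pi], one per pixel
print(phase.shape)
```

The cost is proportional to (number of object points) × (number of pixels), with every pixel computable independently. That independence is exactly what lets hardware like HORN-8 split the work across chips with no inter-chip communication, and it is also why special-purpose hardware is needed to reach video frame rates.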