Ubtech Walker

Ubtech is showing off the Walker, a bipedal robot that can walk a security patrol through your home and respond to your voice commands. The Walker won’t be ready until 2019, and by then the company hopes to add working arms as well.

For now, the bot detects motion and can alert you if something’s amiss on its patrol. It can record security incidents with its built-in camera, and you can interact with Walker via a touchscreen or with your voice.

Walker also allows you to make video calls, and supposedly integrates with your email and calendar as well. When you don’t need Walker keeping a straight face on security detail, the bot can also dance and play games.

Read more here (Andrew Gebhart, Ubtech’s bipedal robot walks the walk at CES, CNET, 09.01.2018)

Mathematicians solve age-old spaghetti mystery

If you happen to have a box of spaghetti in your pantry, try this experiment: Pull out a single spaghetti stick and hold it at both ends. Now bend it until it breaks. How many fragments did you make? If the answer is three or more, pull out another stick and try again. Can you break the noodle in two? If not, you’re in very good company.

The spaghetti challenge has flummoxed even the likes of famed physicist Richard Feynman ’39, who once spent a good portion of an evening breaking pasta and looking for a theoretical explanation for why the sticks refused to snap in two.

Feynman’s kitchen experiment remained unresolved until 2005, when physicists from France pieced together a theory to describe the forces at work when spaghetti — and any long, thin rod — is bent. They found that when a stick is bent evenly from both ends, it will break near the center, where it is most curved. This initial break triggers a “snap-back” effect and a bending wave, or vibration, that further fractures the stick. Their theory, which won the 2006 Ig Nobel Prize, seemed to solve Feynman’s puzzle. But a question remained: Could spaghetti ever be coerced to break in two?
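If you want a feel for why the first break happens at the middle, a quick back-of-the-envelope model is enough: in the simplest small-deflection picture of a rod bowed by its ends, the curvature, and with it the bending stress, peaks at the midpoint. The sketch below is just that textbook picture, not the French team’s full analysis, and every dimension and material value in it is an assumed, illustrative number.

```python
import numpy as np

# Toy model: a slender rod bent by pushing its ends together (first Euler
# buckling mode, small deflections). Not the full elastica analysis from the
# 2005 paper; just enough to show where curvature, and hence bending stress,
# peaks. All numbers below are illustrative assumptions.

L = 0.25          # rod length in metres (typical dry spaghetti, assumed)
r = 0.9e-3        # rod radius in metres (assumed)
E = 3.8e9         # Young's modulus in Pa (rough value for dry pasta, assumed)
amplitude = 0.01  # midpoint deflection in metres (assumed)

s = np.linspace(0.0, L, 501)                                  # position along the rod
y = amplitude * np.sin(np.pi * s / L)                         # first buckling-mode shape
kappa = amplitude * (np.pi / L) ** 2 * np.sin(np.pi * s / L)  # curvature |y''|
sigma = E * kappa * r                                         # outer-fibre bending stress

i_max = np.argmax(sigma)
print(f"peak bending stress {sigma[i_max]/1e6:.1f} MPa "
      f"at s = {s[i_max]:.3f} m (midpoint = {L/2:.3f} m)")
```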


Credit: MIT

The answer, according to a new MIT study, is yes — with a twist. In a paper published this week in the Proceedings of the National Academy of Sciences, researchers report that they have found a way to break spaghetti in two, by both bending and twisting the dry noodles. They carried out experiments with hundreds of spaghetti sticks, bending and twisting them with an apparatus they built specifically for the task. The team found that if a stick is twisted past a certain critical degree, then slowly bent in half, it will, against all odds, break in two.

The researchers say the results may have applications beyond culinary curiosities, such as enhancing the understanding of crack formation and how to control fractures in other rod-like materials such as multifiber structures, engineered nanotubes, or even microtubules in cells.
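Why does the twist matter at all? A twisted rod carries torsional shear stress and stores extra elastic energy on top of the bending. The toy calculation below only combines bending and torsional stress with a generic textbook equivalent-stress criterion to show how much an assumed pre-twist shifts the curvature at which an assumed stress threshold is reached; it is emphatically not the fracture-cascade model from the MIT paper, and every number in it is an illustrative assumption.

```python
import numpy as np

# Back-of-the-envelope combined loading for a twisted, bent cylindrical rod.
# This uses a generic textbook equivalent-stress criterion, NOT the
# fracture-cascade model from Heisser et al. (2018). All material values and
# the fracture threshold are assumptions chosen only for illustration.

E = 3.8e9              # Young's modulus, Pa (assumed for dry spaghetti)
G = 1.5e9              # shear modulus, Pa (assumed)
r = 0.9e-3             # rod radius, m (assumed)
L = 0.25               # rod length, m (assumed)
sigma_fracture = 20e6  # illustrative fracture threshold, Pa (assumed)

def equivalent_stress(kappa, twist_turns):
    """Von-Mises-style combination of bending and torsional stress."""
    sigma_bend = E * kappa * r                   # outer-fibre bending stress
    theta_per_len = 2 * np.pi * twist_turns / L  # twist per unit length, rad/m
    tau_twist = G * r * theta_per_len            # outer-fibre torsional shear stress
    return np.sqrt(sigma_bend ** 2 + 3.0 * tau_twist ** 2)

kappas = np.linspace(0.0, 20.0, 2001)  # candidate bending curvatures, 1/m
for turns in (0.0, 0.1, 0.25):
    # smallest curvature at which the (assumed) threshold is reached
    hit = kappas[equivalent_stress(kappas, turns) >= sigma_fracture]
    kc = hit[0] if hit.size else float("nan")
    print(f"{turns:.2f} pre-twist turns -> threshold curvature ~ {kc:.2f} 1/m")
```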

Read more about it here (Massachusetts Institute of Technology. “Mathematicians solve age-old spaghetti mystery.” ScienceDaily. ScienceDaily, 13 August 2018.)

Original paper: Heisser, R.H., Patil, V.P., Stoop, N., Villermaux, E. and Dunkel, J., 2018. Controlling fracture cascades through twisting and quenching. Proceedings of the National Academy of Sciences, 115(35), pp. 8665–8670.

Popular encryption software: Researchers help close security hole

Cybersecurity researchers at the Georgia Institute of Technology have helped close a security vulnerability that could have allowed hackers to steal encryption keys from a popular security package by briefly listening in on unintended “side channel” signals from smartphones. The attack, which was reported to software developers before it was publicized, took advantage of programming that was, ironically, designed to provide better security. The attack used intercepted electromagnetic signals from the phones that could have been analyzed using a small portable device costing less than a thousand dollars. Unlike earlier intercept attempts that required analyzing many logins, the “One & Done” attack was carried out by eavesdropping on just one decryption cycle.


Credit: Georgia Tech

Side channel attacks extract sensitive information from signals created by electronic activity within computing devices during normal operation. These signals include electromagnetic emanations created by current flows within the devices’ computational and power-delivery circuitry, as well as variations in power consumption, sound, temperature, and chassis potential. These emanations are very different from the communications signals the devices are designed to produce.

In their demonstration, Georgia Tech researchers Milos Prvulovic and Alenka Zajic listened in on two different Android phones using probes located near, but not touching, the devices. In a real attack, signals could be received from phones or other mobile devices by antennas located beneath tables or hidden in nearby furniture.

The “One & Done” attack analyzed signals in a relatively narrow (40 MHz wide) band around the phones’ processor clock frequencies, which are close to 1 GHz (1,000 MHz). The researchers took advantage of a uniformity in programming that had been designed to overcome earlier vulnerabilities involving variations in how the programs operate.

“Any variation is essentially leaking information about what the program is doing, but the constancy allowed us to pinpoint where we needed to look,” said Prvulovic. “Once we got the attack to work, we were able to suggest a fix for it fairly quickly. Programmers need to understand that portions of the code that are working on secret bits need to be written in a very particular way to avoid having them leak.”
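To give a flavour of what “written in a very particular way” means in practice, here is a generic contrast between a secret-dependent branch and a constant-time comparison. This is not the fix applied to the affected library; it only shows the coding style the quote is pointing at.

```python
import hmac

# Generic illustration of secret-dependent vs. constant-time code paths.

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    """Early exit: how long this runs depends on where the first mismatch is."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:  # secret-dependent branch: a potential side-channel leak
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    """Touches every byte and avoids secret-dependent branching."""
    return hmac.compare_digest(secret, guess)

print(leaky_compare(b"s3cret-key", b"s3cret-key"))         # True
print(constant_time_compare(b"s3cret-key", b"guess!-key"))  # False
```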

Read more here (Georgia Institute of Technology. “Popular encryption software: Researchers help close security hole.” ScienceDaily. ScienceDaily, 9 August 2018.)

Key weakness in modern computer vision systems

Computer vision algorithms have come a long way in the past decade. They’ve been shown to be as good as or better than people at tasks like categorizing dog or cat breeds, and they have the remarkable ability to identify specific faces out of a sea of millions. But research by Brown University scientists shows that computers fail miserably at a class of tasks that even young children have no problem with: determining whether two objects in an image are the same or different. In a paper presented last week at the annual meeting of the Cognitive Science Society, the Brown team sheds light on why computers are so bad at these types of tasks and suggests avenues toward smarter computer vision systems.


Credit: Brown University

For the study, Serre and his colleagues used state-of-the-art computer vision algorithms to analyze simple black-and-white images containing two or more randomly generated shapes. In some cases the objects were identical; sometimes they were the same but with one object rotated in relation to the other; sometimes the objects were completely different. The computer was asked to identify the same-or-different relationship.

The study showed that, even after hundreds of thousands of training examples, the algorithms were no better than chance at recognizing the appropriate relationship. The question, then, was why these systems are so bad at this task.
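To picture the task, here is a rough sketch of how same/different stimuli of this general kind can be generated. It is not the stimulus set used in the Brown study; the image size, shape size, and placement rules are all assumptions.

```python
import numpy as np

# Rough sketch of a same/different stimulus generator, loosely in the spirit
# of the black-and-white images described above. NOT the stimuli used in the
# Brown study; sizes, shapes, and placement rules are assumptions.

rng = np.random.default_rng(42)
IMG = 64    # image side length in pixels (assumed)
PATCH = 9   # side length of each random binary shape (assumed)

def random_shape():
    """A small random binary blob used as one 'object'."""
    return (rng.random((PATCH, PATCH)) > 0.5).astype(np.uint8)

def place(canvas, shape, top, left):
    canvas[top:top + PATCH, left:left + PATCH] = shape

def make_example():
    """Return (image, label): label 1 = same shapes, 0 = different shapes."""
    canvas = np.zeros((IMG, IMG), dtype=np.uint8)
    label = int(rng.random() < 0.5)
    a = random_shape()
    b = a.copy() if label else random_shape()
    # Two non-overlapping positions: one shape in each half of the image.
    place(canvas, a, rng.integers(0, IMG - PATCH), rng.integers(0, IMG // 2 - PATCH))
    place(canvas, b, rng.integers(0, IMG - PATCH), rng.integers(IMG // 2, IMG - PATCH))
    return canvas, label

images, labels = zip(*(make_example() for _ in range(8)))
print(np.array(images).shape, labels)  # (8, 64, 64) and a tuple of 0/1 labels
```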

Serre and his colleagues had a suspicion that it had something to do with the inability of these computer vision algorithms to individuate objects. When computers look at an image, they can’t actually tell where one object in the image stops and the background, or another object, begins. They just see a collection of pixels that have similar patterns to collections of pixels they’ve learned to associate with certain labels. That works fine for identification or categorization problems, but falls apart when trying to compare two objects.

To show that this was indeed why the algorithms were breaking down, Serre and his team performed experiments that relieved the computer from having to individuate objects on its own. Instead of showing the computer two objects in the same image, the researchers showed the computer the objects one at a time in separate images. The experiments showed that the algorithms had no problem learning the same-or-different relationship as long as they didn’t have to view the two objects in the same image.

The source of the problem in individuating objects, Serre says, is the architecture of the machine learning systems that power the algorithms. The algorithms use convolutional neural networks — layers of connected processing units that loosely mimic networks of neurons in the brain. A key difference from the brain is that the artificial networks are exclusively “feed-forward” — meaning information has a one-way flow through the layers of the network. That’s not how the visual system in humans works, according to Serre.
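For concreteness, here is a minimal example of the purely feed-forward style being described: information flows once through convolutional layers to a same/different decision, with no feedback or recurrent connections. It is a generic sketch, not one of the state-of-the-art networks evaluated in the study, and the layer sizes are arbitrary assumptions.

```python
import torch
from torch import nn

# Minimal, generic feed-forward CNN for a binary same/different decision.
# A sketch of the architectural style discussed above, not one of the
# networks evaluated in the Brown study; layer sizes are arbitrary.

class FeedForwardSameDifferent(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(  # one-way flow: conv -> pool -> conv -> pool
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),           # logits for "different" / "same"
        )

    def forward(self, x):
        # No feedback or recurrence: each layer's output feeds only the next layer.
        return self.classifier(self.features(x))

model = FeedForwardSameDifferent()
dummy = torch.zeros(8, 1, 64, 64)  # batch of 64x64 single-channel images
print(model(dummy).shape)          # torch.Size([8, 2])
```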

Read more here (Brown University. “Key weakness in modern computer vision systems identified.” ScienceDaily. ScienceDaily, 30 July 2018.)