How should autonomous vehicles be programmed?

A massive new survey developed by MIT researchers reveals some distinct global preferences concerning the ethics of autonomous vehicles, as well as some regional variations in those preferences. The survey is unique in its global reach and scale: over 2 million online participants from over 200 countries weighed in on versions of a classic ethical conundrum, the “Trolley Problem.” The problem involves scenarios in which an accident involving a vehicle is imminent, and the vehicle must opt for one of two potentially fatal options. In the case of driverless cars, that might mean swerving toward a couple of people rather than a large group of bystanders.


Credit: MIT

The most emphatic global preferences in the survey are for sparing the lives of humans over the lives of other animals; sparing the lives of many people rather than a few; and sparing the lives of the young rather than the old.

“The main preferences were to some degree universally agreed upon,” notes Edmond Awad, the paper’s lead author. “But the degree to which they agree with this or not varies among different groups or countries.” For instance, the researchers found a less pronounced tendency to favor younger people over the elderly in what they defined as an “eastern” cluster of countries, including many in Asia.

Read more here (Massachusetts Institute of Technology. “How should autonomous vehicles be programmed? Massive global survey reveals ethics preferences and regional differences.” ScienceDaily. ScienceDaily, 24 October 2018.)

Original paper: Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F. and Rahwan, I., 2018. The Moral Machine experiment. Nature, 563(7729), pp. 59-64.

Roadmap for quantum internet development

A quantum internet may very well be the first quantum information technology to become reality. Researchers at QuTech in Delft, The Netherlands, have published a comprehensive guide towards this goal in Science. It describes six phases, starting with simple networks of qubits that could already enable secure quantum communication, a phase that could become reality in the near future, and ending with networks of fully quantum-connected quantum computers. In each phase, new applications become available, such as extremely accurate clock synchronization or combining different telescopes on Earth into one virtual ‘supertelescope’. This work creates a common language that unites the highly interdisciplinary field of quantum networking in pursuit of the dream of a world-wide quantum internet.


Credit: Shutterstock

A quantum internet will revolutionize communication technology by exploiting phenomena from quantum physics, such as entanglement. Researchers are working on technology that enables the transmission of quantum bits between any two points on Earth. Such quantum bits can be ‘0’ and ‘1’ at the same time, and can be ‘entangled’: their fates are merged in such a way that an operation on one of the qubits instantly affects the state of the other.

This brings two features that are provably out of reach for the Internet we know today. The first is that entanglement allows improved coordination between distant sites, which makes it extremely suitable for tasks such as clock synchronization or the linking of distant telescopes to obtain better images. The second is that entanglement is inherently secure: if two quantum bits are maximally entangled, then nothing else in the universe can have any share in that entanglement. This feature makes entanglement uniquely suitable for applications that require security and privacy.
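As a rough intuition for how entanglement behaves, the toy state-vector simulation below (Python with NumPy) constructs a maximally entangled Bell pair and samples joint measurement outcomes. It is a classical illustration of the underlying mathematics only, not a model of real quantum hardware or of QuTech’s network designs.

```python
import numpy as np

# Toy state-vector illustration of an entangled Bell pair.

# Single-qubit basis states |0> and |1>
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# Bell state (|00> + |11>) / sqrt(2): each qubit is '0' and '1' at the
# same time, yet the two measurement outcomes are perfectly correlated.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Born rule: probability of each joint outcome 00, 01, 10, 11
probs = np.abs(bell) ** 2

rng = np.random.default_rng()
for _ in range(5):
    # Only '00' and '11' ever occur: reading one qubit fixes the other.
    print(rng.choice(["00", "01", "10", "11"], p=probs))
```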

Read more here (Delft University of Technology. “Roadmap for quantum internet development.” ScienceDaily. ScienceDaily, 18 October 2018.)

Robat

Bats use echolocation to map novel environments while simultaneously navigating through them by emitting sound and extracting information from the echoes reflected from objects in their surroundings. Many theoretical frameworks have been proposed to explain how bats routinely solve one of the most challenging problems in robotics, but few attempts have been made to build an actual robot that mimics their abilities. Unlike most previous efforts to apply sonar in robotics, Eliakim and colleagues developed a robot that uses a biological bat-like approach, emitting sound and analyzing the returning echoes to generate a map of space.


Credit: Itamar Eliakim

Robat has an ultrasonic speaker that mimics the mouth, producing frequency-modulated chirps at a rate typically used by bats, as well as two ultrasonic microphones that mimic ears. The robot moved autonomously through a novel outdoor environment and mapped it in real time using only sound. Robat delineates the borders of objects it encounters and classifies them using an artificial neural network, creating a rich, accurate map of its environment while avoiding obstacles. For example, when reaching a dead end, the robot used its classification abilities to determine whether it was blocked by a wall or by a plant through which it could pass.
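As a rough illustration of the core sonar computation, the Python sketch below generates a frequency-modulated chirp, simulates a noisy recording containing a delayed echo, and recovers the object’s distance by matched filtering (cross-correlating the emitted chirp against the recording). Every parameter here (sample rate, chirp band, echo timing) is an illustrative placeholder, not Robat’s actual hardware setting or processing pipeline.

```python
import numpy as np
from scipy.signal import chirp, correlate

FS = 250_000          # sample rate, Hz
SPEED_OF_SOUND = 343  # m/s in air

# Emit a 10 ms downward frequency-modulated chirp, 100 kHz -> 20 kHz
t = np.arange(0, 0.01, 1 / FS)
pulse = chirp(t, f0=100_000, f1=20_000, t1=t[-1])

# Simulate a 50 ms recording: a faint echo arrives 11.7 ms after emission
recording = np.zeros(int(0.05 * FS))
delay_samples = int(0.0117 * FS)
recording[delay_samples:delay_samples + pulse.size] += 0.1 * pulse
recording += 0.01 * np.random.default_rng(0).standard_normal(recording.size)

# Matched filter: the correlation peak marks the echo's arrival time
corr = correlate(recording, pulse, mode="valid")
delay_s = np.argmax(np.abs(corr)) / FS

# Round trip: sound travels out to the object and back, so halve the path
print(f"object at ~{SPEED_OF_SOUND * delay_s / 2:.2f} m")  # ~2.01 m
```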

“To our best knowledge, our Robat is the first fully autonomous bat-like biologically plausible robot that moves through a novel environment while mapping it solely based on echo information — delineating the borders of objects and the free paths between them and recognizing their type,” Eliakim said. “We show the great potential of using sound for future robotic applications.”

Source (PLOS. “Robot-bat, ‘Robat,’ uses sound to navigate and map a novel environment: Robat uses a bat-like approach, emitting sound and analyzing the returning echoes.” ScienceDaily. ScienceDaily, 6 September 2018.)

Original paper: Eliakim, I., Cohen, Z., Kosa, G. and Yovel, Y., 2018. A fully autonomous terrestrial bat-like acoustic robot. PLoS Computational Biology, 14(9), p. e1006406.

Who Let The Dogs Out? Modeling Dog Behavior From Visual Data

Probably the coolest research paper name ever! The idea here was to try to model the thoughts and actions of a dog. The authors attach a number of sensors to the dog’s limbs to collect data on its movement; they also attach a camera to the dog’s head to capture the same first-person view of the world that the dog sees. CNN feature extractors pull image features from the video frames, which are then passed, along with the sensor data, to a set of LSTMs that learn to predict the dog’s actions. The novel and creative application, along with the unique way the task was framed and carried out, makes this paper an awesome read! Hopefully it can inspire future research creativity in the way we collect data and apply deep learning techniques.
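To make that pipeline concrete, here is a minimal PyTorch sketch of the setup as described above: CNN features per frame, concatenated with the sensor readings and fed through an LSTM to predict per-joint action classes. The tiny stand-in CNN, the layer sizes, the sensor dimensionality, and the discretized action space are all placeholder assumptions, not the paper’s actual configuration.

```python
import torch
import torch.nn as nn

# Sketch: a CNN turns each video frame into a feature vector, which is
# concatenated with that timestep's limb-sensor readings and fed to an
# LSTM that predicts the dog's next actions. All sizes are placeholders.

class DogActionPredictor(nn.Module):
    def __init__(self, feat_dim=256, sensor_dim=12, hidden_dim=512,
                 num_joints=4, num_classes=8):
        super().__init__()
        # Tiny stand-in feature extractor (a real system would use a much
        # deeper pretrained backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim + sensor_dim, hidden_dim,
                            batch_first=True)
        # One discretized movement class per joint at each timestep.
        self.head = nn.Linear(hidden_dim, num_joints * num_classes)

    def forward(self, frames, sensors):
        # frames: (batch, time, 3, H, W); sensors: (batch, time, sensor_dim)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(torch.cat([feats, sensors], dim=-1))
        return self.head(out).view(b, t, -1)  # per-frame action logits

model = DogActionPredictor()
logits = model(torch.randn(2, 6, 3, 64, 64), torch.randn(2, 6, 12))
print(logits.shape)  # torch.Size([2, 6, 32])
```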


Read paper here (Ehsani, K., Bagherinezhad, H., Redmon, J., Mottaghi, R. and Farhadi, A., 2018. Who Let the Dogs Out? Modeling Dog Behavior from Visual Data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4051-4060).)

Taken from: The 10 coolest papers from CVPR 2018 (George Seif, Towards Data Science, 28 June 2018)