Animating Pictures

Researchers have developed a method for producing looping videos from a single image. The technique specializes in fluid motion such as that of water, smoke, or clouds. After training a deep neural network on thousands of images, the framework can estimate a motion field for a new still image. The solution posed several challenges, the most difficult being the splatting technique used to animate the frame: warping pixels forward opened holes, particularly in the top part of the images. Using the previously estimated motion, the authors created a symmetric splatting method that merges the flow bidirectionally, so the forward and backward warps fill each other’s holes. A presentation of the published work can be seen in the following video.
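The hole-filling idea behind symmetric splatting can be illustrated with a toy example. The sketch below is a deliberately simplified 1D version (the paper operates on full 2D images and learned feature maps): one frame is warped forward from the start of the loop, another backward from the end, and the two are blended with time-dependent weights so that each warp fills the holes the other leaves behind.

```python
# Toy 1D illustration of symmetric splatting (a hypothetical simplification
# of the paper's 2D method), using plain Python lists as "images".

def splat(pixels, flow, t):
    """Forward-splat a 1D 'image' by t * flow; unfilled slots stay None (holes)."""
    out = [None] * len(pixels)
    for i, v in enumerate(pixels):
        j = i + round(flow[i] * t)
        if 0 <= j < len(out):
            out[j] = v
    return out

def symmetric_blend(first, last, flow, t, n_frames):
    """Blend a forward splat of `first` with a backward splat of `last`.

    Weights favour the nearer endpoint, and each splat fills the other's
    holes, which is the core idea of symmetric splatting.
    """
    fwd = splat(first, flow, t)                          # warped forward by t
    bwd = splat(last, [-f for f in flow], n_frames - t)  # warped back by n - t
    w = t / n_frames
    frame = []
    for a, b in zip(fwd, bwd):
        if a is not None and b is not None:
            frame.append((1 - w) * a + w * b)   # both warps landed: blend
        elif a is not None:
            frame.append(a)                     # backward warp had a hole
        elif b is not None:
            frame.append(b)                     # forward warp had a hole
        else:
            frame.append(0.0)                   # hole in both: inpaint in practice
    return frame

first = [10.0, 20.0, 30.0, 40.0]
last = [11.0, 21.0, 31.0, 41.0]
flow = [1.0, 1.0, 1.0, 1.0]       # constant rightward motion
frame = symmetric_blend(first, last, flow, t=2, n_frames=4)
```

With rightward motion the forward warp leaves holes on the left edge, which the backward warp fills, and vice versa; only pixels covered by neither warp would need inpainting.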

For an in-depth understanding of the paper, please see the following resources:

Source (University of Washington. “Researchers can turn a single photo into a video.” ScienceDaily. ScienceDaily, 15 June 2021.)

Original paper: Holynski, A., Curless, B., Seitz, S.M. and Szeliski, R., 2020. Animating Pictures with Eulerian Motion Fields. arXiv preprint arXiv:2011.15128.


Not everything that is true can be proven. This discovery transformed infinity, changed the course of a world war and led to the modern computer.

Here is a short documentary on Conway’s Game of Life, beautifully explaining how the logic gates of a Turing machine were implemented to run inside GoL.

For more in-depth explanations, read: Rendell, P., 2014. Turing machine universality of the game of life (Doctoral dissertation, University of the West of England).

Turing Machine Diagram for the Game of Life, Credit: Rendell

Local Cosmic Web from Galaxies

Dark matter is an elusive substance that makes up about 80% of the matter in the universe. It also provides the skeleton for what cosmologists call the cosmic web, the large-scale structure of the universe that, through its gravitational influence, dictates the motion of galaxies and other cosmic material. However, the distribution of local dark matter is currently unknown because it cannot be measured directly. Researchers must instead infer its distribution based on its gravitational influence on other objects in the universe, like galaxies.

Credit: Sungwook E. Hong

Previous attempts to map the cosmic web started with a model of the early universe and then simulated the evolution of the model over billions of years. However, this method is computationally intensive and so far has not been able to produce results detailed enough to see the local universe. In the new study, the researchers took a completely different approach, using machine learning to build a model that uses information about the distribution and motion of galaxies to predict the distribution of dark matter.

The researchers built and trained their model using a large set of galaxy simulations, called Illustris-TNG, which includes galaxies, gases, and other visible matter, as well as dark matter. The team specifically selected simulated galaxies comparable to those in the Milky Way and ultimately identified which properties of galaxies are needed to predict the dark matter distribution. The research team then applied their model to real data from the local universe from the Cosmicflows-3 galaxy catalog. The catalog contains comprehensive data about the distribution and movement of more than 17,000 galaxies in the vicinity of the Milky Way — within 200 megaparsecs.
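The overall recipe (fit a model on simulated galaxies, then apply it to observed ones) can be caricatured in a few lines. The sketch below substitutes a one-variable linear regression for the paper’s deep neural network, and every number in it is fabricated for illustration only.

```python
# Toy stand-in for the paper's pipeline: learn a map from a simulated galaxy
# property to dark matter density, then apply it to "observed" galaxies.
# The real model is a deep network trained on Illustris-TNG; all data here
# is made up.

def fit_linear(features, targets):
    """Ordinary least squares for one feature plus an intercept."""
    n = len(features)
    mean_x = sum(features) / n
    mean_y = sum(targets) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(features, targets))
    var = sum((x - mean_x) ** 2 for x in features)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training": galaxy number density vs. dark matter density from a simulation.
galaxy_density = [0.5, 1.0, 1.5, 2.0, 2.5]
dm_density = [1.1, 2.0, 3.1, 3.9, 5.0]   # roughly 2x the input, made-up values

slope, intercept = fit_linear(galaxy_density, dm_density)

# "Inference": predict dark matter density where only galaxies are observed.
predicted = slope * 1.8 + intercept
```

The real model maps far richer inputs (positions and velocities of thousands of galaxies) to a full 3D density field, but the train-on-simulation, apply-to-observation structure is the same.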

The map successfully reproduced known prominent structures in the local universe, including the “local sheet” — a region of space containing the Milky Way, nearby galaxies in the “local group,” and galaxies in the Virgo cluster — and the “local void” — a relatively empty region of space next to the local group. Additionally, it identified several new structures that require further investigation, including smaller filamentary structures that connect galaxies. For example, it has been suggested that the Milky Way and Andromeda galaxies may be slowly moving toward each other, but whether they may collide in many billions of years remains unclear. Studying the dark matter filaments connecting the two galaxies could provide important insights into their future.

Adapted and abridged from Source (Penn State. “Dark matter map reveals hidden bridges between galaxies.” ScienceDaily. ScienceDaily, 25 May 2021.)

Original paper: Sungwook E. Hong, Donghui Jeong, Ho Seong Hwang, Juhan Kim. Revealing the Local Cosmic Web from Galaxies by Deep Learning. The Astrophysical Journal, 2021; 913 (1): 76 DOI: 10.3847/1538-4357/abf040

A robot that senses hidden objects

Researchers at MIT (Massachusetts Institute of Technology) have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.

Using optical vision alone, robots can’t perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don’t pass through walls. But radio waves can. For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.

The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.
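The reader-tag exchange described above can be caricatured in a few lines. The sketch below is a conceptual toy, not any real RFID protocol: the tag “modulates” the reflected carrier by switching between a strong and a weak reflection, and the reader recovers the tag’s ID bits by thresholding the reflected power.

```python
# Toy illustration of RFID backscatter: the tag imprints its ID bits onto
# the reflected carrier by switching its reflection strength, and the reader
# decodes them back. Signal values and thresholds are made up.

def backscatter(carrier, tag_bits):
    """Reflect the carrier, scaling each chunk of samples by the tag's current bit."""
    chunk = len(carrier) // len(tag_bits)
    reflected = []
    for i, sample in enumerate(carrier):
        bit = tag_bits[min(i // chunk, len(tag_bits) - 1)]
        reflected.append(sample * (1.0 if bit else 0.2))  # strong vs. weak reflection
    return reflected

def decode(reflected, n_bits):
    """Recover the bits by thresholding the average reflected power per chunk."""
    chunk = len(reflected) // n_bits
    bits = []
    for b in range(n_bits):
        seg = reflected[b * chunk:(b + 1) * chunk]
        avg = sum(abs(s) for s in seg) / len(seg)
        bits.append(1 if avg > 0.5 else 0)
    return bits

carrier = [1.0] * 8        # constant carrier wave, for simplicity
tag_id = [1, 0, 1, 1]      # the tag's identity, encoded into the reflection
bits = decode(backscatter(carrier, tag_id), n_bits=4)
```

Real tags modulate phase and impedance rather than simply attenuating the signal, and the reflected signal additionally carries location information via its timing and angle of arrival.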

Source: Boroushaki

RF-Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they’re fully blocked from the camera’s view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot’s wrist. The RF reader stands independent of the robot and relays tracking information to the robot’s control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot’s decision making was one of the biggest challenges the researchers faced.

Adapted and abridged from Source (Massachusetts Institute of Technology. “A robot that senses hidden objects: System uses penetrative radio frequency to pinpoint items, even when they’re hidden from view.” ScienceDaily. ScienceDaily, 1 April 2021.)

Original paper: Boroushaki, T., Leng, J., Clester, I., Rodriguez, A. and Adib, F., 2020. Robotic Grasping of Fully-Occluded Objects using RF Perception. arXiv preprint arXiv:2012.15436.

Pollinator Park

Three out of four bites of food we eat depend on pollination. Bumblebees, solitary bees, hoverflies, butterflies, moths, wasps, beetles, and flies are all essential to keeping nature healthy. However, these pollinators are in serious decline. Around four in five crop and wild flowering plant species in the EU depend on animal pollination.

The European Commission has teamed up with world-renowned ‘archiobiotect’ Vincent Callebaut to create the futuristic Pollinator Park: a 30-minute interactive and emotionally engaging virtual reality experience that immerses you in a futuristic world where man and nature co-exist in harmony, hoping to change your perspective and help turn the tide.

The interactive experience is set in 2050, when a cascade of ecological crises has impoverished the world and pollinating insects have all but disappeared. Visitors can walk through different parts of a futuristic farm, which provides a safe haven for pollinators and serves as an eye-opener.

Adapted and abridged from Source (This VR experience imagines a future without vital pollinators, Euronews, 30.04.2021)

Wave gliding of pelicans

Researchers at the University of California San Diego have recently developed a theoretical model that describes how the ocean, the wind and birds in flight interact. UC San Diego mechanical engineering Ph.D. student Ian Stokes and adviser Professor Drew Lucas, of UC San Diego’s Department of Mechanical and Aerospace Engineering and Scripps Institution of Oceanography, found that pelicans can completely offset the energy they expend in flight by exploiting wind updrafts generated by waves through what is known as wave-slope soaring. In short, by practicing this behavior, seabirds take advantage of winds generated by breaking waves to stay aloft.
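The balance underlying wave-slope soaring can be stated in one line: the bird stays aloft for free whenever the wave-induced updraft matches or exceeds its sink rate. The sketch below encodes that assumed energy balance with made-up numbers; it is not the authors’ model, which derives the updraft from the wave field itself.

```python
# Simplified energy-balance sketch of wave-slope soaring (assumed form, not
# the paper's model): a gliding bird descends through the air at its sink
# rate, and a wave-generated updraft carries the air upward. The net vertical
# speed decides whether gliding alone keeps the bird aloft.

def net_climb_rate(updraft, sink_rate):
    """Vertical speed of the glider; positive means it soars with no flapping."""
    return updraft - sink_rate

SINK_RATE = 0.5   # m/s, a plausible but made-up figure for a gliding pelican

soaring = net_climb_rate(updraft=0.7, sink_rate=SINK_RATE)          # free flight
flapping_needed = net_climb_rate(updraft=0.3, sink_rate=SINK_RATE)  # must flap
```

The authors’ contribution is predicting the updraft term from the waves and wind; once that is known, the same balance tells you when flight is energetically free.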

The model could be used to develop better algorithms to control drones that need to fly over water for long periods of time, the researchers said. Potential uses do not stop there. The model can also serve as a basic prediction for the winds generated by passing swell, which is important to physicists who study how the ocean and atmosphere interact in order to improve weather forecasting.

Adapted and abridged from Source (University of California – San Diego. “The wave beneath their wings: Researchers work out intricate dance between waves, wind, and gliding pelicans.” ScienceDaily. ScienceDaily, 21 April 2021.)

Original paper: Stokes, I.A. and Lucas, A.J., 2021. Wave-slope soaring of the brown pelican. Movement Ecology, 9(1), pp.1-13.

‘Where did I park my car?’ Brain stimulation improves mental time travel

You might remember you ate cereal for breakfast but forget the color of the bowl. Or recall watching your partner put the milk away but can’t remember on which shelf. A new Northwestern Medicine study improved memory of complex, realistic events similar to these by applying transcranial magnetic stimulation (TMS) to the brain network responsible for memory.

Experimental design overview

The study authors used TMS with the goal of altering brain activity and memory for realistic events. Immediately following stimulation, subjects performed a memory task while having their brains scanned using functional magnetic resonance imaging (fMRI). Instead of showing study participants pictures or lists of words — typical practices in laboratory tests that analyze memory — participants in this study watched videos of everyday activities, such as someone folding laundry or taking out the garbage.

Following stimulation, study participants more accurately answered questions about the content of the video clips, such as identifying the shirt color an actor was wearing or the presence of a tree in the background. Additionally, the study found that brain stimulation led to higher quality reinstatement of memories in the brain, which happens when the brain replays or relives an original event. Following stimulation, a person’s brain activity while watching a video more closely resembled their brain activity when remembering that same video.
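Reinstatement analyses of this kind typically score how similar a participant’s retrieval-time activity pattern is to their encoding-time pattern. The sketch below illustrates the idea with a Pearson correlation between two fabricated voxel patterns; it is not the study’s actual analysis pipeline.

```python
# Illustrative sketch (not the study's code): quantify memory reinstatement
# as the Pearson correlation between the brain-activity pattern measured
# while watching a video (encoding) and while remembering it (retrieval).
# Voxel values below are fabricated.

from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

encoding = [0.2, 1.1, -0.4, 0.9, 0.3]    # voxel responses while watching
retrieval = [0.1, 1.0, -0.5, 0.8, 0.4]   # similar pattern: strong reinstatement
shuffled = [0.8, -0.5, 0.4, 0.1, 1.0]    # mismatched pattern: weak reinstatement

high = pearson(encoding, retrieval)
low = pearson(encoding, shuffled)
```

On this reading, the finding is that stimulation pushed the retrieval pattern closer to the encoding pattern, raising this kind of similarity score.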

Adapted and abridged from Source (Northwestern University. “‘Where did I park my car?’ Brain stimulation improves mental time travel: Study used videos of realistic activities to measure how memory works day to day.” ScienceDaily. ScienceDaily, 4 February 2021.)

Original paper: Hebscher, M., Kragel, J.E., Kahnt, T. and Voss, J.L., 2021. Enhanced reinstatement of naturalistic event memories due to hippocampal-network-targeted stimulation. Current Biology.

A guide to making high-performance, versatile solar cells

Improving solar cell design is integral to sustainable energy production. Scientists have lately focused on making solar cells more efficient, flexible, and portable to enable their integration into everyday applications. Consequently, novel lightweight and flexible thin-film solar cells have been developed. It is, however, not easy to combine efficiency with flexibility. For a material (usually a semiconductor) to be efficient, it must have a suitable “band gap” — the energy required to excite charge carriers for electrical conduction — and should absorb and convert a large portion of the sunlight into electricity. To date, no such efficient absorber suitable for thin-film solar cells has been developed.

In a new study published in ACS Applied Materials & Interfaces, scientists from Korea addressed this issue and proposed a novel solution in the form of “antiperovskite” oxides, denoted as Ba4Pn2O, where Pn stands for arsenic (As) or antimony (Sb). Using density functional theory calculations, the scientists investigated various physical properties of the antiperovskite oxides and revealed that they exhibit spontaneous electric polarization, making them ferroelectric in nature. Prof. Youngho Kang from Incheon National University, who led the study, explains, “In the minimum energy configuration of the Ba4Pn2O structure, we found that the O ions and the Ba ions are displaced from their original positions in opposite directions. These displacements gave rise to a non-zero electric polarization, a classic signature of ferroelectricity.”
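The quoted argument, that opposite displacements of oppositely charged ions produce a net polarization, can be checked with back-of-the-envelope arithmetic. The sketch below uses made-up effective charges and displacements rather than the paper’s DFT values.

```python
# Back-of-the-envelope sketch of the ferroelectricity argument (not the
# paper's DFT calculation): if ions with effective charges q_i are displaced
# by d_i from their centrosymmetric positions, the polarization along that
# axis is P = (1/V) * sum(q_i * d_i). Opposite displacements of oppositely
# charged ions add up rather than cancel. All numbers are made up.

def polarization(charges, displacements, volume):
    """Net dipole moment per unit volume along one axis."""
    return sum(q * d for q, d in zip(charges, displacements)) / volume

# An O ion (charge -2) shifted one way, two Ba ions (charge +2) the other:
charges = [-2.0, +2.0, +2.0]
displacements = [-0.10, +0.05, +0.05]   # arbitrary units
P = polarization(charges, displacements, volume=1.0)
```

Here every term contributes with the same sign, so the displacements described in the quote yield a non-zero P, the signature of ferroelectricity.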

Since the spontaneous polarization assists in the separation of electron-hole pairs, this implied that antiperovskite oxides could efficiently extract charge carriers. In addition, the calculations showed that their band gaps are ideal for efficient sunlight absorption, allowing even a very thin layer of Ba4Pn2O to yield substantial photocurrent.

Adapted and abridged from Source (Incheon National University. “A polarization-driven guide to making high-performance, versatile solar cells.” ScienceDaily. ScienceDaily, 4 January 2021.)

Original paper: Kang, Y. and Han, S., 2020. Antiperovskite oxides as promising candidates for high-performance ferroelectric photovoltaics: First-principles investigation on Ba4As2O and Ba4Sb2O. ACS Applied Materials & Interfaces, 12(39), pp.43798-43804.

Urban-Air Port

Urban-Air Port, a British-based start-up, has partnered with car giant Hyundai Motor to develop the infrastructure required for when flying cars take to the skies to ferry around people and goods. An airport for flying cars will thrust the English city of Coventry into the future later this year, with a project aimed at demonstrating how air taxis will work in urban centres.

From November, visitors to Coventry will be able to see what a flying car airport looks like and see a passenger-carrying drone and an operational electric vertical take-off and landing (eVTOL) vehicle on the landing pad. Urban-Air Port was selected by a government programme aimed at developing zero-emission flying and new air vehicles, winning a 1.2-million-pound ($1.65-million) grant to help fund the temporary installation of the airport in Coventry city centre.

Source (Flying cars airport of the future to land in England, Reuters, 29.01.2021)

Dancing with robots

Meet Atlas, Spot, and Handle on the dance floor.

Adam Savage and the Tested team met with an engineer from Boston Dynamics to better understand the choreography software that makes these robots dance.