Weakly supervised learning methods have brought improvements to the semantic segmentation problem. By simplifying the labeling work, more attention can be devoted to the network architecture. In the paper entitled “Weaklier Supervised Semantic Segmentation With Only One Image Level Annotation per Category”, the authors propose a three-stage semantic segmentation framework that handles image- and pixel-level understanding at a coarse level and goes deeper toward object feature learning at a fine-grained level.
The novelty consists of using only one sample with an image-level annotation per category, a labeling form closer to the prior conditions humans require to learn new objects. For image classification, response activation clustering (RAC) is proposed to achieve image-level labeling, while multi heat map slices fusion (MSF) and saliency-edge-color-texture (SECT) based modification are used to generate pixel-level annotations, combining high-level semantic features with low-level attributes based on imaging priors. For learning the common features of objects, a dual-branch iterative structure is introduced. Based on conservative and radical strategies, information integration is realized iteratively, and the completeness and accuracy of the object regions are gradually improved.
In the first stage, image-level semantic information is extracted in the form of a response vector, and the relationship between each pair of feature dimensions is analyzed to obtain accurate image-level object category annotations. Then, heat maps based on high-level semantics and low-level imaging attributes are combined to generate pixel-level pseudo-supervised annotations. In these first two phases, a multi-attention mechanism is introduced to achieve a better understanding of objects that are not salient or are small in scale, as well as to mine detailed expression in images. Using the obtained annotations, a dual-branch network model is designed to learn the common features of objects across different instances, so that more complete and accurate object regions can be obtained iteratively. With these methods, the semantic segmentation task is implemented through a learning process that exploits prior knowledge as much as possible.
Li, Xi, Huimin Ma, and Xiong Luo. “Weaklier Supervised Semantic Segmentation With Only One Image Level Annotation per Category.” IEEE Transactions on Image Processing 29 (2019): 128-141.
Olives are an important agricultural product, so the industry is interested in detecting their external defects. The researchers Nashat Hussain Hassan and Ahmed Nashat from Fayoum University, Egypt, have developed an image processing method that classifies olive fruits as healthy or defected. Furthermore, a series of techniques were compared to find the most appropriate low-cost kit for use in a real application.
Figure: (a) healthy olives, (b) defected class (A), (c) defected class (B)
The first developed algorithm is called the Texture Homogeneity Measuring Technique (T.H.M.T.) and it consists of five steps. First, images are collected and then pre-processed by applying a grayscale conversion. The next step is to extract objects by segmenting the images into olives and background. The defects are then obtained by scanning the image horizontally; pixels are labeled with ‘0’ for a healthy area and ‘1’ if a defect is present.
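The scan-and-label step described above can be sketched as follows. The intensity threshold, the defect-area ratio, and the helper names are illustrative assumptions, not values from the paper:

```python
import numpy as np

def label_defects(gray, olive_mask, defect_thresh=90):
    """Scan the grayscale image row by row and label each olive pixel
    '0' (healthy) or '1' (defect, modeled here as a dark spot)."""
    labels = np.zeros_like(gray, dtype=np.uint8)
    for r in range(gray.shape[0]):            # horizontal scan
        for c in range(gray.shape[1]):
            if olive_mask[r, c] and gray[r, c] < defect_thresh:
                labels[r, c] = 1
    return labels

def classify(labels, olive_mask, ratio=0.05):
    """Call the fruit defected when enough olive pixels are labeled '1'."""
    return "defected" if labels.sum() > ratio * olive_mask.sum() else "healthy"
```

In practice the segmentation step would supply `olive_mask`; here it is taken as given so the labeling logic stands on its own.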
The second proposed method is called the Special Image Convolution Algorithm (S.I.C.A.) and is similar to edge detection, but uses specific 7×7 kernels applied both horizontally and vertically. The results are then thresholded using values determined empirically by the authors.
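A minimal sketch of this kind of directional 7×7 convolution is given below. The Sobel-like kernels and the threshold are hypothetical stand-ins; the paper's actual S.I.C.A. coefficients are not reproduced here:

```python
import numpy as np

def directional_kernels(size=7):
    """Hypothetical 7x7 gradient kernels: columns weighted -3..3 for the
    horizontal direction, and the transpose for the vertical one."""
    half = size // 2
    col = np.arange(-half, half + 1)
    kx = np.tile(col, (size, 1)).astype(float)   # horizontal gradient
    return kx, kx.T                              # vertical = transpose

def convolve(img, k):
    """Plain valid-mode 2-D sliding-window correlation."""
    h, w = img.shape
    s = k.shape[0]
    out = np.zeros((h - s + 1, w - s + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r + s, c:c + s] * k)
    return out

def sica_map(img, thresh=100.0):
    """Combine horizontal and vertical responses, then threshold."""
    kx, ky = directional_kernels()
    gx, gy = convolve(img, kx), convolve(img, ky)
    return np.hypot(gx, gy) > thresh             # binary defect-edge map
```

A uniform region produces no response, while an intensity step (a defect boundary) produces a strong one, which is what makes the thresholding step meaningful.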
Hassan, Nashat M. Hussain, and Ahmed A. Nashat. “New effective techniques for automatic detection and classification of external olive fruits defects based on image processing techniques.” Multidimensional Systems and Signal Processing 30.2 (2019): 571-589.
Neutron stars — the compressed remains of massive stars gone supernova — are the densest “normal” objects in the known universe. (Black holes are technically denser, but far from normal.) Just a single sugar-cube’s worth of neutron-star material would weigh 100 million tons here on Earth, or about the same as the entire human population. Though astronomers and physicists have studied and marveled at these objects for decades, many mysteries remain about the nature of their interiors: Do crushed neutrons become “superfluid” and flow freely? Do they break down into a soup of subatomic quarks or other exotic particles? What is the tipping point when gravity wins out over matter and forms a black hole?
A team of astronomers using the National Science Foundation’s (NSF) Green Bank Telescope (GBT) has brought us closer to finding the answers. The researchers, members of the NANOGrav Physics Frontiers Center, discovered that a rapidly rotating millisecond pulsar, called J0740+6620, is the most massive neutron star ever measured, packing 2.17 times the mass of our Sun into a sphere only 30 kilometers across. This measurement approaches the limits of how massive and compact a single object can become without crushing itself down into a black hole. Recent work involving gravitational waves observed from colliding neutron stars by LIGO suggests that 2.17 solar masses might be very near that limit.
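Plugging the quoted figures (2.17 solar masses, 30 kilometers across) into a back-of-envelope density calculation lands at a few hundred million metric tons per cubic centimetre, the same order of magnitude as the sugar-cube figure above:

```python
from math import pi

# Back-of-envelope check of the sugar-cube claim, using the quoted
# figures: 2.17 solar masses packed into a sphere 30 km across.
M_SUN = 1.989e30                      # kg
mass = 2.17 * M_SUN                   # kg
radius = 15e3                         # m (half of 30 km)
volume = 4.0 / 3.0 * pi * radius**3   # m^3

density = mass / volume               # kg per cubic metre
sugar_cube_kg = density * 1e-6        # mass of one cubic centimetre
print(f"one cm^3 weighs ~{sugar_cube_kg / 1e3:.1e} metric tons")
```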
Pulsars get their name because of the twin beams of radio waves they emit from their magnetic poles. These beams sweep across space in a lighthouse-like fashion. Some rotate hundreds of times each second. Since pulsars spin with such phenomenal speed and regularity, astronomers can use them as the cosmic equivalent of atomic clocks. Such precise timekeeping helps astronomers study the nature of spacetime, measure the masses of stellar objects, and improve their understanding of general relativity.
As the ticking pulsar passes behind its white dwarf companion, there is a subtle (on the order of 10 millionths of a second) delay in the arrival time of the signals. This phenomenon is known as “Shapiro Delay.” In essence, gravity from the white dwarf star slightly warps the space surrounding it, in accordance with Einstein’s general theory of relativity. This warping means the pulses from the rotating neutron star have to travel just a little bit farther as they wend their way around the distortions of spacetime caused by the white dwarf.
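The size of this effect can be estimated from the companion's mass alone: the characteristic Shapiro-delay timescale is roughly 2GM/c³. The quarter-solar-mass white dwarf used below is an illustrative guess, not the fitted companion mass, and for nearly edge-on orbits a logarithmic geometric factor can raise the result by several times, toward the quoted ten-microsecond scale:

```python
# Order-of-magnitude check of the Shapiro delay: its characteristic
# timescale is roughly 2*G*M/c**3 for a companion of mass M.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # kg

def shapiro_scale(m_companion_kg):
    return 2.0 * G * m_companion_kg / C**3

delay = shapiro_scale(0.25 * M_SUN)   # 0.25 M_sun is an assumed mass
print(f"characteristic delay ~{delay * 1e6:.1f} microseconds")
```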
In Optica, The Optical Society’s journal for high-impact research, researchers from The Hong Kong University of Science and Technology, Hong Kong, detail their two-layer all-optical neural network and successfully apply it to a complex classification task. In conventional hybrid optical neural networks, optical components are typically used for linear operations, while nonlinear activation functions — the functions that simulate the way neurons in the human brain respond — are usually implemented electronically, because nonlinear optics typically requires high-power lasers that are difficult to integrate into an optical neural network.
Figure: fully functional two-layer all-optical neural network (AONN)
To overcome this challenge, the researchers used cold atoms with electromagnetically induced transparency to perform nonlinear functions. “This light-induced effect can be achieved with very weak laser power,” said Shengwang Du, a member of the research team. “Because this effect is based on nonlinear quantum interference, it might be possible to extend our system into a quantum neural network that could solve problems intractable by classical methods.”
To confirm the capability and feasibility of the new approach, the researchers constructed a two-layer fully-connected all optical neural network with 16 inputs and two outputs. The researchers used their all-optical network to classify the order and disorder phases of the Ising model, a statistical model of magnetism. The results showed that the all-optical neural network was as accurate as a well-trained computer-based neural network.
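For intuition, here is a rough software analogue of such a network: a two-layer fully connected net with 16 inputs and two outputs, trained to separate ordered from disordered spin configurations. The toy data (biased versus random ±1 "spins"), the hidden-layer width, and all hyperparameters are assumptions for illustration; this is neither the authors' dataset nor their optical hardware:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the task: 16 "spins" valued +1/-1. Ordered samples
# are mostly aligned; disordered samples are fair coin flips.
def sample(n):
    ordered = np.sign(rng.normal(1.0, 0.6, (n, 16)))
    disordered = rng.choice([-1.0, 1.0], (n, 16))
    x = np.vstack([ordered, disordered])
    y = np.array([0] * n + [1] * n)          # 0 = order, 1 = disorder
    return x, y

# Two-layer fully connected net (16 inputs, 2 outputs), trained with
# plain gradient descent on a softmax cross-entropy loss.
W1 = rng.normal(0.0, 0.3, (16, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.3, (8, 2));  b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)                 # hidden layer nonlinearity
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

x, y = sample(200)
onehot = np.eye(2)[y]
for _ in range(500):
    h, p = forward(x)
    d2 = (p - onehot) / len(x)               # dLoss/dlogits
    dW2, db2 = h.T @ d2, d2.sum(axis=0)
    dh = (d2 @ W2.T) * (1.0 - h ** 2)        # back through tanh
    dW1, db1 = x.T @ dh, dh.sum(axis=0)
    W2 -= dW2; b2 -= db2; W1 -= dW1; b1 -= db1

acc = float((forward(x)[1].argmax(axis=1) == y).mean())
```

Because ordered configurations have a large net magnetization and disordered ones do not, even this small network separates the two phases reliably.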
Using mouse models, Caltech researchers have now determined that strong, stable memories are encoded by “teams” of neurons all firing in synchrony, providing redundancy that enables these memories to persist over time. The research has implications for understanding how memory might be affected after brain damage, such as by strokes or Alzheimer’s disease. The work was done in the laboratory of Carlos Lois, research professor of biology, and is described in a paper that appears in the August 23 issue of the journal Science.
Led by postdoctoral scholar Walter Gonzalez, the team developed a test to examine mice’s neural activity as they learn about and remember a new place. In the test, a mouse was placed in a straight enclosure, about 5 feet long with white walls. Unique symbols marked different locations along the walls — for example, a bold plus sign near the right-most end and an angled slash near the center. Sugar water (a treat for mice) was placed at either end of the track. While the mouse explored, the researchers measured the activity of specific neurons in the mouse hippocampus (the region of the brain where new memories are formed) that are known to encode for places.
When an animal was initially placed in the track, it was unsure of what to do and wandered left and right until it came across the sugar water. In these cases, single neurons were activated when the mouse took notice of a symbol on the wall. But over multiple experiences with the track, the mouse became familiar with it and remembered the locations of the sugar. As the mouse became more familiar, more and more neurons were activated in synchrony by seeing each symbol on the wall. Essentially, the mouse was recognizing where it was with respect to each unique symbol.
To study how memories fade over time, the researchers then kept the mice away from the track for up to 20 days. Upon returning to the track after this break, mice that had formed strong memories encoded by higher numbers of neurons remembered the task quickly. Even though some neurons showed different activity, the mouse’s memory of the track was clearly identifiable when analyzing the activity of large groups of neurons. In other words, using groups of neurons enables the brain to have redundancy and still recall memories even if some of the original neurons fall silent or are damaged.
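The redundancy argument can be pictured with a toy majority-vote simulation. All the probabilities below are invented for illustration and have no connection to the measured neural data:

```python
import random

random.seed(1)

# Each neuron independently "votes" for the remembered location with
# probability p_correct, unless it has fallen silent (p_silent). The
# memory is read out as the majority vote of the surviving neurons.
def recall(n_neurons, p_correct=0.7, p_silent=0.3):
    votes = [1 if random.random() < p_correct else 0
             for _ in range(n_neurons)
             if random.random() >= p_silent]
    return bool(votes) and sum(votes) > len(votes) / 2

def reliability(n_neurons, trials=2000):
    """Fraction of trials on which the memory is correctly recalled."""
    return sum(recall(n_neurons) for _ in range(trials)) / trials
```

With these numbers a single neuron recalls the location on only about half of trials, while a team of fifty almost always does, even with 30% of the units silenced: redundancy turns unreliable components into a reliable memory.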
A new Hubble Space Telescope view of Jupiter, taken on June 27, 2019, reveals the giant planet’s trademark Great Red Spot, and a more intense color palette in the clouds swirling in Jupiter’s turbulent atmosphere than seen in previous years. The colors, and their changes, provide important clues to ongoing processes in Jupiter’s atmosphere.
The bands are created by differences in the thickness and height of the ammonia ice clouds. The colorful bands, which flow in opposite directions at various latitudes, result from different atmospheric pressures. Lighter bands rise higher and have thicker clouds than the darker bands.
Among the most striking features in the image are the rich colors of the clouds moving toward the Great Red Spot, a storm rolling counterclockwise between two bands of clouds. These two cloud bands, above and below the Great Red Spot, are moving in opposite directions. The red band above and to the right (northeast) of the Great Red Spot contains clouds moving westward and around the north of the giant tempest. The white clouds to the left (southwest) of the storm are moving eastward to the south of the spot.
A team of researchers from Carnegie Mellon University has published a paper in Science that details a new technique allowing anyone to 3D bioprint tissue scaffolds out of collagen, the major structural protein in the human body. This first-of-its-kind method brings the field of tissue engineering one step closer to being able to 3D print a full-sized, adult human heart. The technique, known as Freeform Reversible Embedding of Suspended Hydrogels (FRESH), has allowed the researchers to overcome many challenges associated with existing 3D bioprinting methods, and to achieve unprecedented resolution and fidelity using soft and living materials.
This method is truly exciting for the field of 3D bioprinting because it allows collagen scaffolds to be printed at the large scale of human organs. And it is not limited to collagen, as a wide range of other soft gels including fibrin, alginate, and hyaluronic acid can be 3D bioprinted using the FRESH technique, providing a robust and adaptable tissue engineering platform. Importantly, the researchers also developed open-source designs so that nearly anyone, from medical labs to high school science classes, can build and have access to low-cost, high-performance 3D bioprinters.
Looking forward, FRESH has applications in many aspects of regenerative medicine, from wound repair to organ bioengineering, but it is just one piece of a growing biofabrication field. “Really what we’re talking about is the convergence of technologies,” says Feinberg. “Not just what my lab does in bioprinting, but also from other labs and small companies in the areas of stem cell science, machine learning, and computer simulation, as well as new 3D bioprinting hardware and software.”
Researchers have trained honeybees to match a character to a specific quantity, revealing they are able to learn that a symbol represents a numerical amount. It’s a finding that sheds new light on how numerical abilities may have evolved over millennia and even opens new possibilities for communication between humans and other species.
The discovery, from the same Australian-French team that found bees get the concept of zero and can do simple arithmetic, also points to new approaches for bio-inspired computing that can replicate the brain’s highly efficient approach to processing. The RMIT University-led study is published in the Proceedings of the Royal Society B. Associate Professor Adrian Dyer said while humans were the only species to have developed systems to represent numbers, like the Arabic numerals we use each day, the research shows the concept can be grasped by brains far smaller than ours.
Studies have shown that a number of non-human animals have been able to learn that symbols can represent numbers, including pigeons, parrots, chimpanzees and monkeys. Some of their feats have been impressive — chimpanzees were taught Arabic numbers and could order them correctly, while an African grey parrot called Alex was able to learn the names of numbers and could sum the quantities.
Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, the human skin with the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by a team of researchers at the National University of Singapore (NUS). The new electronic skin system achieved ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.
Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. While the ACES electronic nervous system detects signals like the human sensory nervous system, it is made up of a network of sensors connected via a single electrical conductor, unlike the nerve bundles in the human skin. It is also unlike existing electronic skins, which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up. ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contacts between different sensors in less than 60 nanoseconds — the fastest ever achieved for an electronic skin technology — even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.
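One way to picture many sensors sharing a single conductor is signature-based signalling: each sensor stamps its events with a unique pulse pattern, and a matched filter picks them apart at the receiver. The sketch below is an illustrative stand-in, not the actual ACES coding scheme; the signature length, threshold, and decoding rule are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Each sensor owns a unique pseudo-random +/-1 pulse signature. Events
# are summed onto one shared line and recovered by correlation.
N_SENSORS, SIG_LEN, LINE_LEN = 8, 64, 256
signatures = rng.choice([-1.0, 1.0], (N_SENSORS, SIG_LEN))

def transmit(events):
    """events: list of (sensor_id, start_time) pulses on the shared line."""
    line = np.zeros(LINE_LEN)
    for sid, t in events:
        line[t:t + SIG_LEN] += signatures[sid]
    return line

def decode(line, sid):
    """Matched filter: correlate the line with one sensor's signature and
    report the event time if the peak clears a threshold, else None."""
    corr = np.correlate(line, signatures[sid], mode="valid")
    return int(np.argmax(corr)) if corr.max() > SIG_LEN / 2 else None
```

Two pulses that partly overlap in time can still be attributed to the right sensors, which is the property that lets many sensors share one wire without a dedicated line per sensor.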
Pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch. Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.
Machine-enhanced humans — or cyborgs, as they are known in science fiction — could be one step closer to becoming a reality, thanks to new research from the Lieber Group at Harvard University, together with scientists from the University of Surrey and Yonsei University. The ability to read electrical activity from cells is the foundation of many biomedical procedures, such as brain activity mapping and neural prosthetics. Developing new tools for intracellular electrophysiology (measuring the electric currents running within cells) that push the limits of what is physically possible (spatiotemporal resolution) while reducing invasiveness could provide a deeper understanding of electrogenic cells and their networks in tissues, as well as new directions for human-machine interfaces.
Dr Yunlong Zhao from the ATI at the University of Surrey said: “If our medical professionals are to continue to understand our physical condition better and help us live longer, it is important that we continue to push the boundaries of modern science in order to give them the best possible tools to do their jobs. For this to be possible, an intersection between humans and machines is inevitable. Our ultra-small, flexible nanowire probes could be a very powerful tool as they can measure intracellular signals with amplitudes comparable to those measured with patch clamp techniques; with the advantage that the device is scalable, it causes less discomfort and no fatal damage to the cell (cytosol dilation). Through this work, we found clear evidence for how both size and curvature affect device internalisation and the intracellular recording signal.”