Automatic Detection and Classification of External Olive Fruits Defects

Olives are an important agricultural product, so the industry is interested in detecting their external defects. Researchers Nashat M. Hussain Hassan and Ahmed A. Nashat from Fayoum University, Egypt, have developed an image processing method that classifies olive fruits as healthy or defective. Furthermore, a series of techniques have been compared to find the most appropriate low-cost kit that can be used in a real application.


Figure: (a) healthy olives, (b) defected class (A), (c) defected class (B)

The first developed algorithm is called the Texture Homogeneity Measuring Technique (T.H.M.T.) and consists of five steps. First, images are collected and then pre-processed by converting them to grayscale. The next step is to extract objects by segmenting the images into olives and background. Defects are then obtained by scanning the image horizontally, labeling pixels '0' for a healthy area and '1' where a defect is present.
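The scan-and-label idea can be sketched as follows. This is a minimal illustration of the approach, not the paper's implementation: the background threshold, the deviation threshold, and the use of the olive's mean intensity as the homogeneity reference are all assumptions made here for the sake of a runnable example.

```python
import numpy as np

def thmt_defect_map(gray, bg_thresh=40, defect_dev=50):
    """Sketch of the THMT idea (thresholds are assumed, not from the paper).

    gray: 2-D uint8 grayscale image.
    Returns a label map: -1 background, 0 healthy olive pixel, 1 defect.
    """
    labels = np.full(gray.shape, -1, dtype=np.int8)
    olive = gray > bg_thresh                  # segment olive from dark background
    if not olive.any():
        return labels
    mean_olive = gray[olive].mean()           # homogeneous-texture reference
    # Scan each row horizontally; flag pixels deviating strongly from the mean.
    for r in range(gray.shape[0]):
        row_mask = olive[r]
        labels[r, row_mask] = 0               # healthy olive area
        deviant = row_mask & (np.abs(gray[r].astype(int) - mean_olive) > defect_dev)
        labels[r, deviant] = 1                # defect present
    return labels

# Toy example: a bright "olive" with one dark blemish on a dark background.
img = np.full((8, 8), 10, dtype=np.uint8)
img[2:6, 2:6] = 200          # olive body
img[3, 3] = 60               # defect spot (darker than the olive)
lab = thmt_defect_map(img)
print(int((lab == 1).sum()), "defect pixel(s)")
```

On the toy image, only the blemish pixel deviates enough from the olive's mean intensity to be labeled '1'; the rest of the fruit is labeled '0' and the background is left out of the comparison.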

The second proposed method, the Special Image Convolution Algorithm (S.I.C.A.), is similar to edge detection but uses specific 7×7 kernels applied both horizontally and vertically. The results are then thresholded based on the values observed by the authors.
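The convolve-then-threshold pipeline might look like the sketch below. The 7×7 kernel here is an assumed Prewitt-like extension and the threshold is illustrative; the paper's exact kernels and threshold values are not reproduced.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 2-D convolution with zero padding (same output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    flipped = kernel[::-1, ::-1]              # true convolution flips the kernel
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = (padded[r:r + kh, c:c + kw] * flipped).sum()
    return out

# Assumed 7x7 horizontal-gradient kernel; its transpose acts vertically.
kx = np.zeros((7, 7))
kx[:, :3] = -1.0
kx[:, 4:] = 1.0
ky = kx.T

def sica_defect_map(gray, thresh=300.0):
    """SICA-style sketch: convolve in both directions, then threshold.
    The threshold value is illustrative, not taken from the paper."""
    gx = conv2d_same(gray, kx)                # horizontal pass
    gy = conv2d_same(gray, ky)                # vertical pass
    magnitude = np.hypot(gx, gy)
    return magnitude > thresh

# Toy example: a sharp intensity edge, the kind of transition a defect
# boundary would produce.
img = np.full((10, 10), 100, dtype=np.uint8)
img[4:, :] = 180
mask = sica_defect_map(img)
print("pixels flagged:", int(mask.sum()))
```

The larger 7×7 support makes the response less sensitive to single-pixel noise than a classic 3×3 edge kernel, at the cost of blurring the localization of the detected boundary.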

Hassan, Nashat M. Hussain, and Ahmed A. Nashat. “New effective techniques for automatic detection and classification of external olive fruits defects based on image processing techniques.” Multidimensional Systems and Signal Processing 30.2 (2019): 571-589.

Article link

Massive neutron star detected

Neutron stars — the compressed remains of massive stars gone supernova — are the densest “normal” objects in the known universe. (Black holes are technically denser, but far from normal.) Just a single sugar-cube worth of neutron-star material would weigh 100 million tons here on Earth, or about the same as the entire human population. Though astronomers and physicists have studied and marveled at these objects for decades, many mysteries remain about the nature of their interiors: Do crushed neutrons become “superfluid” and flow freely? Do they break down into a soup of subatomic quarks or other exotic particles? What is the tipping point when gravity wins out over matter and forms a black hole?


A team of astronomers using the National Science Foundation’s (NSF) Green Bank Telescope (GBT) has brought us closer to finding the answers. The researchers, members of the NANOGrav Physics Frontiers Center, discovered that a rapidly rotating millisecond pulsar, called J0740+6620, is the most massive neutron star ever measured, packing 2.17 times the mass of our Sun into a sphere only 30 kilometers across. This measurement approaches the limits of how massive and compact a single object can become without crushing itself down into a black hole. Recent work involving gravitational waves observed from colliding neutron stars by LIGO suggests that 2.17 solar masses might be very near that limit.

Pulsars get their name because of the twin beams of radio waves they emit from their magnetic poles. These beams sweep across space in a lighthouse-like fashion. Some rotate hundreds of times each second. Since pulsars spin with such phenomenal speed and regularity, astronomers can use them as the cosmic equivalent of atomic clocks. Such precise timekeeping helps astronomers study the nature of spacetime, measure the masses of stellar objects, and improve their understanding of general relativity.

As the ticking pulsar passes behind its white dwarf companion, there is a subtle (on the order of 10 millionths of a second) delay in the arrival time of the signals. This phenomenon is known as “Shapiro Delay.” In essence, gravity from the white dwarf star slightly warps the space surrounding it, in accordance with Einstein’s general theory of relativity. This warping means the pulses from the rotating neutron star have to travel just a little bit farther as they wend their way around the distortions of spacetime caused by the white dwarf.
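The size of that delay can be estimated with the standard pulsar-timing expression for the Shapiro delay. The companion mass and inclination used below are illustrative assumptions (a light white dwarf seen nearly edge-on), not measured values from the study; the point is only that the formula lands in the quoted ~10-microsecond regime.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def shapiro_delay(companion_mass_msun, sin_i, orbital_phase):
    """Approximate Shapiro delay of a pulsar signal grazing its companion.

    Uses the standard timing formula
        dt = -2 * (G*m_c/c^3) * ln(1 - sin(i) * sin(phi)),
    where m_c is the companion mass, i the orbital inclination, and
    phi the orbital phase (phi = pi/2 at superior conjunction, i.e.
    when the pulsar passes behind the companion).
    """
    r = G * companion_mass_msun * M_SUN / C**3   # "range" parameter, in seconds
    return -2.0 * r * math.log(1.0 - sin_i * math.sin(orbital_phase))

# Assumed values: ~0.26 solar-mass white dwarf, nearly edge-on orbit,
# evaluated at superior conjunction where the delay peaks.
dt = shapiro_delay(0.26, 0.999, math.pi / 2)
print(f"peak Shapiro delay ~ {dt * 1e6:.1f} microseconds")
```

Because the logarithm diverges as the orbit approaches exactly edge-on, even a modest white dwarf produces a measurable delay in a favorably inclined system, which is what makes the mass measurement possible.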


Conference Paper: Semantic Segmentation Learning for Autonomous UAVs using Simulators and Real Data


Abstract: Deep learning requires large amounts of data for training models. For the task of semantic segmentation, manual annotation is time-consuming and difficult. With the recent advances in game engines, simulators have become more popular as they can instantly generate ground truth data for multiple sensors. In this paper, we make a thorough survey of the most recent and popular simulators and synthetic datasets, exploring solutions for semantic segmentation on images taken from drones. We also propose an extension of the CARLA simulator by introducing an aerial camera. We evaluate a deep learning model trained on both synthetic and real data and present a new dataset which comprises both.

Paper download link: Semantic Segmentation Learning for Autonomous UAVs using Simulators and Real Data

ResearchGate link: paper