Camera Calibration with MATLAB

Continuing the study of mathematics in the field of optical vision, we take a look at how camera calibration can be performed in MATLAB.

Video description: Camera calibration is the process of estimating the intrinsic, extrinsic, and lens-distortion parameters of a camera. It is an essential process to correct for any optical distortion artifacts, estimate the distance of an object from a camera, measure the size of objects in an image, and construct 3D views for augmented reality systems. Computer Vision Toolbox™ provides apps and functions to perform all essential tasks in the camera calibration workflow, including:

– Fully automatic detection and location of a checkerboard calibration pattern, including corner detection with subpixel accuracy

– Estimation of all intrinsic and extrinsic parameters, including axis skew

– Calculation of radial and tangential lens distortion coefficients

– Correction of optical distortion

– Support for calibrating standard, fisheye lens, and stereo vision cameras

The Camera Calibrator and Stereo Camera Calibrator apps both allow you to interactively select the calibration images, set up the distortion coefficients, and then estimate the camera parameters, which you can export to MATLAB.
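
For readers who prefer scripting the workflow instead of the apps, here is a minimal sketch using Computer Vision Toolbox functions. The image folder name and the 25 mm square size are assumptions to adapt to your own setup:

    % Minimal calibration sketch (requires Computer Vision Toolbox).
    images = imageDatastore('calib_images');   % assumed folder of checkerboard photos
    [imagePoints, boardSize] = detectCheckerboardPoints(images.Files);

    squareSize  = 25;  % side of one checkerboard square in millimetres (assumed)
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);

    I = readimage(images, 1);
    cameraParams = estimateCameraParameters(imagePoints, worldPoints, ...
        'ImageSize', [size(I, 1), size(I, 2)]);

    % Use the estimated parameters to correct the optical distortion.
    J = undistortImage(I, cameraParams);
    imshowpair(I, J, 'montage');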

Computer Vision Toolbox: https://bit.ly/2XEJCL4

MATLAB for Image Processing and Computer Vision: https://bit.ly/2WUlzEi


Intrinsic and Extrinsic Matrices of a Camera

Today we take a look at the fundamental theory behind camera parameters, to better understand how matrix multiplication is employed to map 3D world points to image pixels.

Video description: First Principles of Computer Vision is a lecture series presented by Shree Nayar who is faculty in the Computer Science Department, School of Engineering and Applied Sciences, Columbia University. Computer Vision is the enterprise of building machines that “see.” This series focuses on the physical and mathematical underpinnings of vision and has been designed for students, practitioners and enthusiasts who have no prior knowledge of computer vision.
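
As a pointer to the lecture's central result: in the standard pinhole model, a world point $(X, Y, Z)$ maps to a pixel $(u, v)$ through one matrix multiplication in homogeneous coordinates,

\[
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\underbrace{\begin{bmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{K \text{ (intrinsic)}}
\underbrace{\begin{bmatrix} R & \mathbf{t} \end{bmatrix}}_{\text{(extrinsic)}}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\]

where $K$ holds the focal lengths $f_x, f_y$, the axis skew $\gamma$, and the principal point $(c_x, c_y)$; the extrinsic matrix $[R \mid \mathbf{t}]$ rotates and translates world coordinates into camera coordinates; and $s$ is the projective scale (depth along the optical axis).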

Faster path planning for rubble-roving robots

Robots that need to use their arms to make their way across treacherous terrain just got a speed upgrade with a new path planning approach. The improved path planning algorithm found successful paths three times as often as standard algorithms, while needing much less processing time.

A new algorithm speeds up path planning for robots that use arm-like appendages to maintain balance on treacherous terrain such as disaster areas or construction sites, U-M researchers have shown. The research enables robots to determine how difficult the terrain is before calculating a successful path forward, which might include bracing against a wall with one or two hands while taking the next step forward.

The method uses machine learning to train the robot to place its hands and feet so as to maintain balance and make progress; a divide-and-conquer approach then splits the path according to traversal difficulty, as sketched below. To do this, the planner needs a geometric model of the entire environment, which could be supplied in practice by a flying drone that scouts ahead of the robot. In a virtual experiment with a humanoid robot in a corridor of rubble, the team's method outperformed previous methods in both success rate and total planning time, which matters when quick action is needed in disaster scenarios. Specifically, over 50 trials, their method reached the goal 84% of the time compared to 26% for the basic path planner, and took just over two minutes to plan compared to over three minutes for the basic path planner.
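
To make the divide-and-conquer idea concrete, here is an illustrative MATLAB sketch, not the authors' implementation: each path segment gets a difficulty score from a stand-in traversability model, and the expensive hand-contact planner is reserved for the hard segments. The roughness feature, the 0.5 threshold, and both planner labels are hypothetical.

    % Illustrative only: a stand-in score replaces the learned
    % traversability model described in the paper.
    scoreDifficulty = @(seg) seg.roughness;
    segments = struct('roughness', {0.2, 0.8, 0.4});   % toy path segments
    plan = strings(1, numel(segments));
    for k = 1:numel(segments)
        if scoreDifficulty(segments(k)) < 0.5          % assumed threshold
            plan(k) = "walk";                          % cheap foot-only planning
        else
            plan(k) = "brace-with-hands";              % full contact planning
        end
    end
    disp(plan)   % "walk"  "brace-with-hands"  "walk"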

Source (University of Michigan. “Faster path planning for rubble-roving robots.” ScienceDaily. ScienceDaily, 13 August 2021.)

Original paper: Lin, Y.C. and Berenson, D., 2021. Long-horizon humanoid navigation planning using traversability estimates and previous experience. Autonomous Robots, 45(6), pp.937-956.

Melodies of an Endless Journey

Video description: The free and peaceful Mondstadt, as well as the bustling port of Liyue, mark every step of your journey. The melodies of wind and rock intertwine on a whole new stage to compose a unique chapter of your adventure.

For the first anniversary of Genshin Impact, its developer miHoYo prepared an exquisite concert that takes viewers on a ride through the game's world, showcasing its landscapes, characters and stories.

Virgin Hyperloop

In an ambitious promotional video, Virgin Hyperloop explains how it envisions its high-speed transportation system working in the future. Pods that will hold up to 28 passengers will travel at speeds surpassing 670 miles per hour — three times faster than high-speed rail and 10 times faster than traditional rail — using proprietary magnetic levitation and propulsion that guides the vehicles on the track. The pods will travel in convoys down the tube so they can head to different destinations. The company has made some bold claims, betting its system will be more sustainable, cost-effective and convenient than presently available modes of transportation, leaving many skeptical about whether the plans will ever come to fruition as promised. 

Here are some insights from the interview with Virgin Hyperloop CEO and co-founder Josh Giegel, for Changing America:

Tell me about the technology. What makes it different from a maglev — magnetic levitation — train in terms of infrastructure and potential? 

What we started working on over the last few years is this idea of “smart pod, dumb tube.” Instead of having switches that move like a train, we just have things that act like an off-ramp so if the pod [the vehicle passengers ride in] wants to get off and take people to a certain city it just pulls off by turning on an electromagnet. So you can now have a really high-capacity system without all these safety risks associated with track-switching.  You put the pod inside of a tube, you take most of the air out, you have a very low energy consumption, you actually make it inherently safer, you make it weatherproof, and you can move as many people as a 30-lane highway in the space of a tube going each direction. It’s all from this foundational premise that we created: new propulsion, new levitation, new batteries, new ways of working all these things together to make a “smart pod, dumb tube,” so a pod one hundred years from now could ride in the same tube we build today. 

What’s the timeline like? When can I buy a ticket to ride? 

What we set out to do last year was show the technology could be made safe. That culminated in myself and one of my colleagues Sara riding on a prototype in November. I think that allayed a lot of concerns about whether we can make it safe.  The next level is getting approved by an independent body and commercializing the technology. We’re in the process of building our commercial technology now, which are these 25 to 30 passenger pods. 

We’ll begin to look at pilot projects that will move cargo first, so think of shorter projects starting around the 2024 through 2026 timeframe. At the same time we’ll be getting independent safety approval needed to get a product certified for passengers. And then ultimately from there go into building the projects out from 2026 through the rest of the decade. So you’ll be looking at the decade of hyperloop, starting with Sara and I riding on it and ending with, I’m hoping, billions of passengers riding, but I will settle for tens, if not hundreds, of millions of passengers in the U.S. and around the world. 

Source (The Hill, “Will the 2020s be the decade of hyperloop?”, 01.09.2021)

Unveiling a century-old mystery: Where the Milky Way’s cosmic rays come from

Astronomers have succeeded in quantifying the proton and electron components of cosmic rays in a supernova remnant. At least 70% of the very-high-energy gamma rays emitted from cosmic rays are due to relativistic protons, according to the novel imaging analysis of radio, X-ray, and gamma-ray radiation. The acceleration site of protons, the main components of cosmic rays, has been a 100-year mystery in modern astrophysics.

Credit: Nagoya University

The novel imaging analysis is the first to quantitatively show the amount of cosmic rays being produced in a supernova remnant, an epoch-making step in the elucidation of their origin.

The originality of this research lies in representing the gamma-ray radiation as a linear combination of proton and electron components. Astronomers already knew that the intensity of gamma rays from protons is proportional to the interstellar gas density obtained from radio-line imaging observations; gamma rays from electrons, in turn, are expected to be proportional to the X-ray intensity from electrons. The researchers therefore expressed the total gamma-ray intensity as the sum of two components, one of proton origin and the other of electron origin, a method first proposed in this study that leads to a unified understanding of three independent observables. As a result, gamma rays from protons and electrons were shown to account for 70% and 30% of the total gamma rays, respectively, the first time the two origins have been quantified. The results also demonstrate that gamma rays from protons dominate in interstellar gas-rich regions, whereas gamma rays from electrons are enhanced in gas-poor regions. This confirms that the two mechanisms work together and supports the predictions of previous theoretical studies.
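
Schematically, the decomposition fits the gamma-ray map as a weighted sum of the two proxy maps; the notation below is ours, not the paper's:

\[
N_{\gamma}(x, y) = a \, N_{\mathrm{H}}(x, y) + b \, I_{\mathrm{X}}(x, y),
\]

where $N_{\mathrm{H}}$ is the interstellar gas column density from radio-line imaging (a proxy for the proton component), $I_{\mathrm{X}}$ is the X-ray intensity (a proxy for the electron component), and the fitted weights $a$ and $b$ give the quoted 70%/30% split.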

Source (Nagoya University. “Unveiling a century-old mystery: Where the Milky Way’s cosmic rays come from.” ScienceDaily. ScienceDaily, 23 August 2021.)

Original paper: Fukui, Y., Sano, H., Yamane, Y., Hayakawa, T., Inoue, T., Tachihara, K., Rowell, G. and Einecke, S., 2021. Pursuing the origin of the gamma rays in RX J1713.7−3946, quantifying the hadronic and leptonic components. arXiv preprint arXiv:2105.02734.

A universal equation for the shape of an egg

Researchers have discovered a universal mathematical formula that can describe any bird’s egg existing in nature — a significant step in understanding not only the egg shape itself, but also how and why it evolved, thus making widespread biological and technological applications possible.

Credit: University of Kent

The egg, as one of the most traditional food products, has long attracted the attention of mathematicians, engineers, and biologists from an analytical point of view. As a main parameter in oomorphology, the shape of a bird’s egg has, to date, escaped a universally applicable mathematical formulation. Analysis of all egg shapes can be done using four geometric figures: sphere, ellipsoid, ovoid, and pyriform (conical or pear-shaped). The first three have a clear mathematical definition, each derived from the expression of the previous, but a formula for the pyriform profile has yet to be derived. To rectify this, the researchers have introduced an additional function into the ovoid formula.

The resulting mathematical model describes a completely novel geometric shape that can be characterized as the last stage in the evolution of the sphere → ellipsoid → Hügelschäffer ovoid transformation, and it is applicable to any egg geometry (the base expression is reproduced below). The required measurements are the egg length, its maximum breadth, the shift of the vertical axis, and the diameter at one quarter of the egg length from the pointed end. This mathematical analysis and description represents the sought-for universal formula and is a significant step in understanding not only the egg shape itself, but also how and why it evolved, thus making widespread biological and technological applications theoretically possible.
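
For reference, the Hügelschäffer ovoid on which the universal formula builds can be written as

\[
y(x) = \pm \frac{B}{2} \sqrt{\frac{L^{2} - 4x^{2}}{L^{2} + 8wx + 4w^{2}}}, \qquad -\frac{L}{2} \le x \le \frac{L}{2},
\]

where $L$ is the egg length, $B$ the maximum breadth, and $w$ the shift of the vertical axis from the egg's midpoint. The universal (pyriform) formula extends this expression with a correction term parameterized by the diameter measured a quarter of the length from the pointed end; its full form is given in the paper.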

Source (University of Kent. “A universal equation for the shape of an egg.” ScienceDaily. ScienceDaily, 31 August 2021.)

Original paper: Narushin, V.G., Romanov, M.N. and Griffin, D.K., 2021. Egg and math: introducing a universal formula for egg shape. Annals of the New York Academy of Sciences.

Dog colour patterns explained

Scientists have unraveled the enigma of inheritance of coat color patterns in dogs. The researchers discovered that a genetic variant responsible for a very light coat in dogs and wolves originated more than two million years ago in a now extinct relative of the modern wolf.

Credit: University of Bern

The Institute of Genetics of the University of Bern has worked on understanding dog colour patterns and discovered that the genetic variant responsible for a very light coat in dogs and wolves originated more than two million years ago in a now extinct relative of the modern wolf. Wolves and dogs can produce two different types of pigment: black eumelanin and yellow pheomelanin. A precisely regulated production of these two pigments at the right time and at the right place on the body gives rise to very different coat colour patterns. Prior to the study, four different patterns had been recognized in dogs and several genetic variants had been proposed as their cause. During the formation of coat colour, the so-called agouti signaling protein represents the body's main switch for the production of yellow pheomelanin: if the agouti signaling protein is present, the pigment-producing cells synthesize yellow pheomelanin; if it is absent, black eumelanin is formed.

Production of the agouti signaling protein is controlled by two promoters: a ventral promoter and a hair cycle-specific promoter. For the first time, the researchers characterized these two promoters in detail, in hundreds of dogs. They discovered two variants of the ventral promoter: one conveys the production of normal amounts of agouti signaling protein, while the other has higher activity and causes the production of an increased amount. The researchers further identified three different variants of the hair cycle-specific promoter. Combining the variants at the two promoters, they identified a total of five different combinations, which cause different coat colour patterns in dogs.

Source (University of Bern. “Genetic enigma solved: Inheritance of coat color patterns in dogs.” ScienceDaily. ScienceDaily, 12 August 2021.)

Original paper: Bannasch, D.L., Kaelin, C.B., Letko, A., Loechel, R., Hug, P., Jagannathan, V., Henkel, J., Roosje, P., Hytönen, M.K., Lohi, H. and Arumilli, M., 2021. Dog colour patterns explained by modular promoters of ancient canid origin. Nature Ecology & Evolution, pp.1-9.

Superstrata bike

Send Superstrata your dimensions, riding style and preferences, and they’ll 3D print you a carbon fibre bike frame made to fit. Prefer a stiffer ride? A bike for commuting, or for touring? Superstrata claim to have over 500,000 possible combinations. There are two versions available: the traditional Terra bike and the Ion e-bike. The Ion has a sleek in-tube battery (no bulky black boxes in sight), takes two hours to charge and lasts for up to 88 kilometres.

Credit: Superstrata

Source (ScienceFocus, “80 cool gadgets: Our pick of the best new tech for 2021”, 22.06.2021)


Toward next-generation brain-computer interface systems

A new kind of neural interface system that coordinates the activity of hundreds of tiny brain sensors could one day deepen understanding of the brain and lead to new medical therapies.

Credit: Brown University

Most current brain-computer interface (BCI) systems use one or two sensors to sample up to a few hundred neurons, but neuroscientists are interested in systems that are able to gather data from much larger groups of brain cells. Now, a team of researchers has taken a key step toward a new concept for a future BCI system — one that employs a coordinated network of independent, wireless microscale neural sensors, each about the size of a grain of salt, to record and stimulate brain activity. The sensors, dubbed “neurograins,” independently record the electrical pulses made by firing neurons and send the signals wirelessly to a central hub, which coordinates and processes the signals.

The team's first challenge was to design and simulate the electronics on a computer, going through several fabrication iterations to develop operational chips. The second challenge was developing the body-external communications hub that receives signals from those tiny chips. The device is a thin patch, about the size of a thumbprint, that attaches to the scalp outside the skull. It works like a miniature cellular phone tower, employing a network protocol to coordinate the signals from the neurograins, each of which has its own network address. The patch also supplies power wirelessly to the neurograins, which are designed to operate using a minimal amount of electricity.
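
The article does not specify the protocol itself; conceptually, though, the hub acts like a tiny cell tower polling each addressed sensor in its own time slot. A toy MATLAB sketch of that idea, with random noise standing in for the wireless readout and every number illustrative:

    % Toy time-division polling loop: one slot per network address.
    readSample = @(addr) randn();   % hypothetical stand-in for a sensor read
    nGrains    = 48;                % sensor count from the rodent experiment
    nFrames    = 100;               % polling frames to simulate
    recording  = zeros(nGrains, nFrames);
    for frame = 1:nFrames
        for addr = 1:nGrains        % the hub visits each address in turn
            recording(addr, frame) = readSample(addr);
        end
    end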

The goal of this new study was to demonstrate that the system could record neural signals from a living brain — in this case, the brain of a rodent. The team placed 48 neurograins on the animal’s cerebral cortex, the outer layer of the brain, and successfully recorded characteristic neural signals associated with spontaneous brain activity.

Source (Brown University. “Toward next-generation brain-computer interface systems.” ScienceDaily. ScienceDaily, 12 August 2021.)

Original paper: Lee, J., Leung, V., Lee, A.H., Huang, J., Asbeck, P., Mercier, P.P., Shellhammer, S., Larson, L., Laiwalla, F. and Nurmikko, A., 2021. Neural recording and stimulation using wireless networks of microimplants. Nature Electronics, pp.1-11.