Camera Calibration with MATLAB

Continuing the study of mathematics in the field of optical vision, we take a look at how camera calibration can be performed in MATLAB.

Video description: Camera calibration is the process of estimating the intrinsic, extrinsic, and lens-distortion parameters of a camera. It is an essential process to correct for any optical distortion artifacts, estimate the distance of an object from a camera, measure the size of objects in an image, and construct 3D views for augmented reality systems. Computer Vision Toolbox™ provides apps and functions to perform all essential tasks in the camera calibration workflow, including:

– Fully automatic detection and location of a checkerboard calibration pattern, including corner detection with subpixel accuracy

– Estimation of all intrinsic and extrinsic parameters, including axis skew

– Calculation of radial and tangential lens distortion coefficients

– Correction of optical distortion

– Support for calibrating standard, fisheye lens, and stereo vision cameras
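The radial and tangential distortion coefficients mentioned above follow the standard Brown–Conrady model used by most calibration tools. As an illustrative sketch in plain Python (this is not the Toolbox API, and the coefficient values below are hypothetical), applying that model to normalized image coordinates looks like:

```python
def distort(x, y, k1, k2, p1, p2, k3=0.0):
    """Apply the radial + tangential (Brown-Conrady) distortion model
    to normalized image coordinates (x, y).

    k1, k2, k3: radial distortion coefficients
    p1, p2:     tangential distortion coefficients
    """
    r2 = x * x + y * y  # squared distance from the optical axis
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# With all coefficients zero, the point is unchanged (no distortion);
# a positive k1 pushes points outward (barrel distortion is k1 < 0).
print(distort(0.1, 0.2, 0.1, 0, 0, 0))
```

Undistorting an image amounts to inverting this mapping numerically, which is part of what the calibrator's "correction of optical distortion" step does.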

The Camera Calibrator App and the Stereo Camera Calibrator App both let you interactively select calibration images, configure the distortion coefficients, and estimate camera parameters that you can export to MATLAB.

Computer Vision Toolbox: https://bit.ly/2XEJCL4

MATLAB for Image Processing and Computer Vision: https://bit.ly/2WUlzEi

——————————————————————————————————–

Get a free product Trial: https://goo.gl/ZHFb5u

Learn more about MATLAB: https://goo.gl/8QV7ZZ

Learn more about Simulink: https://goo.gl/nqnbLe

See What’s new in MATLAB and Simulink: https://goo.gl/pgGtod

© 2019 The MathWorks, Inc. MATLAB and Simulink are registered trademarks of The MathWorks, Inc.

See http://www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand names may be trademarks or registered trademarks of their respective holders.

Intrinsic and Extrinsic Matrices of a Camera

Today we take a look at the fundamental theory behind camera parameters to better understand how matrix multiplication is employed to map 3D world points to image pixels.
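The core idea of the lecture can be sketched in a few lines of Python: a point is first moved into camera coordinates by the extrinsics (rotation R and translation t), then projected to pixels by the intrinsic matrix K (focal lengths, principal point, skew). The numbers below are made up for illustration:

```python
def project(K, R, t, X):
    """Project a 3D world point X through extrinsics [R|t] and
    intrinsics K; returns pixel coordinates (u, v)."""
    # Camera coordinates: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Homogeneous image coordinates: x = K @ Xc
    x = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    # Perspective divide turns homogeneous coordinates into pixels
    return x[0] / x[2], x[1] / x[2]

# Hypothetical camera: 800 px focal length, principal point (320, 240),
# zero skew, camera at the world origin looking down +Z.
K = [[800, 0, 320], [0, 800, 240], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity rotation
t = [0, 0, 0]
u, v = project(K, R, t, [0.5, 0.25, 2.0])  # (520.0, 340.0)
```

Note that doubling the point's depth while keeping X and Y fixed moves its projection toward the principal point, which is exactly the perspective effect the matrix formulation encodes.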

Video description: First Principles of Computer Vision is a lecture series presented by Shree Nayar, a faculty member in the Computer Science Department, School of Engineering and Applied Sciences, Columbia University. Computer Vision is the enterprise of building machines that “see.” This series focuses on the physical and mathematical underpinnings of vision and has been designed for students, practitioners and enthusiasts who have no prior knowledge of computer vision.

Faster path planning for rubble-roving robots

Robots that need to use their arms to make their way across treacherous terrain just got a speed upgrade with a new path planning approach. The improved path planning algorithm found successful paths three times as often as standard algorithms, while needing much less processing time.

A new algorithm speeds up path planning for robots that use arm-like appendages to maintain balance on treacherous terrain such as disaster areas or construction sites, U-M researchers have shown. The research enables robots to determine how difficult the terrain is before calculating a successful path forward, which might include bracing on the wall with one or two hands while taking the next step forward.

The method uses machine learning to train the robot to place its hands and feet so that it maintains balance and makes progress; a divide-and-conquer approach then splits the path according to traversal difficulty. To do this, the planner needs a geometric model of the entire environment, which could be obtained in practice by a flying drone that scouts ahead of the robot. In a virtual experiment with a humanoid robot in a corridor of rubble, the team’s method outperformed previous methods in both success rate and total planning time, which matters when quick action is needed in disaster scenarios. Specifically, over 50 trials, their method reached the goal 84% of the time, compared to 26% for the basic path planner, and took just over two minutes to plan, compared to over three minutes for the basic path planner.
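The divide-and-conquer idea can be illustrated with a toy sketch (this is not the paper's algorithm; the difficulty scores and threshold below are made up): segment the route by estimated traversal difficulty, so that cheap planning effort is spent on easy stretches and expensive whole-body planning only where the terrain demands it.

```python
def split_by_difficulty(difficulty, threshold=0.5):
    """Group consecutive cells of a route into (is_hard, [cell indices])
    segments based on a per-cell difficulty estimate."""
    segments = []
    for i, d in enumerate(difficulty):
        hard = d > threshold
        if segments and segments[-1][0] == hard:
            segments[-1][1].append(i)  # extend the current segment
        else:
            segments.append((hard, [i]))  # start a new segment
    return segments

# Hypothetical per-cell difficulty estimates along a corridor.
corridor = [0.1, 0.2, 0.8, 0.9, 0.3, 0.7]
segments = split_by_difficulty(corridor)
# Easy segments could use a fast footstep planner; hard segments a
# slower whole-body planner that also considers hand contacts on walls.
```

In the actual system, the difficulty estimate comes from a learned traversability model rather than fixed scores, but the planning effort is allocated along the same lines.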

Source: University of Michigan. “Faster path planning for rubble-roving robots.” ScienceDaily, 13 August 2021.

Original paper: Lin, Y.C. and Berenson, D., 2021. Long-horizon humanoid navigation planning using traversability estimates and previous experience. Autonomous Robots, 45(6), pp. 937–956.

Melodies of an Endless Journey

Video description: The free and peaceful Mondstadt, as well as the bustling port of Liyue, mark every step of your journey. The melodies of wind and rock intertwine on a whole new stage to compose a unique chapter of your adventure.

For the first anniversary of Genshin Impact, its developer miHoYo prepared an exquisite concert that takes viewers on a ride through the game’s world, showcasing landscapes, characters and stories.