Exercise Alters Brain Chemistry to Protect Aging Synapses

When elderly people stay active, their brains have more of a class of proteins that enhances the connections between neurons to maintain healthy cognition, a new study has found. This protective impact was found even in people whose brains at autopsy were riddled with toxic proteins associated with Alzheimer’s and other neurodegenerative diseases.

Credit: Harvard Medical School

Kaitlin Casaletto, a neuropsychologist, member of the UCSF Weill Institute for Neurosciences, and first author of the study, worked with William Honer, MD, a professor of psychiatry at the University of British Columbia and senior author of the study, to leverage data from the Memory and Aging Project at Rush University in Chicago. That project tracked the late-life physical activity of elderly participants, who also agreed to donate their brains when they died.

Honer and Casaletto found that elderly people who remained active had higher levels of proteins that facilitate the exchange of information between neurons. This result dovetailed with Honer’s earlier finding that people who had more of these proteins in their brains when they died were better able to maintain their cognition late in life. To their surprise, Honer said, the researchers found that the effects ranged beyond the hippocampus, the brain’s seat of memory, to encompass other brain regions associated with cognitive function.

Source (University of California San Francisco, Robin Marks, Exercise Alters Brain Chemistry to Protect Aging Synapses, 7 January 2022.)

Paper: Casaletto, K., Ramos‐Miguel, A., VandeBunte, A., Memel, M., Buchman, A., Bennett, D. and Honer, W., 2022. Late‐life physical activity relates to brain tissue synaptic integrity markers in older adults. Alzheimer’s & Dementia.

Image source: Carapeto, P.V. and Aguayo-Mazzucato, C., 2021. Effects of exercise on cellular and tissue aging. Aging (Albany NY), 13(10), p.14522.

Razer’s Project Sophia

At CES 2022 (the Consumer Electronics Show), Razer presented Project Sophia, a concept for a computer built into a desk with embedded modular components. The concept would allow users to swap parts and modules such as displays, USB hubs, or wireless chargers. It’s not the first time the company has tried to make the PC more modular: last year, it showed off a more practical design using Intel’s miniature NUC, essentially a tiny CPU and motherboard combo. And its 2014 PC concept, Project Christine, gave us a glimpse at a possible PC future that made upgrading major components very easy.

What ultimately sets Project Sophia apart from the best PC case desks is its modules. Users will have up to 13 different modules available when reconfiguring their Project Sophia machine, and these modules cover everything from touchscreen digitizers and tablets to advanced speakers and monitors. Other components and peripherals the modules work with include cameras, microphones, wireless chargers, and even cup warmers. Project Sophia is also lined with Chroma RGB lighting and comes with a built-in OLED display available in either a 65-inch or 77-inch size.

Source 1 (Time, Patrick Lucas Austin, The 10 Best Gadgets of CES 2022, 7 January 2022)

Source 2 (Windows Central, Brendan Lowry, Razer’s ‘Project Sophia’ is a must-see concept desk for gaming in the future, 5 January 2022)

Microscopic camera the size of a grain of salt

Despite being the size of a grain of salt, a new microscopic camera can capture crisp, full-colour images on par with normal lenses that are 500,000 times larger. The ultra-compact optical device was developed by a team of researchers from Princeton University and the University of Washington.

The tiny camera relies on a special ‘metasurface’ studded with 1.6 million cylindrical posts — each the size of a single HIV virus — which can modulate the behaviour of light. Each of the posts on the 0.5-millimetre-wide surface has a unique shape that allows it to operate like an antenna. Machine-learning-based signal processing algorithms then interpret the posts’ interactions with light, transforming them into an image. The photographs the tiny device takes offer the highest quality and the widest field of view of any full-colour metasurface camera developed to date.

According to the researchers, the camera could be used in small-scale robots, where size and weight constraints make traditional cameras difficult to implement. The optical metasurface could also improve minimally invasive endoscopic devices, allowing doctors to see better inside patients in order to diagnose and treat diseases. Felix Heide, an author of the study, also suggests the camera could be used to turn surfaces into sensors with ultra-high resolution. ‘You wouldn’t need three cameras on the back of your phone anymore, but the whole back of your phone would become one giant camera,’ he explained.

Source (DailyMail, Ian Randall, Say cheese! Microscopic camera the size of a grain of SALT is developed that can produce crisp, full-colour images ‘on par with lenses 500,000 times larger’)

James Webb Space Telescope

NASA’s James Webb Space Telescope launched at 7:20 a.m. EST Saturday on an Ariane 5 rocket from Europe’s Spaceport in French Guiana, South America. A joint effort with ESA (European Space Agency) and the Canadian Space Agency, the Webb observatory is NASA’s revolutionary flagship mission to seek the light from the first galaxies in the early universe and to explore our own solar system, as well as planets orbiting other stars, called exoplanets. 

Ground teams began receiving telemetry data from Webb about five minutes after launch. The Arianespace Ariane 5 rocket performed as expected, separating from the observatory 27 minutes into the flight. The observatory was released at an altitude of approximately 1,400 kilometers. Approximately 30 minutes after launch, Webb unfolded its solar array, and mission managers confirmed that the solar array was providing power to the observatory. After solar array deployment, mission operators will establish a communications link with the observatory via the Malindi ground station in Kenya, and ground control at the Space Telescope Science Institute in Baltimore will send the first commands to the spacecraft.

Source (NASA, NASA’s Webb Telescope Launches to See First Galaxies, Distant Worlds)

Online product displays can shape your buying behavior

One of the biggest marketing trends in the online shopping industry is personalization through curated product recommendations; however, these recommendation displays can change whether people buy a product they had been considering, according to new research. The study by Uma R. Karmarkar, assistant professor at the UC San Diego Rady School of Management and School of Global Policy and Strategy, finds that display items from the same category as the target product, such as a board game matched with other board games, enhance the chances of the target product’s purchase. In contrast, consumers are less likely to buy the target product if it is mismatched with products from different categories, for example, a board game displayed with kitchen knives.

Credit: Pexels

The study utilized eye-tracking — a sensor technology that makes it possible to know where a person is looking — to examine how different types of displays influenced visual attention. Participants in the study looked at their target product for the same amount of time when it was paired with similar items or with items from different categories; however, shoppers spent more time looking at the mismatched products, even though they were only supposed to be there “for display.”

Karmarkar talked with industry experts about product recommendation systems, which shaped her approach to these questions. Recommender algorithms can have different designs to meet a variety of retailers’ respective goals. Products can be shown in “mismatched” displays when retailers are using cross-promotion tactics based on prior customer behavior or on inventory they may want to sell more rapidly.

“This shows how outside forces shape our decisions in ways we might not recognize,” she said. “If a shopper is looking for something specific, they are likely to focus their attention, regardless of recommender displays. But when people are just ‘browsing stuff online,’ different page designs can create different patterns of attention. Store displays can change what we choose, even when they don’t change what we like.”

Source (University of California – San Diego. “Online product displays can shape your buying behavior.” ScienceDaily. ScienceDaily, 20 August 2021.)

Paper: Karmarkar, U.R., Carroll, A.L., Burke, M. and Hijikata, S., 2021. Category Congruence of Display-Only Products Influences Attention and Purchase Decisions. Frontiers in Neuroscience, p.1060.

Turbulence in interstellar gas clouds reveals multi-fractal structures

Astronomers describe the complex structure of the interstellar medium using a new mathematical method. The dissipation of interstellar turbulence in gas clouds before star formation takes place on cosmically small scales.

Credit: University of Cologne

Turbulence in interstellar dust clouds must dissipate before a star can form through gravity. Until now it was thought that their structure could be described as fractal, but the researchers have found that this description is not sufficient. The filamentary structure of a flux density map can be broken into smaller regions of interest, each described by multifractal structures that give rise to different types of dynamics inside a single cloud. The results show that the dynamics of star-forming clouds are localized, exhibiting enhanced energy dissipation arising from within the cloud rather than from spatial intermittency alone.
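To see what the single fractal dimension that proved insufficient here actually measures, a minimal box-counting sketch is shown below. This is only an illustration of the basic concept, not the multifractal-microcanonical analysis used in the paper; the grid sizes and test masks are arbitrary choices.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a 2D boolean mask."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # Count boxes of side s that contain at least one occupied pixel.
        c = sum(
            1
            for i in range(0, n, s)
            for j in range(0, n, s)
            if mask[i:i + s, j:j + s].any()
        )
        counts.append(c)
    # The dimension is the slope of log N(s) versus log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# A filled square covers the plane (dimension ~2); a diagonal line gives ~1.
print(box_counting_dimension(np.ones((64, 64), dtype=bool)))
print(box_counting_dimension(np.eye(64, dtype=bool)))
```

A multifractal description generalizes this: instead of a single global slope, different regions (or moments) of the map yield a whole spectrum of scaling exponents, which is what allowed the authors to distinguish dynamics within a single cloud.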

Source (University of Cologne. “Turbulence in interstellar gas clouds reveals multi-fractal structures.” ScienceDaily. ScienceDaily, 1 June 2021.)

Paper: Yahia, H., Schneider, N., Bontemps, S., Bonne, L., Attuel, G., Dib, S., Ossenkopf-Okada, V., Turiel, A., Zebadua, A., Elia, D. and Maji, S.K., 2021. Description of turbulent dynamics in the interstellar medium: multifractal-microcanonical analysis-I. Application to Herschel observations of the Musca filament. Astronomy & Astrophysics, 649, p.A33. DOI: 10.1051/0004-6361/202039874

MaxDIA: Taking proteomics to the next level

A new software package improves data-independent acquisition proteomics by providing a computational workflow that permits highly sensitive and accurate data analysis. Proteins are essential for our cells to function, yet many questions about their synthesis, abundance, functions, and defects still remain unanswered. High-throughput techniques can help improve our understanding of these molecules. For analysis by liquid chromatography followed by mass spectrometry (MS), proteins are broken down into smaller peptides, in a process referred to as “shotgun proteomics.” The mass-to-charge ratio of these peptides is subsequently determined with a mass spectrometer, resulting in MS spectra. From these spectra, information about the identity of the analyzed proteins can be reconstructed. However, the enormous amount and complexity of the data make analysis and interpretation challenging.
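The mass-to-charge ratio mentioned above follows a simple relation: a peptide ion carrying z extra protons appears at m/z = (M + z·m_proton) / z. A minimal sketch (the 1000 Da peptide is a made-up example; the proton mass is the standard value):

```python
# Mass of a proton in daltons (standard value used in proteomics).
PROTON_MASS = 1.007276

def mz(peptide_mass, charge):
    """m/z observed for a peptide ion carrying `charge` extra protons."""
    return (peptide_mass + charge * PROTON_MASS) / charge

# A hypothetical peptide of monoisotopic mass 1000 Da appears at
# different m/z values depending on its charge state:
for z in (1, 2, 3):
    print(f"z={z}: m/z = {mz(1000.0, z):.4f}")
```

This is why a single peptide can produce several peaks in a spectrum, one per charge state, which the analysis software must group back together.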

Protein ratio distributions in the four species. Credit: Max-Planck-Gesellschaft

Two main methods are used in shotgun proteomics: data-dependent acquisition (DDA) and data-independent acquisition (DIA). In DDA, the most abundant peptides of a sample are preselected for fragmentation and measurement. This makes it possible to reconstruct the sequences of these few preselected peptides, making analysis simpler and faster. However, the method introduces a bias towards highly abundant peptides. DIA, in contrast, is more robust and sensitive: all peptides from a certain mass range are fragmented and measured at once, without preselection by abundance.

Jürgen Cox and his team have now developed software that provides a complete computational workflow for DIA data. For the first time, it allows the same algorithms to be applied to DDA and DIA data alike, so studies based on either method will now become more easily comparable. MaxDIA analyzes proteomics data with and without spectral libraries. Using machine learning, the software predicts peptide fragmentation and spectral intensities, creating precise MS spectral libraries in silico. In this way, MaxDIA includes a library-free discovery mode with reliable control of false positive protein identifications.

Source (Max-Planck-Gesellschaft. “MaxDIA: Taking proteomics to the next level.” ScienceDaily. ScienceDaily, 12 July 2021.)

Paper: Sinitcyn, P., Hamzeiy, H., Salinas Soto, F., Itzhak, D., McCarthy, F., Wichmann, C., Steger, M., Ohmayer, U., Distler, U., Kaspar-Schoenefeld, S. and Prianichnikov, N., 2021. MaxDIA enables library-based and library-free data-independent acquisition proteomics. Nature Biotechnology, pp.1-11.

A personalized exosuit for real-world walking

Researchers have developed a new approach in which robotic exosuit assistance can be calibrated to an individual and adapt to a variety of real-world walking tasks in a matter of seconds. The bioinspired system uses ultrasound measurements of muscle dynamics to develop a personalized and activity-specific assistance profile for users of the exosuit.

Credit: Harvard John A. Paulson School of Engineering and Applied Sciences

People rarely walk at a constant speed and a single incline. We change speed when rushing to the next appointment, catching a crosswalk signal, or going for a casual stroll in the park. Slopes change all the time too, whether we’re going for a hike or up a ramp into a building. In addition to environmental variability, how we walk is influenced by sex, height, age, and muscle strength, and sometimes by neural or muscular disorders such as stroke or Parkinson’s disease.

The new system needs only a few seconds of walking (even a single stride may be sufficient) to capture the muscle’s profile. For each of the ultrasound-generated profiles, the researchers measure how much metabolic energy the person uses during walking with and without the exosuit. The researchers found that the muscle-based assistance provided by the exosuit significantly reduced the metabolic energy of walking across a range of walking speeds and inclines. The exosuit also applied lower assistance force to achieve the same or improved metabolic energy benefit compared with previously published studies.

Source (Harvard John A. Paulson School of Engineering and Applied Sciences. “A personalized exosuit for real-world walking: Ultrasound measurements of muscle dynamics provide customized, activity-specific assistance.” ScienceDaily. ScienceDaily, 10 November 2021.)

Original paper: R. W. Nuckols, S. Lee, K. Swaminathan, D. Orzel, R. D. Howe, C. J. Walsh. Individualization of exosuit assistance based on measured muscle dynamics during versatile walking. Science Robotics, 2021; 6 (60). DOI: 10.1126/scirobotics.abj1362

Camera Calibration in Python with OpenCV 

Previously, we touched on the topics of intrinsic and extrinsic camera parameters, as well as camera calibration to remove lens distortion. The following video presents a more comprehensive explanation that links these together, with the added benefit of showing how to implement these concepts in Python.

Video description: In this Computer Vision and OpenCV tutorial, we’ll talk about camera calibration and geometry. We will first talk about the basics of camera geometry and how it can be used for calibrating cameras. We will see different types of distortion in cameras and images. At the end of the video, I’ll show you in a Python script how to apply what we have learned and calibrate a camera in a practical computer vision setup.

Camera Calibration with MATLAB

Continuing the study of mathematics in the field of optical vision, we take a look at how camera calibration can be performed in MATLAB.

Video description: Camera calibration is the process of estimating the intrinsic, extrinsic, and lens-distortion parameters of a camera. It is an essential process to correct for any optical distortion artifacts, estimate the distance of an object from a camera, measure the size of objects in an image, and construct 3D views for augmented reality systems. Computer Vision Toolbox™ provides apps and functions to perform all essential tasks in the camera calibration workflow, including:

– Fully automatic detection and location of checkerboard calibration pattern, including corner detection with subpixel accuracy

– Estimation of all intrinsic and extrinsic parameters, including axis skew

– Calculation of radial and tangential lens distortion coefficients

– Correction of optical distortion

– Support for calibrating standard, fisheye lens, and stereo vision cameras

The Camera Calibrator and Stereo Camera Calibrator apps both allow you to interactively select the calibration images, set up the distortion coefficients, and then estimate the camera parameters, which you can export to MATLAB.
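The radial and tangential distortion coefficients estimated by these tools follow the standard Brown-Conrady lens model. As a language-neutral illustration (written in Python to match the earlier tutorial; the coefficient values are arbitrary), this is what applying those coefficients to a point in normalized image coordinates looks like:

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to a point
    in normalized image coordinates (the Brown-Conrady model)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# The image centre is unaffected; with barrel distortion (k1 < 0),
# off-centre points are pulled inward.
print(distort(0.0, 0.0, -0.2, 0.05, 0.001, 0.001))  # (0.0, 0.0)
print(distort(0.5, 0.0, -0.2, 0.0, 0.0, 0.0))       # (0.475, 0.0)
```

Distortion correction, as performed by the toolbox, is the inverse problem: solving this mapping backwards for each output pixel.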

Computer Vision Toolbox: https://bit.ly/2XEJCL4

MATLAB for Image Processing and Computer Vision: https://bit.ly/2WUlzEi


© 2019 The MathWorks, Inc. MATLAB and Simulink are registered trademarks of The MathWorks, Inc.
