The Benefits of Peripheral Vision for Machines

New research from MIT suggests that a certain type of computer vision model, trained to be robust to imperceptible noise added to image data, encodes visual representations similar to how humans do with peripheral vision. Photo credit: Jose-Luis Olivares, MIT

Researchers are finding similarities between the way some computer vision systems process images and the way people see out of the corner of their eyes.

Computer vision and human vision may have more in common than you think.

New research out of MIT suggests that a certain type of robust computer-vision model perceives visual representations similarly to the way humans do using peripheral vision. These models, known as adversarially robust models, are designed to overcome subtle bits of noise that have been added to image data.

The way these models learn to transform images is similar to some elements involved in human peripheral processing, the researchers found. But because machines do not have a visual periphery, little work on computer vision models has focused on peripheral processing, says senior author Arturo Deza, a postdoc in the Center for Brains, Minds, and Machines.

“It seems like peripheral vision, and the textural representations that are going on there, have been shown to be pretty useful for human vision. So, our thought was, OK, maybe there might be some uses in machines, too,” says lead author Anne Harrington, a graduate student in the Department of Electrical Engineering and Computer Science.

Researchers started with a set of images, and used three different computer vision models to synthesize representations of those images from noise: a “normal” machine-learning model, one that had been trained to be adversarially robust, and one that had been specifically designed to account for some aspects of human peripheral processing, called Texforms. Credit: Courtesy of the researchers

The results suggest that designing a machine-learning model to include some form of peripheral processing could enable the model to automatically learn visual representations that are robust to some subtle manipulations in image data. This work could also help shed some light on the goals of peripheral processing in humans, which are still not well-understood, Deza adds.

The research will be presented at the International Conference on Learning Representations.

Double vision

Humans and computer vision systems both have what is known as foveal vision, which is used for scrutinizing highly detailed objects. Humans also possess peripheral vision, which is used to organize a broad, spatial scene. Typical computer vision approaches attempt to model foveal vision — which is how a machine recognizes objects — and tend to ignore peripheral vision, Deza says.

But foveal computer vision systems are vulnerable to adversarial noise, which is added to image data by an attacker. In an adversarial attack, a malicious agent subtly modifies images so each pixel has been changed very slightly — a human wouldn’t notice the difference, but the noise is enough to fool a machine. For example, an image might look like a car to a human, but if it has been affected by adversarial noise, a computer vision model may confidently misclassify it as, say, a cake, which could have serious implications in an autonomous vehicle.
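To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one common way such imperceptible perturbations are crafted. The model, image, and label here are placeholders, and the attack used in any particular study may differ.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=2/255):
    """Nudge every pixel by at most epsilon in the direction that
    increases the classifier's loss; the change is invisible to a
    human but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```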

Researchers designed a series of psychophysical human experiments where participants were asked to distinguish between original images and the representations synthesized by each model. This photo shows an example of the experiment’s setup. Credit: Courtesy of the researchers

To overcome this vulnerability, researchers conduct what is known as adversarial training, where they create images that have been manipulated with adversarial noise, feed them to the neural network, and then correct its mistakes by relabeling the data and retraining the model.
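As a rough illustration of that loop, the sketch below (reusing the hypothetical fgsm_perturb helper above) trains the network on freshly perturbed batches each step. Published robust models typically rely on stronger, multi-step attacks, so treat this as a schematic rather than the authors' recipe.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=2/255):
    """One epoch of adversarial training: attack each batch with the
    current model, then update the model on the perturbed images so it
    learns to classify them correctly."""
    model.train()
    for images, labels in loader:
        perturbed = fgsm_perturb(model, images, labels, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(perturbed), labels)
        loss.backward()
        optimizer.step()
```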

“Just doing that additional relabeling and training process seems to give a lot of perceptual alignment with human processing,” Deza says.

He and Harrington wondered if these adversarially trained networks are robust because they encode object representations that are similar to human peripheral vision. So, they designed a series of psychophysical human experiments to test their hypothesis.

Screen time

They started with a set of images and used three different computer vision models to synthesize representations of those images from noise: a “normal” machine-learning model, one that had been trained to be adversarially robust, and one that had been specifically designed to account for some aspects of human peripheral processing, called Texforms.
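A common way to perform this kind of synthesis is to optimize a noise image until a chosen model "sees" it the same way it sees the reference image. The sketch below shows that general recipe with an assumed feature_extractor callable; it is not the exact objective used in the paper.

```python
import torch
import torch.nn.functional as F

def synthesize_from_noise(feature_extractor, reference, steps=500, lr=0.05):
    """Optimize a random image so its features match those of `reference`
    under `feature_extractor`; the result can look scrambled to a human
    while being equivalent from the model's point of view."""
    with torch.no_grad():
        target = feature_extractor(reference)
    noise = torch.rand_like(reference, requires_grad=True)
    optimizer = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.mse_loss(feature_extractor(noise), target)
        loss.backward()
        optimizer.step()
        noise.data.clamp_(0, 1)  # keep pixel values in a valid range
    return noise.detach()
```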

The team used these generated images in a series of experiments where participants were asked to distinguish between the original images and the representations synthesized by each model. Some experiments also had humans differentiate between different pairs of randomly synthesized images from the same models.

Participants kept their eyes focused on the center of a screen while images were flashed on the far sides of the screen, at different locations in their periphery. In one experiment, participants had to identify the oddball image in a series of images that were flashed for only milliseconds at a time, while in the other they had to match an image presented at their fovea to one of two candidate template images placed in their periphery.

In the experiments, participants kept their eyes focused on the center of a screen while images were flashed on the far sides of the screen, at different locations in their periphery, like these animated gifs. In one experiment, participants had to identify the oddball image in a series of images that were flashed for only milliseconds at a time. Credit: Courtesy of the researchers

In this experiment, researchers had humans match the center template with one of the two peripheral ones, without moving their eyes from the center of the screen. Credit: Courtesy of the researchers

When the synthesized images were shown in the far periphery, the participants were largely unable to tell the difference between the original image and the synthesized representation for the adversarially robust model or the Texform model. This was not the case for the standard machine-learning model.

However, what is perhaps the most striking result is that the pattern of mistakes that humans make (as a function of where the stimuli land in the periphery) is heavily aligned across all experimental conditions that use the stimuli derived from the Texform model and the adversarially robust model. These results suggest that adversarially robust models do capture some aspects of human peripheral processing, Deza explains.

The researchers also ran machine-learning experiments and computed image-quality assessment metrics to study the similarity between images synthesized by each model. They found that those generated by the adversarially robust model and the Texforms model were the most similar, which suggests that these models compute similar image transformations.
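The article does not name the specific metrics, but a standard image-quality measure such as structural similarity (SSIM) illustrates the kind of pairwise comparison involved; this snippet is only an example of such a metric, not the paper's analysis.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def pairwise_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Structural similarity between two images with values in [0, 1];
    higher values mean the two synthesis pipelines applied more similar
    transformations to the same source image."""
    return float(ssim(img_a, img_b, channel_axis=-1, data_range=1.0))
```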

“We are shedding light on this alignment of how humans and machines make the same kinds of mistakes, and why,” Deza says. “Why does adversarial robustness happen? Is there a biological equivalent for adversarial robustness in machines that we haven’t uncovered yet in the brain?”

Deza is hoping these results inspire additional work in this area and encourage computer vision researchers to consider building more biologically inspired models.

These results could be used to design a computer vision system with some sort of emulated visual periphery that could make it automatically robust to adversarial noise. The work could also inform the development of machines that are able to create more accurate visual representations by using some aspects of human peripheral processing.

“We could even learn about human vision by trying to get certain properties out of artificial neural networks,” Harrington adds.

Previous work had shown how to isolate “robust” parts of images, where training models on these images caused them to be less susceptible to adversarial failures. These robust images look like scrambled versions of the real images, explains Thomas Wallis, a professor for perception at the Institute of Psychology and Centre for Cognitive Science at the Technical University of Darmstadt.

“Why do these robust images look the way that they do? Harrington and Deza use careful human behavioral experiments to show that people’s ability to see the difference between these images and original photographs in the periphery is qualitatively similar to that of images generated from biologically inspired models of peripheral information processing in humans,” says Wallis, who was not involved with this research. “Harrington and Deza propose that the same mechanism of learning to ignore some visual input changes in the periphery may be why robust images look the way they do, and why training on robust images reduces adversarial susceptibility. This intriguing hypothesis is worth further investigation, and could represent another example of a synergy between research in biological and machine intelligence.”

Reference: “Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks” by Anne Harrington and Arturo Deza, 28 September 2021, ICLR 2022 Conference.
OpenReview.net

This work was supported, in part, by the MIT Center for Brains, Minds, and Machines and Lockheed Martin Corporation.


MIT Uses AI To Discover Hidden Magnetic Properties in Multi-Layered Electronic Material

MIT researchers discovered hidden magnetic properties in multilayer electronic material by analyzing polarized neutrons with neural networks. Credit: Ella Maru Studio

An MIT team incorporates AI to facilitate the detection of an intriguing materials phenomenon that can lead to electronics without energy dissipation.

Superconductors have long been considered the principal approach for realizing electronics without resistivity. In the past decade, a new family of quantum materials, “topological materials,” has offered an alternative but promising means for achieving electronics without energy dissipation (or loss). Compared to superconductors, topological materials provide a few advantages, such as robustness against disturbances. To attain the dissipationless electronic states, one key route is the so-called “magnetic proximity effect,” which occurs when magnetism penetrates slightly into the surface of a topological material. However, observing the proximity effect has been challenging.

The problem, according to Zhantao Chen, a mechanical engineering PhD student at MIT, “is that the signal people are looking for that would indicate the presence of this effect is usually too weak to detect conclusively with traditional methods.” That’s why a team of scientists — based at MIT, Pennsylvania State University, and the National Institute of Standards and Technology — decided to try a nontraditional approach, which ended up yielding surprisingly good results.

What lies beneath, and between, the layers

For the past few years, researchers have relied on a technique known as polarized neutron reflectometry (PNR) to probe the depth-dependent magnetic structure of multilayered materials, as well as to look for phenomena such as the magnetic proximity effect. In PNR, two polarized neutron beams with opposing spins are reflected from the sample and collected on a detector. “If the neutron encounters a magnetic flux, such as that found inside a magnetic material, which has the opposite orientation, it will change its spin state, resulting in different signals measured from the spin up and spin down neutron beams,” explains Nina Andrejevic, PhD in materials science and engineering. As a result, the proximity effect can be detected if a thin layer of a normally nonmagnetic material — placed immediately adjacent to a magnetic material — is shown to become magnetized.
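One conventional quantity extracted from such measurements is the spin asymmetry, the normalized difference between the two reflectivity curves; a minimal sketch is below. The published analysis pipeline is, of course, more involved than this.

```python
import numpy as np

def spin_asymmetry(r_up: np.ndarray, r_down: np.ndarray) -> np.ndarray:
    """Normalized difference between spin-up and spin-down reflectivities
    measured at the same wavevectors; a nonzero asymmetry signals a net
    magnetic depth profile, and a proximity-magnetized layer perturbs it
    only subtly."""
    return (r_up - r_down) / (r_up + r_down)
```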

But the effect is very subtle, extending only about 1 nanometer in depth, and ambiguities and challenges can arise when it comes to interpreting experimental results. “By bringing machine learning into our methodology, we hoped to get a clearer picture of what’s going on,” notes Mingda Li, the Norman C. Rasmussen Career Development Professor in the Department of Nuclear Science and Engineering who headed the research team. That hope was indeed borne out, and the team’s findings were published on March 17, 2022, in a paper in Applied Physics Reviews.

The researchers investigated a topological insulator — a material that is electrically insulating in its interior but can conduct electric current on the surface. They chose to focus on a layered materials system comprising the topological insulator bismuth selenide (Bi2Se3) interfaced with the ferromagnetic insulator europium sulfide (EuS). Bi2Se3 is, by itself, a nonmagnetic material, so the magnetic EuS layer dominates the difference between the signals measured by the two polarized neutron beams. However, with the help of machine learning, the researchers were able to identify and quantify another contribution to the PNR signal — the magnetization induced in the Bi2Se3 at the interface with the adjoining EuS layer. “Machine learning methods are highly effective in eliciting underlying patterns from complex data, making it possible to discern subtle effects like that of proximity magnetism in the PNR measurement,” Andrejevic says.

When the PNR signal is first fed to the machine learning model, it is highly complex. The model is able to simplify this signal so that the proximity effect is amplified and thus becomes more conspicuous. Using this pared-down representation of the PNR signal, the model can then quantify the induced magnetization — indicating whether or not the magnetic proximity effect is observed — along with other attributes of the materials system, such as the thickness, density, and roughness of the constituent layers.
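A schematic of that two-stage idea, compressing the curve to a low-dimensional representation and then regressing physical parameters from it, might look like the following. The layer sizes and the list of predicted parameters are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class PNRParameterEstimator(nn.Module):
    """Toy model: encode a measured reflectivity curve into a small
    latent vector, then predict physical quantities such as induced
    magnetization, layer thickness, density, and roughness."""

    def __init__(self, n_points=256, n_params=4, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(      # pare the raw curve down
            nn.Linear(n_points, 64), nn.ReLU(),
            nn.Linear(64, latent), nn.ReLU(),
        )
        self.head = nn.Linear(latent, n_params)  # regress the parameters

    def forward(self, reflectivity: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(reflectivity))
```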

Better seeing through AI

“We’ve reduced the ambiguity that arose in previous analyses, thanks to the doubling in the resolution achieved using the machine learning-assisted approach,” say Leon Fan and Henry Heiberger, undergraduate researchers participating in this study. What that means is that they could discern materials properties at length scales of 0.5 nm, half of the typical spatial extent of the proximity effect. That’s analogous to looking at writing on a blackboard from 20 feet away and not being able to make out any of the words. But if you could cut that distance in half, you might be able to read the whole thing.

The data analysis process can also be sped up significantly through a reliance on machine learning. “In the old days, you could spend weeks fiddling with all the parameters until you can get the simulated curve to fit the experimental curve,” Li says. “It can take many tries because the same [PNR] signal could correspond to different combinations of parameters.”

“The neural network gives you an instant answer,” Chen adds. “There is no more guesswork. No more trial and error.” For this reason, the framework has been installed in some reflectometry beamlines to support the analysis of broader material types.

Some outside observers have praised the new study — it’s the first to assess the effectiveness of machine learning in identifying the proximity effect and among the first machine-learning-based packages to be used for PNR data analysis. “The work of Andrejevic et al. provides an alternative way to capture fine detail in PNR data and shows how higher resolution can be consistently achieved,” said Kang L. Wang, Distinguished Professor and Raytheon Professor of Electrical Engineering at the University of California, Los Angeles.

“This is really exciting progress,” commented Chris Leighton, Distinguished McKnight University Professor at the University of Minnesota. “Their new approach to machine learning could not only significantly speed up this process, but also extract even more material information from the available data.”

The MIT-led group is already considering expanding the scope of their investigation. “The magnetic proximity effect is not the only weak effect that interests us,” says Andrejevic. “The machine learning framework we developed can be easily applied to different types of problems, such as the superconducting proximity effect, which is of great interest in quantum computing research.”

Reference: “Elucidating proximity magnetism through polarized neutron reflectometry and machine learning” by Nina Andrejevic, Zhantao Chen, Thanh Nguyen, Leon Fan, Henry Heiberger, Ling-Jie Zhou, Yi-Fan Zhao, Cui-Zu Chang, Alexander Grutter and Mingda Li, 17 March 2022, Applied Physics Reviews.
DOI: 10.1063/5.0078814

This research was funded by the U.S. Department of Energy Office of Science’s Neutron Scattering Program.


Could We Make Cars Out of Petroleum Residue?

Carbon fiber illustration

A new method of making carbon fiber could turn refinery by-products into high-quality, ultra-lightweight structural materials for cars, planes and spacecraft.

As the world struggles to improve the efficiency of cars and other vehicles to reduce greenhouse gas emissions and increase the range of electric vehicles, the search is on for ever lighter materials that are strong enough to be used in car bodies.

Lightweight carbon fiber materials, similar to the material used in some tennis racquets and bicycles, combine exceptional strength with low weight, but they are more expensive to manufacture than comparable steel or aluminum structural members. Now, researchers at MIT and elsewhere have come up with a way of making these lightweight fibers out of an ultracheap feedstock: the heavy, gloppy waste material left over from the refining of petroleum, material that refineries today supply for low-value applications such as asphalt, or eventually treat as waste.

Not only is the new carbon fiber cheap to make, but it offers advantages over traditional carbon fiber materials because it can have compressive strength, meaning it could be used for load-bearing applications. The new process is described on March 18, 2022, in the journal Science Advances, in a paper by graduate student Asmita Jana, research scientist Nicola Ferralis, professor Jeffrey Grossman, and five others at MIT, Western Research Institute in Wyoming, and Oak Ridge National Laboratory in Tennessee.

A circle of carbon fibers which have a diameter of about 10 micrometers. Credit: Nicola Ferralis

The research began about four years ago in response to a request from the Department of Energy, which was seeking ways to make cars more efficient and reduce fuel consumption by lowering their overall weight. “If you look at the same model car now, compared to 30 years ago, it’s significantly heavier,” Ferralis says. “The weight of cars has increased more than 15 percent within the same category.”

A heavier car requires a bigger engine, stronger brakes, and so on, so reducing the weight of the body or other components has a ripple effect that produces additional weight savings. The DOE is pushing for the development of lightweight structural materials that match the safety of today’s conventional steel panels but also can be made cheaply enough to potentially replace steel altogether in standard vehicles.

Composites made from carbon fibers are not a new idea, but so far in the automotive world they have only been used in a few very expensive models. The new research aims to turn that around by providing a low-cost starting material and relatively simple processing methods.

A human hair and carbon fiber, with a clear ruler on the bottom half of the image. The human hair, pictured in a vertical orientation, is thicker (about 60 micrometers) than the carbon fiber behind it. Credit: Nicola Ferralis

Carbon fibers of the quality needed for automotive use cost at least $10 to $12 per pound currently, Ferralis says, and “can be way more,” up to hundreds of dollars a pound for specialized applications like spacecraft components. That compares to about 75 cents a pound for steel, or $2 for aluminum, though these prices fluctuate widely, and the materials often rely on foreign sources. At those prices, he says, making a pickup truck out of carbon fiber instead of steel would roughly double the cost.

These fibers are typically made from polymers (such as polyacrylonitrile) derived from petroleum, but they require a costly intermediate step of polymerizing the carbon compounds. The cost of the polymer can account for more than 60 percent of the total cost of the final fiber, Ferralis says. Instead of using a refined and processed petroleum product to start with, the team’s new approach uses what is essentially the dregs left after the refining process, a material known as petroleum pitch. “It’s what we sometimes call the bottom of the barrel,” Ferralis says.

“Pitch is incredibly messy,” he says. It’s a hodgepodge of mixed heavy hydrocarbons, and “that’s actually what makes it beautiful in a way, because there’s so much chemistry that can be exploited. That makes it a fascinating material to start with.”

It’s useless for combustion; although it can burn, it’s too dirty a fuel to be practical, and this is especially true with tightening environmental regulations. “There’s so much of it,” he says, “the inherent value of these products is very low, so then it is often landfilled.” An alternative source of pitch, which the team also tested, is coal pitch, a similar material that is a byproduct of coking coal, used, for example, in steel production. That process yields about 80 percent coke and 20 percent coal pitch, “which is basically a waste,” he says.

Working in collaboration with researchers at Oak Ridge National Laboratory, who had the expertise in manufacturing carbon fibers under a variety of conditions, from lab scale all the way up to pilot-plant scale, the team set about finding ways to predict the performance in order to guide the choice of conditions for those fabrication experiments.

“The process that you need to actually make a carbon fiber [from pitch] is actually extremely small, both in terms of energy requirements and in terms of the actual processing that you need to do,” says Ferralis.

Jana explains that pitch “consists of this heterogeneous group of molecules where you would expect the properties to change dramatically as you change shape or size,” whereas an industrial material must have very consistent properties.

By carefully modeling how bonds form and crosslink between individual molecules, Jana was able to develop a way to predict how a given set of processing conditions would affect the resulting fiber properties. “We were able to reproduce the results with amazing accuracy,” she says, “to the point where companies could take those graphs and be able to predict” characteristics such as density and elastic modulus of the fibers.

The work produced results showing that, by adjusting the starting conditions, carbon fibers could be made that were not only strong in tension, as most such fibers are, but also strong in compression, meaning they could potentially be used in load-bearing applications. This opens up entirely new possibilities for the usefulness of these materials, they say.

DOE’s call was for projects to bring the cost of lightweight materials down below $5 a pound, but the MIT team estimates that their method can do better than that, reaching something like $3 a pound, though they haven’t yet done a detailed economic analysis.

“The new route we’re developing is not just a cost effect,” Ferralis says. “It might open up new applications, and it doesn’t have to be vehicles.” Part of the complication of making conventional fiber composites is that the fibers have to be made into a cloth and laid out in precise, detailed patterns. The reason for that, he says, “is to compensate for the lack of compressive strength.” It’s a matter of engineering to overcome the deficiencies of the material, he says, but with the new process all that extra complexity would not be needed.

Reference: “Atoms to fibers: Identifying novel processing methods in the synthesis of pitch-based carbon fibers” by Asmita Jana, Taishan Zhu, Yanming Wang, Jeramie J. Adams, Logan T. Kearney, Amit K. Naskar, Jeffrey C. Grossman and Nicola Ferralis, 18 March 2022, Science Advances.
DOI: 10.1126/sciadv.abn1905

The research team included Taishan Zhu and Yanming Wang at MIT, Jeramie Adams at the Western Research Institute, and Logan Kearney and Amit Naskar at Oak Ridge National Laboratory. The work was supported by the U.S. Department of Energy.


Tiny Magnets Could Hold the Secret to Miniaturizable Quantum Computers

On-chip quantum circuit. Credit: Ellen Weiss/Argonne National Laboratory

Magnetic interactions could point to quantum devices.

From MRI machines to computer hard drive storage, magnetism has played a role in crucial discoveries reshaping our society. In the new area of quantum computing, magnetic interactions could play a role in relaying quantum information.

In new research from the U.S. Department of Energy’s (DOE) Argonne National Laboratory, scientists have achieved efficient quantum coupling between two distant magnetic devices, which can host a certain type of magnetic excitation called a magnon. These excitations are quantized spin waves, collective oscillations of the spins in a magnetic material. Coupling allows magnons to exchange energy and information. This kind of coupling may be useful for creating new quantum information technology devices.

“Remote coupling of magnons is the first step, or almost a prerequisite, for doing quantum work with magnetic systems,” said Argonne senior scientist Valentine Novosad, an author of the study. “We show the ability for these magnons to communicate instantly with each other at a distance.”

This instant communication does not require sending a message between magnons limited by the speed of light. It is analogous to what physicists call quantum entanglement.

Schematic of the remote magnon-magnon coupling circuit. Two single-crystal YIG spheres are embedded in the NbN coplanar superconducting resonator circuit, where microwave photons mediate the coherent magnon-magnon interaction. Credit: Image by Yi Li/Argonne National Laboratory

Following on from a 2019 study, the researchers sought to create a system that would allow magnetic excitations to talk to one another at a distance in a superconducting circuit. This would allow the magnons to potentially form the basis of a type of quantum computer. For the basic underpinnings of a viable quantum computer, researchers need the particles to be coupled and stay coupled for a long time.

To achieve a strong coupling effect, the researchers built a superconducting circuit with two small yttrium iron garnet (YIG) magnetic spheres embedded in it. This material, which supports magnonic excitations, ensures efficient and low-loss coupling for the magnetic spheres.

The two spheres are both magnetically coupled to a shared superconducting resonator in the circuit, which acts like a telephone line to create strong coupling between the two spheres even when they are almost a centimeter apart, about 30 times their diameter.

“This is a significant achievement,” said Argonne materials scientist Yi Li, lead author of the study. “Similar effects can also be observed between magnons and superconducting resonators, but this time we did it between two magnon resonators without direct interaction. The coupling comes from indirect interaction between the two spheres and the shared superconducting resonator.”
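The physics Li describes can be sketched with a textbook three-mode model: two magnon modes that each couple only to a shared resonator still hybridize with one another, with an effective splitting of roughly 2 g1 g2 / Δ when the resonator is detuned by Δ. The numbers below are illustrative, not the experimental values.

```python
import numpy as np

def hybrid_mode_frequencies(f_m1, f_m2, f_res, g1, g2):
    """Eigenfrequencies (in GHz) of two magnon modes that interact only
    through a shared microwave resonator; the off-diagonal terms are the
    magnon-resonator couplings, and there is no direct magnon-magnon
    term."""
    h = np.array([[f_m1, 0.0,   g1],
                  [0.0,  f_m2,  g2],
                  [g1,   g2,    f_res]])
    return np.sort(np.linalg.eigvalsh(h))

# Two YIG spheres at 5.00 GHz, resonator detuned to 5.50 GHz,
# each sphere coupled to the resonator at 0.05 GHz:
print(hybrid_mode_frequencies(5.00, 5.00, 5.50, 0.05, 0.05))
# The two lowest (magnon-like) modes split by roughly
# 2 * 0.05 * 0.05 / 0.5 = 0.01 GHz, despite no direct coupling.
```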

One additional improvement over the 2019 study involved the longer coherence of the magnons in the magnetic resonator. “If you speak in a cave, you may hear an echo,” said Novosad. “The longer that echo lasts, the longer the coherence.”

“Before, we definitely saw a relationship between magnons and a superconducting resonator, but in this study their coherence times are much longer because of the use of the spheres, which is why we can see evidence of separated magnons talking to each other,” Li added.

According to Li, because the magnetic spins are highly concentrated in the device, the study could point to miniaturizable quantum devices. “It’s possible that tiny magnets could hold the secret to new quantum computers,” he said.

The magnonic devices were fabricated at Argonne’s Center for Nanoscale Materials, a DOE Office of Science user facility.

A paper based on the study, “Coherent coupling of two remote magnonic resonators mediated by superconducting circuits,” was published in the January 24 issue of Physical Review Letters.

Reference: “Coherent Coupling of Two Remote Magnonic Resonators Mediated by Superconducting Circuits” by Yi Li, Volodymyr G. Yefremenko, Marharyta Lisovenko, Cody Trevillian, Tomas Polakovic, Thomas W. Cecil, Peter S. Barry, John Pearson, Ralu Divan, Vasyl Tyberkevych, Clarence L. Chang, Ulrich Welp, Wai-Kwong Kwok and Valentine Novosad, 24 January 2022, Physical Review Letters.
DOI: 10.1103/PhysRevLett.128.047701

Other authors of the study include Argonne’s Volodymyr Yefremenko, Marharyta Lisovenko, Tomas Polakovic, Thomas Cecil, Pete Barry, John Pearson, Ralu Divan, Clarence Chang, Ulrich Welp and Wai-Kwong Kwok. Cody Trevillian and Vasyl Tyberkevych of Oakland University in Michigan also contributed to the research.

The research was funded by DOE’s Office of Science (Office of Basic Energy Sciences).
