Pairing an established mathematical technique with AI can reduce noise sensitivity, memory use, and training time in neural network models.
Researchers at the University of Pennsylvania School of Engineering and Applied Science (Penn) have developed a new AI technique that could make one of mathematics’ toughest tasks much easier: solving inverse partial differential equations, or inverse PDEs.
Instead of relying on the usual derivative-heavy method, which bogs down when data are noisy or the equations are high-order, the team’s approach uses a smoothing-based mathematical trick to recover hidden causes from observed effects more efficiently.
Inverse PDEs matter because they let scientists work backward from the patterns they can measure to the processes they cannot directly see. That makes them useful in areas as different as genetics, climate modeling, and materials science, where researchers often need to infer the rules behind complex systems rather than simply describe what is happening in the moment.
The Penn group says the new method is not mainly about adding more computing power. Instead, it changes the math inside the model so the system does not have to repeatedly calculate derivatives through recursive automatic differentiation, a process that can become unstable and memory-intensive.
By attaching “mollifier layers” to the output side of a neural network, the researchers report that they can cut training time and memory use while also improving accuracy in tests across multiple PDE types.
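The core trick can be illustrated outside of any neural network. A mollifier is a smooth, compactly supported bump function, and integration by parts lets derivatives be applied to that smooth kernel instead of to noisy data. The sketch below is a minimal, hypothetical illustration of that principle in NumPy, not the authors' implementation: it estimates the derivative of a noisy signal by convolving with the mollifier's derivative, and compares the result to naive finite differences.

```python
import numpy as np

def bump(s, eps):
    """C-infinity mollifier kernel supported on (-eps, eps) (unnormalized)."""
    out = np.zeros_like(s)
    inside = np.abs(s) < eps
    out[inside] = np.exp(-1.0 / (1.0 - (s[inside] / eps) ** 2))
    return out

# Noisy samples of u(x) = sin(x); the goal is to recover u'(x) = cos(x).
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 2001)
dx = x[1] - x[0]
u_noisy = np.sin(x) + 0.05 * rng.normal(size=x.size)

# Build the mollifier and its derivative on a centered stencil.
eps = 0.3
half = int(eps / dx)
s = np.arange(-half, half + 1) * dx
phi = bump(s, eps)
phi /= phi.sum() * dx          # normalize to unit mass
dphi = np.gradient(phi, dx)    # the derivative lives on the smooth kernel

# Integration by parts: int u'(y) phi(x-y) dy = int u(y) phi'(x-y) dy,
# so no derivative is ever taken of the noisy data itself.
du_moll = np.convolve(u_noisy, dphi, mode="same") * dx
du_naive = np.gradient(u_noisy, dx)  # finite differences amplify noise

interior = slice(300, -300)  # ignore convolution edge effects
err_moll = np.max(np.abs(du_moll[interior] - np.cos(x[interior])))
err_naive = np.max(np.abs(du_naive[interior] - np.cos(x[interior])))
print(f"mollified error: {err_moll:.3f}, naive error: {err_naive:.3f}")
```

The mollified estimate stays close to the true derivative while the naive finite-difference estimate is swamped by amplified noise, which is, in spirit, why shifting derivatives onto smooth kernels rather than recursive automatic differentiation can stabilize inverse-PDE training.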
The work also has a concrete biological target. One of the first applications is chromatin biology, where scientists study how tightly packed DNA and its associated proteins regulate gene activity. In that setting, the new method could help estimate epigenetic reaction rates from microscopy data, giving researchers a way to model how chromatin changes during development, aging, and disease.
This AI technique could possibly move the field beyond static images and toward dynamic models of cell behavior. The same framework may eventually be useful anywhere scientists face noisy measurements and complicated equations, including fluid mechanics, weather prediction, and other physics-constrained learning problems.
For scientists, the appeal is practical as much as theoretical. A method that is faster, less memory-hungry, and more stable can make inverse PDEs more workable on real-world data, where measurements are rarely clean and computational budgets are limited.
In that sense, the Penn result is less about a single application than about giving researchers a better tool for seeing the hidden machinery inside complex systems.