Funding: supported by the National Natural Science Foundation of China (Grant Nos. 1217211 and 12372244).
Abstract: Physics-informed neural networks (PINNs) are a deep-learning approach for solving partial differential equations (PDEs). Accurately learning the initial condition is crucial when employing PINNs to solve PDEs. However, simply adjusting loss weights or imposing hard constraints does not always improve learning of the initial condition; it can even prevent the network from converging. To enhance the accuracy of PINNs in learning the initial condition, this paper proposes a novel strategy named causally enhanced initial conditions (CEICs). The strategy embeds a new term in the loss function, built from the mismatch between the derivatives of the initial condition and the corresponding derivatives of the network output at the initial time. Furthermore, to respect causality when multiple derivative orders are fitted, a novel causality coefficient is introduced into the training. Additionally, because CEICs provide more accurate pseudo-labels in the first subdomain, they are compatible with the temporal-marching strategy. Experimental results demonstrate that CEICs outperform hard constraints and improve the overall accuracy of pre-training PINNs. For the 1D Korteweg–de Vries, reaction, and convection equations, the proposed CEIC method reduces the relative error by at least 60% compared with previous methods.
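The abstract's exact formulation of the derivative loss and the causality coefficient is not given here, but the idea can be sketched in a few lines. The following is a minimal numpy illustration, assuming (hypothetically) that the causality coefficient for derivative order k decays exponentially with the accumulated loss of the lower orders, and using finite differences in place of the automatic differentiation a real PINN would use; all names are illustrative.

```python
import numpy as np

def causal_weights(losses, eps=1.0):
    """Causality-style coefficients (assumed form): the weight for
    derivative order k decays with the accumulated loss of lower
    orders, so higher derivatives are emphasized only once the lower
    ones are already well learned."""
    cum = np.concatenate(([0.0], np.cumsum(losses[:-1])))
    return np.exp(-eps * cum)

def ceic_loss(u_net, u0, x, orders=(0, 1, 2), eps=1.0):
    """Match the network's spatial derivatives at t = 0 against the
    derivatives of the initial condition, order by order, then combine
    them with the causal weights."""
    dx = x[1] - x[0]
    pred, target = u_net(x), u0(x)
    per_order = []
    for k in orders:
        p, t = pred, target
        for _ in range(k):          # k-th finite-difference derivative
            p = np.gradient(p, dx)
            t = np.gradient(t, dx)
        per_order.append(np.mean((p - t) ** 2))
    w = causal_weights(np.array(per_order), eps)
    return float(np.sum(w * per_order))

x = np.linspace(0, 2 * np.pi, 201)
ic = lambda z: np.sin(z)
exact = lambda z: np.sin(z)            # a "perfect network" output
print(ceic_loss(exact, ic, x) < 1e-6)  # → True
```

A network matching the initial condition exactly incurs zero loss at every order; a mismatched one is penalized most heavily on the lowest orders first, which is the causal ordering the abstract describes.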
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2020YFC1807905), the National Natural Science Foundation of China (Grant Nos. 52079090 and U20A20316), and the Basic Research Program of Qinghai Province (Grant No. 2022-ZJ-704).
Abstract: Neural network methods have been widely used in many fields of scientific research as computing power has rapidly increased. Physics-informed neural networks (PINNs) have received much attention as a major breakthrough in solving partial differential equations with neural networks. In this paper, a resampling technique based on an expansion-shrinkage point (ESP) selection strategy is developed to dynamically modify the distribution of training points according to the performance of the network. The new approach accounts both for training points whose residual values change only slightly between iterations and for training points with large residuals. To make the distribution of training points more uniform, the concept of continuity is further introduced and incorporated. This method addresses the issue that the network becomes ill-conditioned, or even crashes, when the distribution of training points is altered too drastically. The effectiveness of the improved PINNs with expansion-shrinkage resampling is demonstrated through a series of numerical experiments.
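The two selection criteria named in the abstract, large current residuals and residuals that have stopped changing, can be combined into a single score. Below is a hedged numpy sketch of such a scoring rule; the actual ESP strategy and its continuity term are not specified in the abstract, so the mixing formula and the name `esp_resample` are assumptions for illustration only.

```python
import numpy as np

def esp_resample(points, residuals, prev_residuals, n_keep, alpha=0.5):
    """Hypothetical ESP-style scoring: rank candidate points by a mix of
    current residual magnitude (favoring hard regions) and residual
    stagnation between iterations (retaining points that have stopped
    improving), then keep the top-scoring points."""
    magnitude = np.abs(residuals)
    stagnation = 1.0 / (np.abs(residuals - prev_residuals) + 1e-8)
    # normalize both criteria to [0, 1] before mixing
    score = (alpha * magnitude / magnitude.max()
             + (1 - alpha) * stagnation / stagnation.max())
    idx = np.argsort(score)[-n_keep:]   # indices of the n_keep best scores
    return points[idx]

rng = np.random.default_rng(0)
pts = np.linspace(0.0, 1.0, 100)
res = rng.normal(size=100)
prev = res + rng.normal(scale=0.1, size=100)
kept = esp_resample(pts, res, prev, n_keep=20)
print(kept.shape)  # (20,)
```

Re-scoring the pool each time the residuals are re-evaluated changes the training-point distribution gradually rather than all at once, which is the failure mode (drastic redistribution) the abstract says the method avoids.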
Abstract: Traditional numerical solvers face the curse of dimensionality and a trade-off between efficiency and accuracy, while purely data-driven neural-network solvers suffer from redundant training and a lack of interpretability. To address these problems, physics-informed neural networks (PINNs) exploit the physical prior knowledge implicit in the training data and combine it with the ability of neural networks to fit complex variables, giving conventional networks the physical interpretability they otherwise lack. Building on this model, a PINN-based solver for the Burgers equation is proposed; because physics-informed constraints are imposed during training, the model can learn to predict the PDE solution over the space-time domain from a small number of training samples. Experimental results on the 1+1-dimensional Burgers equation show that, compared with classical machine-learning algorithms, the proposed method effectively captures the dynamics of the equation and simulates it accurately, and compared with the finite-difference method it greatly shortens simulation time. Comparison experiments over different network parameters show that the method achieves reasonable identification accuracy under 10% noise corruption, with the error of the identified equation coefficients within 0.001.
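The physics-informed constraint for the viscous Burgers equation is the PDE residual u_t + u u_x − ν u_xx, driven toward zero at collocation points. As a minimal sketch (not the paper's implementation), the residual can be evaluated on a space-time grid with finite differences standing in for the automatic differentiation a real PINN would use; grid sizes and the viscosity value here are illustrative.

```python
import numpy as np

def burgers_residual(u, dx, dt, nu=0.01 / np.pi):
    """PDE residual u_t + u*u_x - nu*u_xx for a field u of shape
    (nt, nx): axis 0 is time, axis 1 is space."""
    u_t = np.gradient(u, dt, axis=0)
    u_x = np.gradient(u, dx, axis=1)
    u_xx = np.gradient(u_x, dx, axis=1)
    return u_t + u * u_x - nu * u_xx

nx, nt = 101, 51
x = np.linspace(-1.0, 1.0, nx)
t = np.linspace(0.0, 1.0, nt)
dx, dt = x[1] - x[0], t[1] - t[0]

# a constant field solves Burgers exactly, so its residual vanishes
u_const = np.zeros((nt, nx))
r = burgers_residual(u_const, dx, dt)
print(np.abs(r).max())  # 0.0
```

In an actual PINN, u would be the network output and the mean squared residual over the collocation points would be added to the data-fit loss, which is how the physics constraint reduces the number of training samples the abstract mentions.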