
An Adaptive Collocation Point Algorithm for Physics-Informed Neural Networks

  • Abstract: Physics-informed neural networks (PINNs) integrate equation models into loss-minimization training, so they learn the input data distribution and the underlying physical laws simultaneously. Most PINNs cover the entire solution domain with uniformly sampled collocation points, and every point plays an equal role in training. This collocation strategy is simple to implement, but it introduces unnecessary collocation points and leaves the network with limited ability to learn some complex behavior. This paper proposes an adaptive collocation point strategy to improve the learning capability and efficiency of PINNs. First, the selection probability of each collocation point is determined from the joint distribution of the loss-function residual and its gradient, and the points are resampled after a fixed number of iterations to avoid premature convergence to a local optimum; this places part of the collocation points in regions where the loss is high or varies sharply, improves the overall point distribution, and allows the equation model to be captured accurately with fewer collocation points, which raises the learning efficiency. Second, variable weights are introduced for the collocation points so that each point contributes to the equation residual to a different degree; during iterative training the weights of points with larger loss values are raised automatically, so the PINN concentrates on the high-loss regions, that is, on learning the more complex behavior. Finally, the method is compared with the traditional PINN and several of its improved variants on four typical test cases: the Burgers equation, the Schrödinger equation, the Helmholtz equation, and the Navier-Stokes equations. Numerical results show that the proposed algorithm effectively improves solution accuracy and computational efficiency with fewer collocation points and fewer training iterations.
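
The adaptive sampling step summarized in the abstract can be illustrated with a short sketch. The code below is a minimal illustration written in PyTorch, not the authors' released implementation; residual_fn, candidates, and the blending coefficients alpha and beta are assumed names. It forms the selection probability from the normalized pointwise residual magnitude and the normalized magnitude of its gradient with respect to the coordinates, then draws a new set of collocation points from that distribution.

import torch

def select_collocation_points(candidates, residual_fn, n_select,
                              alpha=1.0, beta=1.0, eps=1e-8):
    """Draw n_select collocation points from a dense candidate pool.

    Sketch only: the selection probability mixes the normalized residual
    magnitude with the normalized magnitude of its gradient w.r.t. the
    coordinates, so points concentrate where the loss is large or varies
    quickly.
    """
    # residual_fn must be differentiable w.r.t. its input, i.e. any internal
    # autograd.grad calls need create_graph=True.
    x = candidates.clone().requires_grad_(True)   # (N, d) candidate pool
    r = residual_fn(x)                            # (N,) pointwise PDE residual
    r_mag = r.abs()

    # Gradient of the residual magnitude with respect to the coordinates.
    grad, = torch.autograd.grad(r_mag.sum(), x)
    g_mag = grad.norm(dim=1)

    with torch.no_grad():
        # Normalize both criteria and blend them into a sampling distribution.
        r_hat = r_mag / (r_mag.max() + eps)
        g_hat = g_mag / (g_mag.max() + eps)
        score = alpha * r_hat + beta * g_hat
        prob = score / score.sum()
        idx = torch.multinomial(prob, n_select, replacement=False)
    return candidates[idx]

In this reading of the abstract, the routine would be called again after a fixed number of training iterations (the resampling step), so the collocation set keeps tracking the regions where the residual is currently large.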

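The variable point weights can be sketched in the same spirit. The snippet below is one common realization of such self-adjusting weights and is an assumption about the mechanism rather than the paper's exact formulation: each collocation point carries a trainable log-weight, the network parameters take a gradient-descent step on the weighted residual loss, and the weights take a gradient-ascent step on the same loss, so the weight mass moves automatically toward points with a large residual. pinn, residual_fn, and colloc_pts are placeholder names.

import torch

def weighted_residual_step(pinn, residual_fn, colloc_pts, log_w, opt_net, opt_w):
    """One training step with adaptive per-point weights (illustrative only)."""
    r = residual_fn(pinn, colloc_pts)      # (N,) pointwise PDE residual
    w = torch.softmax(log_w, dim=0)        # (N,) positive, normalized weights
    loss = (w * r.pow(2)).sum()            # weighted residual loss

    opt_net.zero_grad()
    opt_w.zero_grad()
    loss.backward()

    opt_net.step()                         # descent on the network parameters
    if log_w.grad is not None:
        log_w.grad.neg_()                  # flip the sign: ascent on the weights
    opt_w.step()
    return loss.item()

For this sketch the weights would be created once, e.g. log_w = torch.zeros(n_points, requires_grad=True), and given their own optimizer such as opt_w = torch.optim.Adam([log_w], lr=1e-3), while opt_net optimizes the network parameters as usual.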
     
