3 articles found
Trajectory optimization of a reentry vehicle based on artificial emotion memory optimization (cited by: 2)
1
Authors: FU Shengnan, WANG Liang, XIA Qunli. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2021, Issue 3, pp. 668-680 (13 pages).
The trajectory optimization of an unpowered reentry vehicle via artificial emotion memory optimization (AEMO) is discussed. Firstly, reentry dynamics are established based on multiple constraints, and parameterized control variables with finite dimensions are designed. If a constraint is not satisfied, a distance measure and an adaptive penalty function are used to handle the violation. Secondly, AEMO is introduced to solve the trajectory optimization problem. Based on theories of biology and cognition, trial solutions based on emotional memory are established. Three search strategies are designed to realize the random search of trial solutions and to avoid becoming trapped in a local minimum. The states of the trial solutions are determined according to the rules of memory enhancement and forgetting. As the iterations proceed, trial solutions of poor quality are gradually forgotten; the number of trial solutions therefore decreases, and the convergence of the algorithm is accelerated. Finally, a numerical simulation is conducted, and the results demonstrate that the path and terminal constraints are satisfied and the method achieves satisfactory performance.
Keywords: trajectory optimization; adaptive penalty function; artificial emotion memory optimization (AEMO); multiple constraints
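The two mechanisms this abstract leans on, an adaptive penalty for constraint violations and a memory-forgetting rule that shrinks the pool of trial solutions, can be pictured with a minimal sketch. The Python below is only an illustration under assumed names and an assumed penalty/forgetting schedule; it is not the authors' AEMO implementation.

```python
# Minimal sketch (not the authors' code) of the two mechanisms the abstract
# describes; penalty form, weight schedule, and keep ratio are assumptions.

def violation(constraint_values, bounds):
    """Distance measure: how far each constraint value exceeds its allowed
    bound (zero when every constraint is satisfied)."""
    return sum(max(0.0, v - b) for v, b in zip(constraint_values, bounds))

def penalized_cost(cost, constraint_values, bounds, iteration,
                   w0=10.0, growth=1.05):
    """Adaptive penalty: infeasible trial solutions pay a cost proportional
    to their distance from the feasible region, with a weight that grows as
    the iterations proceed (the exact schedule is an assumption)."""
    return cost + w0 * (growth ** iteration) * violation(constraint_values, bounds)

def forget(trials, keep_ratio=0.8):
    """Forgetting rule: rank trial solutions by penalized cost and drop the
    worst ones, so the pool shrinks and the search converges faster."""
    ranked = sorted(trials, key=lambda t: t["fitness"])  # best (lowest) first
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]
```

In an outer loop one would re-evaluate `penalized_cost` for each trial solution every iteration and then call `forget`, so infeasible or poor candidates are progressively discarded while the remaining ones are refined by the search strategies.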
Optimization and Deployment of Memory-Intensive Operations in Deep Learning Model on Edge
2
Authors: Peng XU, Jianxin ZHAO, Chi Harold LIU. Computer Science (计算机科学) (CSCD, Peking University core journal), 2023, Issue 2, pp. 3-12 (10 pages).
As a large amount of data is increasingly generated from edge devices, such as smart homes, mobile phones, and wearable devices, it becomes crucial for many applications to deploy machine learning models across edge devices. The execution speed of the deployed model is a key element of service quality. Considering a highly heterogeneous edge deployment scenario, deep learning compiling is a novel approach that aims to solve this problem: it defines models using certain DSLs and generates efficient code implementations on different hardware devices. However, two aspects have not yet been thoroughly investigated. The first is the optimization of memory-intensive operations, and the second is the heterogeneity of the deployment targets. To that end, in this work, we propose a system solution that optimizes memory-intensive operations, optimizes the subgraph distribution, and enables the compiling and deployment of DNN models on multiple targets. The evaluation results demonstrate the performance of the proposed system.
Keywords: memory optimization; deep compiler; computation optimization; model deployment; edge computing
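To make the subgraph-distribution step concrete, the sketch below greedily places the operators of a linear model graph onto heterogeneous targets using an assumed cost model, then merges consecutive operators on the same device into one subgraph. The operator names, cost numbers, and greedy policy are illustrative assumptions, not the system described in the paper.

```python
# Hypothetical per-operator run-time estimates on each device (illustrative).
OP_COST = {
    ("conv", "gpu"): 1.0, ("conv", "cpu"): 4.0,
    ("add", "gpu"): 0.3, ("add", "cpu"): 0.5,
    ("softmax", "gpu"): 0.4, ("softmax", "cpu"): 0.3,
}

def assign_subgraphs(ops, devices):
    """Greedy placement: put each operator on its cheapest device, then merge
    runs of same-device operators into subgraphs that can be compiled and
    deployed per target."""
    placement = [min(devices, key=lambda d: OP_COST[(op, d)]) for op in ops]
    subgraphs, current = [], [(ops[0], placement[0])]
    for op, dev in zip(ops[1:], placement[1:]):
        if dev == current[-1][1]:
            current.append((op, dev))
        else:
            subgraphs.append(current)
            current = [(op, dev)]
    subgraphs.append(current)
    return subgraphs

# assign_subgraphs(["conv", "add", "softmax"], ["cpu", "gpu"])
# -> [[("conv", "gpu"), ("add", "gpu")], [("softmax", "cpu")]]
```

A real deep learning compiler would drive this placement with measured or modeled costs and would also fuse memory-intensive operators within each subgraph; the sketch only shows the partitioning idea.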
Multi-core optimization for conjugate gradient benchmark on heterogeneous processors
3
Authors: 邓林 (DENG Lin), 窦勇 (DOU Yong). Journal of Central South University (SCIE, EI, CAS), 2011, Issue 2, pp. 490-498 (9 pages).
Developing parallel applications on heterogeneous processors faces the challenge of the "memory wall", due to the limited capacity of local storage, limited bandwidth, and long latency of memory access. To address this problem, a parallelization approach with six memory optimization schemes was proposed for CG, four of which target various sparse matrix-vector multiplication (SPMV) operations. Evaluated on an IBM QS20, the parallelization approach reaches speedups of up to 21 and 133 times with problem sizes A and B, respectively, compared with a single Power processor element. Finally, it is concluded that the peak memory access bandwidth of the Cell BE can be reached in SPMV, that simple computation is more efficient on heterogeneous processors, and that loop unrolling can hide local storage access latency while executing scalar operations on SIMD cores.
Keywords: multi-core processor; NAS; parallelization; CG; memory optimization
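For reference, the kernel these schemes are built around is sparse matrix-vector multiplication in compressed sparse row form. The plain Python sketch below shows only the memory access pattern that makes SPMV bandwidth-bound; the Cell BE-specific techniques discussed in the paper (local-store blocking, loop unrolling, SIMD execution) are not reproduced here.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x with A stored in CSR format. Every non-zero is read exactly
    once, so throughput is limited by how fast values, col_idx, and x can be
    streamed from memory rather than by arithmetic."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 2x2 example: A = [[4, 1], [0, 3]]
# spmv_csr([4.0, 1.0, 3.0], [0, 1, 1], [0, 2, 3], np.array([1.0, 2.0]))
# -> array([6., 6.])
```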