Journal articles: 59,730 results found
1. DDPG-Based Intelligent Computation Offloading and Resource Allocation for LEO Satellite Edge Computing Network
Authors: Jia Min, Wu Jian, Zhang Liang, Wang Xinyu, Guo Qing. China Communications, 2025, No. 3, pp. 1-15.
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities, forming a LEO satellite edge computing system that provides computing services for ground users worldwide. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear programming (MINLP) problem. A computation offloading algorithm based on the deep deterministic policy gradient (DDPG) is proposed to obtain the user offloading decisions and uplink transmission powers, and a convex optimization algorithm based on the Lagrange multiplier method is used to obtain the optimal MEC server resource allocation. In addition, an expression for the suboptimal local CPU cycles of the users is derived by relaxation. Simulation results show that the proposed algorithm converges well and significantly reduces the system utility value, at a considerable time cost, compared with other algorithms.
Keywords: computation offloading; deep deterministic policy gradient; low earth orbit satellite; mobile edge computing; resource allocation
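The Lagrange-multiplier resource-allocation step summarized in entry 1 admits a closed form under a simple delay model. The sketch below is a minimal illustration, assuming the MEC server splits its total CPU frequency to minimize the sum of offloaded-task delays sum_k c_k / f_k subject to sum_k f_k <= F; the function name and the numbers are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def mec_cpu_allocation(cycles, f_total):
    """Closed-form KKT solution of: minimize sum_k c_k / f_k  s.t.  sum_k f_k <= f_total.

    Setting the derivative of the Lagrangian to zero gives f_k proportional to
    sqrt(c_k), hence f_k* = f_total * sqrt(c_k) / sum_j sqrt(c_j).
    """
    c = np.asarray(cycles, dtype=float)
    w = np.sqrt(c)
    return f_total * w / w.sum()

# Three offloaded tasks (required CPU cycles) sharing an assumed 10-GHz MEC server.
print(mec_cpu_allocation([2e9, 4e9, 8e9], 10e9))
```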
2. A Study for Inter-Satellite Cooperative Computation Offloading in LEO Satellite Networks
Authors: Gang Yuanshuo, Zhang Yuexia, Wu Peng, Zheng Hui, Fan Guangteng. China Communications, 2025, No. 2, pp. 12-25.
Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed that uses neighboring satellites in the LEO network to collaboratively process tasks generated by ground user terminals, effectively improving resource utilization. The optimization objective of minimizing the system's task offloading delay and energy consumption is established and decoupled into two sub-problems. For computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal allocation. For the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision iteratively. Simulation results show that the ICCO algorithm can effectively reduce delay and energy consumption.
Keywords: computation offloading; inter-satellite cooperation; LEO satellite networks
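The offloading-decision step in entry 2 is handled by a dynamic sticky binary PSO. The sketch below uses a plain sigmoid-mapped binary PSO as a simplified stand-in for that variant; the per-task local/offload cost arrays are made-up placeholders rather than the paper's delay-energy model.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_cost(x, local_cost, offload_cost):
    # Hypothetical total cost: each task runs locally (x=0) or on a neighbor satellite (x=1).
    return np.where(x == 1, offload_cost, local_cost).sum()

def binary_pso(local_cost, offload_cost, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    n_tasks = len(local_cost)
    x = rng.integers(0, 2, size=(n_particles, n_tasks))
    v = rng.normal(0.0, 1.0, size=(n_particles, n_tasks))
    pbest = x.copy()
    pbest_cost = np.array([toy_cost(p, local_cost, offload_cost) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Sigmoid mapping turns the real-valued velocity into a bit-flip probability.
        x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        cost = np.array([toy_cost(p, local_cost, offload_cost) for p in x])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

local = np.array([5.0, 1.0, 4.0, 2.0])    # assumed local execution cost per task
remote = np.array([2.0, 3.0, 1.5, 2.5])   # assumed offloading cost per task
print(binary_pso(local, remote))
```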
3. Robust Transmission Design for Federated Learning Through Over-the-Air Computation
Authors: Hamideh Zamanpour Abyaneh, Saba Asaad, Amir Masoud Rabiei. China Communications, 2025, No. 3, pp. 65-75.
Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server by exploiting the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet the mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme is used to select the maximum number of devices for which the MSE requirements can be guaranteed. An alternating optimization (AO) scheme is used to transform the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
Keywords: federated learning; imperfect CSI; optimization; over-the-air computing; robust design
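Entry 3 builds on the waveform-superposition idea behind AirComp. The snippet below is a toy, perfect-CSI illustration of one-shot model aggregation over a noisy channel; channel-inversion pre-coding with real-valued gains and a unit power budget are simplifying assumptions, whereas the paper additionally handles imperfect CSI and device selection.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 10, 5                                  # devices, model dimension
x = rng.normal(size=(K, d))                   # local model updates
h = rng.rayleigh(scale=1.0, size=K) + 0.1     # assumed real-valued channel gains
eta = h.min() ** 2                            # chosen so the weakest device stays within a unit power budget
b = np.sqrt(eta) / h                          # channel-inversion pre-coding

# Waveform superposition: the server receives the sum of all pre-coded signals in
# one channel use, plus noise, then rescales to estimate the average local model.
noise = rng.normal(scale=0.05, size=d)
y = (h[:, None] * b[:, None] * x).sum(axis=0) + noise
estimate = y / (np.sqrt(eta) * K)

true_avg = x.mean(axis=0)
print("aggregation MSE:", np.mean((estimate - true_avg) ** 2))
```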
4. Computational Analysis on the Hydrodynamics of a Semisubmersible Naval Ship
Authors: Utku Cem Karabulut, Baris Barlas. Journal of Harbin Engineering University (English Edition), 2025, No. 2, pp. 331-344.
Semisubmersible naval ships are versatile military craft that combine the advantageous features of high-speed planing craft and submarines. At the surface, these ships are designed to provide sufficient speed and maneuverability; in addition, they can perform shallow dives, offering low visual and acoustic detectability. The hydrodynamic design of a semisubmersible naval ship must therefore address both at-surface and submerged conditions. In this study, numerical analyses were performed on a semisubmersible hull form to analyze its hydrodynamic features, including resistance, powering, and maneuvering. The simulations were conducted with Star CCM+ version 2302, a commercial package that solves the URANS equations using the SST k-ω turbulence model. The flow analysis was divided into two parts: at-surface simulations and shallowly submerged simulations. The at-surface simulations cover resistance, powering, trim, and sinkage in the transition and planing regimes, with Froude numbers ranging from 0.42 to 1.69. The shallowly submerged simulations were performed at seven submergence depths, from D/LOA = 0.0635 to D/LOA = 0.635, and at two speeds with Froude numbers of 0.21 and 0.33. The behaviors of the hydrodynamic forces and pitching moment at different operating depths were comprehensively analyzed. The results provide valuable insights into the hydrodynamic performance of semisubmersible naval ships, highlighting the critical factors influencing their resistance, powering, and maneuvering capabilities in both at-surface and submerged conditions.
Keywords: semisubmersible naval ship; ship resistance; planing hull; computational fluid dynamics; URANS equations; free surface effect; high-resolution interface-capturing scheme; numerical ventilation problem
5. Secure Computation Efficiency Resource Allocation for Massive MIMO-Enabled Mobile Edge Computing Networks
Authors: Sun Gangcan, Sun Jiwei, Hao Wanming, Zhu Zhengyu, Ji Xiang, Zhou Yiqing. China Communications (SCIE, CSCD), 2024, No. 11, pp. 150-162.
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate of the mMIMO link under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit powers of the users and the base station. Because the formulated problem is difficult to solve directly, we first transform the fractional objective function into subtractive form via the Dinkelbach method. The problem is then transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solution. Finally, simulations are conducted to show that the performance of the proposed scheme is superior to that of the other schemes.
Keywords: eavesdropping; massive multiple-input multiple-output; mobile edge computing; partial offloading; secure computation efficiency
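Entry 5 relies on the Dinkelbach method to turn a fractional objective into a sequence of subtractive-form problems. Below is a generic sketch of that iteration on a toy energy-efficiency ratio; the inner maximization is done by brute force over a grid, whereas the paper solves a convex subproblem via successive convex approximation.

```python
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Generic Dinkelbach iteration for max_x f(x)/g(x) with g(x) > 0.

    Each round solves the subtractive-form subproblem max_x f(x) - lam * g(x)
    (here by grid search) and updates lam to the new ratio until it stabilizes.
    """
    lam = 0.0
    for _ in range(max_iter):
        vals = f(candidates) - lam * g(candidates)
        x_star = candidates[np.argmax(vals)]
        new_lam = f(x_star) / g(x_star)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return x_star, new_lam

# Toy example: rate log2(1 + p) divided by total power (transmit p plus 0.5 W circuit power).
p_grid = np.linspace(1e-3, 10.0, 10_000)
p_opt, ee = dinkelbach(lambda p: np.log2(1 + p), lambda p: p + 0.5, p_grid)
print(f"optimal power {p_opt:.3f} W, energy efficiency {ee:.3f}")
```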
6. From the perspective of experimental practice: High-throughput computational screening in photocatalysis
Authors: Yunxuan Zhao, Junyu Gao, Xuanang Bian, Han Tang, Tierui Zhang. Green Energy & Environment (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 1-6.
Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts, and how to design photocatalysts is therefore generating widespread interest as a way to boost the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional trial-and-error method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental-practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practice and motivating the search for better descriptors.
Keywords: photocatalysis; high-throughput computational screening; photocatalyst; theoretical simulations; experiments
7. A fast forward computational method for nuclear measurement using volumetric detection constraints
Authors: Qiong Zhang, Lin-Lv Lin. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 47-63.
Owing to the complex lithology of unconventional reservoirs, field interpreters usually need logging simulation models as a basis for interpretation. Among the various detection tools that use nuclear sources, the detector response can reflect many types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, it requires extensive random sampling, consumes considerable computational resources, and cannot provide real-time results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the influence of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity calculated by Monte Carlo and by the FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using the FFCM was in good agreement with the model obtained by interpreters, demonstrating the validity and accuracy of the proposed method.
Keywords: nuclear measurement; fast forward computation; volumetric constraints
8. Model-free prediction of chaotic dynamics with parameter-aware reservoir computing
Authors: Jianmin Guo, Yao Du, Haibo Luo, Xuan Wang, Yizhen Yu, Xingang Wang. Chinese Physics B, 2025, No. 4, pp. 143-152.
Model-free, data-driven prediction of chaotic motions is a long-standing challenge in nonlinear science. Stimulated by the recent progress in machine learning, considerable attention has been given to the inference of chaos by the technique of reservoir computing (RC). In particular, by incorporating a parameter-control channel into the standard RC, it is demonstrated that the machine is able to not only replicate the dynamics of the training states, but also infer new dynamics not included in the training set. The new machine-learning scheme, termed parameter-aware RC, opens up new avenues for data-based analysis of chaotic systems, and holds promise for predicting and controlling many real-world complex systems. Here, using typical chaotic systems as examples, we give a comprehensive introduction to this powerful machine-learning technique, including the algorithm, the implementation, the performance, and the open questions calling for further studies.
Keywords: chaos prediction; time-series analysis; bifurcation diagram; parameter-aware reservoir computing
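Entry 8 reviews parameter-aware reservoir computing, in which a parameter-control channel is appended to the reservoir input so that dynamics learned at several parameter values can be extrapolated to unseen ones. The sketch below is a minimal echo-state-network version on the logistic map; the reservoir size, sparsity, ridge factor, and the choice of the logistic map are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Echo state network with an extra parameter channel: input vector = [x_t, r].
N, ridge = 300, 1e-6
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.05)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # set spectral radius to 0.9
W_in = rng.uniform(-0.5, 0.5, size=(N, 2))

def drive(seq, r, s=None):
    """Run the reservoir over a scalar sequence with the parameter channel fixed at r."""
    s = np.zeros(N) if s is None else s
    states = []
    for x in seq:
        s = np.tanh(W @ s + W_in @ np.array([x, r]))
        states.append(s.copy())
    return np.array(states), s

def logistic(r, x0, length):
    out, x = [], x0
    for _ in range(length):
        out.append(x)
        x = r * x * (1 - x)
    return out

# Training: logistic-map series at several known parameter values.
X, Y = [], []
for r in (3.6, 3.7, 3.8, 3.9):
    seq = logistic(r, 0.4, 1200)
    states, _ = drive(seq[:-1], r)
    X.append(states[200:])                             # discard the washout
    Y.append(np.array(seq[1:])[200:])
X, Y = np.vstack(X), np.concatenate(Y)
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)   # ridge regression

# Closed-loop (model-free) prediction at an unseen parameter value.
r_new = 3.85
warm = logistic(r_new, 0.4, 300)
_, s = drive(warm[:-1], r_new)                         # warm up on true data
xp = xt = warm[-1]
for _ in range(15):
    s = np.tanh(W @ s + W_in @ np.array([xp, r_new]))
    xp = float(s @ W_out)                              # feed the prediction back
    xt = r_new * xt * (1 - xt)                         # ground truth for comparison
    print(f"pred {xp:.4f}   true {xt:.4f}")
```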
9. FedCLCC: A personalized federated learning algorithm for edge cloud collaboration based on contrastive learning and conditional computing
Authors: Kangning Yin, Xinhui Ji, Yan Wang, Zhiguo Wang. Defence Technology, 2025, No. 1, pp. 80-93.
Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years; one line of pFL exploits both the global and the local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, and still fall significantly short of centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the use of contrastive learning and conditional computing: contrastive learning measures feature representation similarity to adjust the local model, while conditional computing separates the global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
Keywords: federated learning; statistical heterogeneity; personalized model; conditional computing; contrastive learning
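Entry 9 uses contrastive learning to compare local and global feature representations. As a rough illustration only, the snippet below computes a MOON-style model-contrastive term (pull the local representation toward the global model's and push it away from the previous local model's); this is an assumed stand-in, not FedCLCC's actual loss.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """Assumed MOON-style term: -log( e^{sim(z_l, z_g)/tau} / (e^{sim(z_l, z_g)/tau} + e^{sim(z_l, z_p)/tau}) )."""
    pos = np.exp(cosine(z_local, z_global) / tau)
    neg = np.exp(cosine(z_local, z_prev) / tau)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(3)
z_g = rng.normal(size=128)                      # representation from the global model
z_p = rng.normal(size=128)                      # representation from last round's local model
z_l = 0.8 * z_g + 0.2 * rng.normal(size=128)    # current local representation, close to global
print(model_contrastive_loss(z_l, z_g, z_p))
```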
10. Streamlined photonic reservoir computer with augmented memory capabilities
Authors: Changdi Zhou, Yu Huang, Yigong Yang, Deyu Cai, Pei Zhou, Kuenyao Lau, Nianqiang Li, Xiaofeng Li. Opto-Electronic Advances, 2025, No. 1, pp. 45-57.
Photonic platforms are gradually emerging as a promising option to meet the ever-growing demand for artificial intelligence, among which photonic time-delay reservoir computing (TDRC) is widely anticipated. Although such a computing paradigm employs only a single photonic device as the nonlinear node for data processing, its performance relies heavily on the fading memory provided by the delay feedback loop (FL), which restricts the extensibility of physical implementations, especially for highly integrated chips. Here, we present a simplified photonic scheme that allows more flexible parameter configurations by leveraging a designed quasi-convolution coding (QC), which removes the dependence on the FL entirely. Unlike delay-based TDRC, encoded data in QC-based RC (QRC) enable temporal feature extraction, facilitating augmented memory capabilities. The proposed QRC can therefore handle time-related tasks or sequential data without an FL. Furthermore, the hardware can be implemented with a low-power, easily integrable vertical-cavity surface-emitting laser for high-performance parallel processing. We validate the concept through simulations and experimental comparisons of QRC and TDRC, in which the simpler-structured QRC performs better across various benchmark tasks. Our results may underscore an auspicious solution for the hardware implementation of deep neural networks.
Keywords: photonic reservoir computing; machine learning; vertical-cavity surface-emitting laser; quasi-convolution coding; augmented memory capabilities
11. Providing Robust and Low-Cost Edge Computing in Smart Grid: An Energy Harvesting Based Task Scheduling and Resource Management Framework
Authors: Xie Zhigang, Song Xin, Xu Siyang, Cao Jing. China Communications, 2025, No. 2, pp. 226-240.
Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy harvesting based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem; solutions are then derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sampling average approximation for edge computing servers to derive the optimal charging/discharging strategies, the number of energy storage units, and the renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
Keywords: edge computing; energy harvesting; energy storage unit; renewable energy; sampling average approximation; task scheduling
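Entry 11 reduces its energy-minimization problem to a classical knapsack problem. The snippet below shows the textbook 0/1 knapsack dynamic program with an assumed mapping (value = energy saved by offloading a task, weight = the budget the task consumes, capacity = the harvested-energy budget); the numbers are placeholders.

```python
def knapsack(values, weights, capacity):
    """Textbook 0/1 knapsack DP; returns the best total value and the chosen item indices."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                      # skip item i-1
            if weights[i - 1] <= c:                      # or take it if it fits
                dp[i][c] = max(dp[i][c], dp[i - 1][c - weights[i - 1]] + values[i - 1])
    # Backtrack to recover which tasks to offload.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)

print(knapsack(values=[6, 10, 12, 7], weights=[1, 2, 3, 2], capacity=5))   # -> (23, [0, 1, 3])
```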
12. A Number Theoretic Function and Its Mean Value Computation (cited 1 time)
Authors: Li Hailong, Yang Qianli. Chinese Quarterly Journal of Mathematics (CSCD), 2002, No. 3, pp. 53-56.
Let p be a prime and n any positive integer, and let α(n,p) denote the power of p in the factorization of n!. In this paper, we give an exact formula for the mean value ∑_{n<N} α(n,p).
Keywords: number theoretic function; mean value; computing formula
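For reference, the function α(n,p) in entry 12 is given by Legendre's classical formula, and summing its digit-sum form immediately yields one exact expression for the mean value (the paper's own closed form may be stated differently):

```latex
\alpha(n,p)=\sum_{i\ge 1}\left\lfloor\frac{n}{p^{i}}\right\rfloor
          =\frac{n-s_p(n)}{p-1},
\qquad s_p(n)=\text{sum of the base-}p\text{ digits of }n,
\qquad\Longrightarrow\qquad
\sum_{n<N}\alpha(n,p)=\frac{1}{p-1}\left(\frac{N(N-1)}{2}-\sum_{n<N}s_p(n)\right).
```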
13. A Matlab Symbolic Computation Method for Observability Analysis of Visual Navigation Information
Authors: Shi Ying, Duan Guangren, Sun Debo. Journal of Astronautics (EI, CAS, CSCD, PKU Core), 2004, No. 6, pp. 686-689.
A Matlab Symbolic Computation method for visual navigation information estimation is proposed. The rank, eigenvalues, and eigenvectors of the error covariance matrix of the optimal estimates of the six degree-of-freedom variables in a visual navigation system are computed, and conditions and conclusions on the estimability of the navigation information are obtained. The simulations not only further verify the correctness of the theoretical derivation and conclusions in Ref. [2], but also simplify the theoretical analysis. The proposed method has practical application value.
Keywords: Matlab Symbolic Computation toolbox; visual navigation information estimation; observability analysis
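Entry 13 uses MATLAB's symbolic toolbox to examine the rank and eigenstructure of an error covariance matrix. For readers without MATLAB, an analogous and purely illustrative computation in Python/SymPy on a toy 2x2 symbolic matrix is shown below; the matrix and the symbols a, b, c are not the paper's 6-DOF model.

```python
import sympy as sp

# Symbolic stand-in for an error-covariance matrix whose rank and eigenvalues
# indicate which state directions are estimable (toy 2x2 example).
a, b, c = sp.symbols('a b c', positive=True)
P = sp.Matrix([[a, c],
               [c, b]])

print("rank:", P.rank())
print("eigenvalues:", P.eigenvals())      # {eigenvalue: multiplicity}
print("eigenvectors:", P.eigenvects())
```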
14. Distributed Computation Models for Data Fusion System Simulation
Authors: Zhang Yan, Zeng Tao, Long Teng, Cui Zhishe. Journal of Beijing Institute of Technology (EI, CAS), 2001, No. 3, pp. 291-297.
An attempt has been made to develop a distributed software infrastructure model for onboard data fusion system simulation, which also applies to netted radar systems, onboard distributed detection systems, and advanced C3I systems. Two architectures are provided and verified: one is based on the pure TCP/IP protocol and the client/server model and is implemented with Winsock; the other is based on CORBA (common object request broker architecture). Both models improve the performance of the data fusion simulation system, i.e., its reliability, flexibility, and scalability. Their study is a valuable exploration of incorporating distributed computation concepts into radar system simulation techniques.
Keywords: radar system; computer network; data fusion; simulation; distributed computation
15. Energy-Efficient Computation Offloading and Resource Allocation in Fog Computing for Internet of Everything (cited 21 times)
Authors: Qiuping Li, Junhui Zhao, Yi Gong, Qingmiao Zhang. China Communications (SCIE, CSCD), 2019, No. 3, pp. 32-41.
With the dawning of the Internet of Everything (IoE) era, more and more novel applications are being deployed. However, resource-constrained devices cannot fulfill the resource requirements of these applications. This paper investigates the computation offloading problem in the coexistence and synergy of fog computing and cloud computing in the IoE by jointly optimizing the offloading decisions, the allocation of computation resources, and the transmit power. Specifically, we propose an energy-efficient computation offloading and resource allocation (ECORA) scheme to minimize the system cost. The simulation results verify that the proposed scheme can effectively decrease the system cost by up to 50% compared with existing schemes, especially when the computation resources of fog computing are relatively small or the number of devices increases.
Keywords: fog computing; cloud computing; resource allocation; computation offloading; IoE
16. A Deep Learning Based Energy-Efficient Computational Offloading Method in Internet of Vehicles (cited 15 times)
Authors: Xiaojie Wang, Xiang Wei, Lei Wang. China Communications (SCIE, CSCD), 2019, No. 3, pp. 81-91.
With the emergence of advanced vehicular applications, the challenge of satisfying the computational and communication demands of vehicles has become increasingly prominent. Fog computing is a potential solution for improving advanced vehicular services by enabling computational offloading at the edge of the network. In this paper, we propose a fog-cloud computational offloading algorithm for the Internet of Vehicles (IoV) that minimizes both the power consumption of vehicles and that of the computational facilities. First, we establish the system model and formulate the offloading problem as an optimization problem, which is NP-hard. We then propose a heuristic algorithm that solves the offloading problem step by step. Specifically, we design a predictive combination transmission mode for vehicles and establish a deep learning model for the computational facilities to obtain the optimal workload allocation. Simulation results demonstrate the superiority of our algorithm in energy efficiency and network latency.
Keywords: computational offloading; fog computing; deep learning; Internet of Vehicles
17. Multi-scale computation methods: Their applications in lithium-ion battery research and development (cited 36 times)
Authors: Shi Siqi, Gao Jian, Liu Yue, Zhao Yan, Wu Qu, Ju Wangwei, Ouyang Chuying, Xiao Ruijuan. Chinese Physics B (SCIE, EI, CAS, CSCD), 2016, No. 1, pp. 174-197.
Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales.
Keywords: multiscale computation; lithium-ion battery; material design
18. Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network (cited 17 times)
Authors: Ziying Wu, Danfeng Yan. China Communications (SCIE, CSCD), 2021, No. 11, pp. 26-41.
Multi-access edge computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than a remote cloud server to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of Internet of Vehicles (IoV) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear; compared with traditional Internet services, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based vehicle-aware multi-access edge computing network (VAMECN) and pose a joint optimization problem of minimizing the total system cost. A deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and the network communication bandwidth. The mixed-integer nonlinear programming problem is described as a Markov decision process. Experiments show that, compared with other computation offloading policies, the proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize the computation offloading and resource allocation schemes, and improve system resource utilization.
Keywords: multi-access edge computing; computation offloading; 5G; vehicle-aware; deep reinforcement learning; deep Q-network
19. Influence of random phase modulation on the imaging quality of computational ghost imaging (cited 4 times)
Authors: Chao Gao, Xiao-Qian Wang, Hong-Ji Cai, Jie Ren, Ji-Yuan Liu, Zhi-Hai Yao. Chinese Physics B (SCIE, EI, CAS, CSCD), 2019, No. 2, pp. 77-81.
In this paper, we investigate phase modulation-based computational ghost imaging. Numerical simulations show that the range of the random phase affects the quality of the reconstructed image. Moreover, compared with amplitude modulation-based computational ghost imaging schemes, introducing random phase modulation into the computational ghost imaging scheme can significantly improve the spatial resolution of the reconstructed image and extend the field of view.
Keywords: computational ghost imaging; random phase modulation; imaging quality
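Entry 19 concerns how the random-phase range affects computational ghost imaging. The simulation sketch below reproduces the basic pipeline (random phase mask, far-field speckle via an FFT under a Fraunhofer-type assumption, bucket detection, second-order correlation reconstruction); the object, pattern count, and phase range are arbitrary illustrative choices, and the phase range is precisely the knob the paper studies.

```python
import numpy as np

rng = np.random.default_rng(7)
n, M, phase_range = 32, 4000, np.pi        # image size, measurements, random-phase range (tunable)

# A hypothetical binary test object.
obj = np.zeros((n, n))
obj[8:24, 12:20] = 1.0

# Each measurement: a random phase mask propagated to the far field (modelled by an FFT)
# yields a speckle intensity pattern; a bucket detector records the total transmitted light.
patterns, buckets = [], []
for _ in range(M):
    phase = rng.uniform(0.0, phase_range, size=(n, n))
    speckle = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
    patterns.append(speckle)
    buckets.append(float((speckle * obj).sum()))
patterns, buckets = np.array(patterns), np.array(buckets)

# Second-order correlation reconstruction: G = <B * I> - <B><I>.
G = (buckets[:, None, None] * patterns).mean(axis=0) - buckets.mean() * patterns.mean(axis=0)
print("correlation with the object:", np.corrcoef(G.ravel(), obj.ravel())[0, 1])
```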
20. Modeling of gas-solid flow in a CFB riser based on computational particle fluid dynamics (cited 7 times)
Authors: Zhang Yinghui, Lan Xingying, Gao Jinsen. Petroleum Science (SCIE, CAS, CSCD), 2012, No. 4, pp. 535-543.
A three-dimensional model for gas-solid flow in a circulating fluidized bed (CFB) riser was developed based on computational particle fluid dynamics (CPFD). The model was used to simulate the gas-solid flow behavior inside a circulating fluidized bed riser operating at various superficial gas velocities and solids mass fluxes in two fluidization regimes, a dilute phase transport (DPT) regime and a fast fluidization (FF) regime. The simulation results were evaluated based on comparison with experimental data of solids velocity and holdup, obtained from non-invasive automated radioactive particle tracking and gamma-ray tomography techniques, respectively. The agreement of the predicted solids velocity and holdup with experimental data validated the CPFD model for the CFB riser. The model predicted the main features of the gas-solid flows in the two regimes: the uniform dilute phase in the DPT regime, and the coexistence of the dilute phase in the upper region and the dense phase in the lower region in the FF regime. The clustering and solids back mixing in the FF regime were stronger than those in the DPT regime.
Keywords: gas-solid flow; circulating fluidized bed; computational particle fluid dynamics; modeling; hydrodynamics