Low Earth orbit (LEO) satellites, with their wide coverage, can carry mobile edge computing (MEC) servers with powerful computing capabilities to form a LEO satellite edge computing system that provides computing services to ground users worldwide. In this paper, the joint computation offloading and resource allocation problem is formulated as a mixed-integer nonlinear program (MINLP). A computation offloading algorithm based on the deep deterministic policy gradient (DDPG) is proposed to obtain the user offloading decisions and user uplink transmission power, and a convex optimization algorithm based on the Lagrange multiplier method is used to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm converges well and significantly reduces the system utility value, at a considerable time cost, compared with other algorithms.
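The abstract does not reproduce the paper's objective, but Lagrange-multiplier resource allocation of this kind often admits a closed form. As an illustration only (the weighted-delay objective, weights, and cycle counts below are assumptions, not the authors' formulation): minimizing a weighted sum of execution delays Σ_k w_k·C_k/f_k subject to Σ_k f_k ≤ F gives, via the KKT stationarity condition w_k·C_k/f_k² = μ, an allocation f_k ∝ √(w_k·C_k).

```python
import math

def allocate_cpu(cycles, weights, f_total):
    """Closed-form KKT solution of
        min  sum_k w_k * C_k / f_k   s.t.  sum_k f_k <= f_total,
    a common shape for MEC server CPU allocation: stationarity of the
    Lagrangian gives f_k proportional to sqrt(w_k * C_k)."""
    roots = [math.sqrt(w * c) for w, c in zip(weights, cycles)]
    total = sum(roots)
    return [f_total * r / total for r in roots]
```

For example, two tasks with cycle counts 4 and 1 and equal weights split a budget of 3 as 2:1, since √4 : √1 = 2 : 1.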
Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed in which neighboring satellites in the LEO network collaboratively process tasks generated by ground user terminals, effectively improving resource utilization. The optimization objective of minimizing the system's task offloading delay and energy consumption is established and decoupled into two sub-problems. For computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal allocation. For the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the decision iteratively. Simulation results show that the ICCO algorithm effectively reduces delay and energy consumption.
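The abstract does not specify the "dynamic sticky" update rule, but a plain sigmoid-based binary PSO conveys the iteration structure used for 0/1 offloading decisions. Everything below (the cost function, inertia and acceleration constants) is a generic sketch, not the authors' algorithm:

```python
import math
import random

def bpso(cost, n_bits, n_particles=20, iters=100, seed=0):
    """Sigmoid binary PSO: real-valued velocities are squashed through a
    sigmoid to give per-bit flip probabilities; personal/global bests
    attract the swarm toward low-cost 0/1 offloading vectors."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pcost = [cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration (assumed)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                prob = 1.0 / (1.0 + math.exp(-V[i][d]))   # P(bit = 1)
                X[i][d] = 1 if rng.random() < prob else 0
            c = cost(X[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = X[i][:], c
                if c < gcost:
                    gbest, gcost = X[i][:], c
    return gbest, gcost
```

On a toy cost such as the Hamming distance to a target decision vector, the swarm locates the optimum within a few dozen iterations.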
Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server using the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet the mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and is solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme is used to select the maximum number of devices while guaranteeing that the MSE requirements are met. An alternating optimization (AO) scheme is used to transform the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
Semisubmersible naval ships are versatile military craft that combine the advantageous features of high-speed planing craft and submarines. At the surface, these ships are designed to provide sufficient speed and maneuverability; in addition, they can perform shallow dives, offering low visual and acoustic detectability. The hydrodynamic design of a semisubmersible naval ship should therefore address both at-surface and submerged conditions. In this study, numerical analyses were performed on a semisubmersible hull form to analyze its hydrodynamic features, including resistance, powering, and maneuvering. The simulations were conducted with STAR-CCM+ version 2302, a commercial package that solves the URANS equations using the SST k-ω turbulence model. The flow analysis was divided into two parts: at-surface simulations and shallowly submerged simulations. The at-surface simulations cover resistance, powering, trim, and sinkage in the transition and planing regimes, with Froude numbers ranging from 0.42 to 1.69. The shallowly submerged simulations were performed at seven submergence depths, from D/LOA = 0.0635 to D/LOA = 0.635, and at two speeds with Froude numbers of 0.21 and 0.33. The behavior of the hydrodynamic forces and pitching moment at different operating depths was comprehensively analyzed. The results of the numerical analyses provide valuable insights into the hydrodynamic performance of semisubmersible naval ships, highlighting the critical factors influencing their resistance, powering, and maneuvering capabilities in both at-surface and submerged conditions.
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate of the mMIMO system under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit powers of the users and the base station. Because the formulated problem is difficult to solve directly, we first transform the fractional objective function into subtractive form via the Dinkelbach method. The problem is then transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solution. Finally, simulations show that the proposed schemes outperform the other schemes.
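The Dinkelbach step mentioned above replaces a ratio objective num(x)/den(x) with a parameterized subtractive one, num(x) − λ·den(x), and iterates λ to the optimal ratio. A minimal generic sketch (the toy rate-like and power-like functions below are illustrative assumptions, not the paper's expressions):

```python
import math

def dinkelbach(num, den, solve_sub, lam=0.0, tol=1e-9, max_iter=100):
    """Dinkelbach's method for max_x num(x)/den(x) with den(x) > 0:
    repeatedly solve the subtractive subproblem max_x num(x) - lam*den(x)
    (via solve_sub) and update lam to the achieved ratio until the
    subproblem value F(lam) reaches ~0, at which point lam is optimal."""
    for _ in range(max_iter):
        x = solve_sub(lam)
        if abs(num(x) - lam * den(x)) < tol:   # F(lam) ~ 0 => done
            break
        lam = num(x) / den(x)
    return x, lam

# Toy illustration: choose a discrete "power level" maximizing rate/power.
cand = [1, 2, 3, 4]
num = lambda x: math.log(1 + x)   # rate-like numerator (assumed)
den = lambda x: 0.5 + x           # power-like denominator (assumed)
x_opt, ratio = dinkelbach(
    num, den, lambda l: max(cand, key=lambda x: num(x) - l * den(x)))
```

On finite candidate sets the iteration terminates in a handful of steps, since λ increases monotonically toward the optimal ratio.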
Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; how to design such photocatalysts is therefore attracting widespread interest as a route to boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional trial-and-error method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also illustrate the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental-practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practice and motivating the search for better descriptors.
Owing to the complex lithology of unconventional reservoirs, field interpreters usually need logging simulation models to provide a basis for interpretation. Among the various detection tools that use nuclear sources, the detector response can reflect many types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments; however, it requires extensive random sampling, consumes considerable computational resources, and cannot provide real-time results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the influence of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity calculated by Monte Carlo and by the FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using the FFCM agreed well with the model obtained by interpreters, demonstrating the validity and accuracy of the proposed method.
Model-free, data-driven prediction of chaotic motions is a long-standing challenge in nonlinear science. Stimulated by recent progress in machine learning, considerable attention has been given to the inference of chaos by the technique of reservoir computing (RC). In particular, by incorporating a parameter-control channel into the standard RC, it has been demonstrated that the machine is able not only to replicate the dynamics of the training states but also to infer new dynamics not included in the training set. The new machine-learning scheme, termed parameter-aware RC, opens up new avenues for data-based analysis of chaotic systems and holds promise for predicting and controlling many real-world complex systems. Here, using typical chaotic systems as examples, we give a comprehensive introduction to this powerful machine-learning technique, including the algorithm, the implementation, the performance, and the open questions calling for further study.
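A minimal standard reservoir computer (echo-state network) illustrates the ingredients named above: a fixed random recurrent reservoir driven by the input, and a linear readout trained by ridge regression. The parameter-control channel of parameter-aware RC is omitted here, and all sizes and scalings are illustrative assumptions rather than settings from the text:

```python
import numpy as np

def esn(u_train, y_train, u_test, n_res=100, rho=0.9, beta=1e-6, seed=0):
    """Echo-state network: drive a fixed random tanh reservoir with the
    input sequence, then train only a linear readout by ridge regression."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius

    def collect(u):
        x, states = np.zeros(n_res), []
        for v in u:
            x = np.tanh(W @ x + w_in * v)
            states.append(x.copy())
        return np.asarray(states)

    X = collect(u_train)
    # ridge readout: minimize ||X w - y||^2 + beta ||w||^2
    w_out = np.linalg.solve(X.T @ X + beta * np.eye(n_res), X.T @ y_train)
    return collect(u_test) @ w_out
```

Trained on one-step-ahead prediction of a slow sinusoid, the readout tracks the signal closely once the test reservoir's initial transient has washed out.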
Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has therefore received considerable attention in recent years; one line of pFL work exploits both global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, leaving significant shortcomings compared with centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the combined use of contrastive learning and conditional computing: contrastive learning measures feature representation similarity to adjust the local model, while conditional computing separates global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
Photonic platforms are gradually emerging as a promising option to meet the ever-growing demand for artificial intelligence, among which photonic time-delay reservoir computing (TDRC) is widely anticipated. Although such a computing paradigm employs only a single photonic device as the nonlinear node for data processing, its performance relies heavily on the fading memory provided by the delay feedback loop (FL), which restricts the extensibility of physical implementations, especially for highly integrated chips. Here, we present a simplified photonic scheme for more flexible parameter configurations that leverages a designed quasi-convolution coding (QC) and entirely removes the dependence on the FL. Unlike delay-based TDRC, encoded data in QC-based RC (QRC) enable temporal feature extraction, facilitating augmented memory capabilities; the proposed QRC can thus handle time-related tasks and sequential data without an FL. Furthermore, the scheme can be implemented with a low-power, easily integrable vertical-cavity surface-emitting laser for high-performance parallel processing. We validate the concept through simulation and an experimental comparison of QRC and TDRC, in which the simpler-structured QRC performs better across various benchmark tasks. Our results point to an auspicious solution for the hardware implementation of deep neural networks.
Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework that provides robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem over task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Solutions are then derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. Simulation results show the efficiency and superiority of the proposed framework.
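Once the scheduling subproblem is reduced to a knapsack, it can be solved exactly by the textbook dynamic program over the capacity. The item values, weights, and capacity below are illustrative, not data from the paper:

```python
def knapsack(values, weights, capacity):
    """Standard 0/1 knapsack DP: dp[c] holds the best total value
    achievable within capacity c; items are folded in one at a time,
    iterating capacity downward so each item is used at most once."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

The DP runs in O(n·capacity) time, which is pseudo-polynomial but fast for the integer budgets typical of such formulations.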
Let p be a prime and n any positive integer, and let α(n, p) denote the power of p in the factorization of n!. In this paper, we give an exact formula for the mean value ∑_{n<N} α(n, p).
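For context, α(n, p) itself is given by Legendre's formula, α(n, p) = ∑_{k≥1} ⌊n/p^k⌋, which the partial sums above build on. A direct computation:

```python
def alpha(n, p):
    """Power of the prime p in n! (Legendre's formula):
    alpha(n, p) = sum over k >= 1 of floor(n / p**k)."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total
```

For example, α(10, 2) = ⌊10/2⌋ + ⌊10/4⌋ + ⌊10/8⌋ = 5 + 2 + 1 = 8, and the partial sum ∑_{n<10} α(n, 2) evaluates to 30.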
An attempt has been made to develop a distributed software infrastructure model for onboard data fusion system simulation, which also applies to netted radar systems, onboard distributed detection systems, and advanced C3I systems. Two architectures are provided and verified: one is based on the pure TCP/IP protocol and the client/server model, implemented with Winsock; the other is based on CORBA (Common Object Request Broker Architecture). Both models improve the reliability, flexibility, and scalability of the data fusion simulation system. Their study is a valuable exploration of incorporating distributed computation concepts into radar system simulation techniques.
With the dawning of the Internet of Everything (IoE) era, more and more novel applications are being deployed. However, resource-constrained devices cannot fulfill the resource requirements of these applications. This paper investigates the computation offloading problem of the coexistence and synergy between fog computing and cloud computing in the IoE by jointly optimizing the offloading decisions and the allocation of computation resources and transmit power. Specifically, we propose an energy-efficient computation offloading and resource allocation (ECORA) scheme to minimize the system cost. Simulation results verify that the proposed scheme can decrease the system cost by up to 50% compared with existing schemes, especially when the computation resources of fog computing are relatively small or the number of devices increases.
With the emergence of advanced vehicular applications, the challenge of satisfying the computational and communication demands of vehicles has become increasingly prominent. Fog computing is a potential solution for improving advanced vehicular services by enabling computational offloading at the edge of the network. In this paper, we propose a fog-cloud computational offloading algorithm for the Internet of Vehicles (IoV) that minimizes the power consumption of both the vehicles and the computational facilities. First, we establish the system model and formulate the offloading problem as an optimization problem, which is NP-hard. We then propose a heuristic algorithm to solve the offloading problem step by step: we design a predictive combination transmission mode for vehicles and establish a deep learning model for the computational facilities to obtain the optimal workload allocation. Simulation results demonstrate the superiority of our algorithm in energy efficiency and network latency.
Based upon advances in theoretical algorithms, modeling and simulation, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a large shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, predict path-independent properties, and build fundamental understanding of performance across multiple spatial and temporal scales.
Multi-access edge computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than a remote cloud server, meeting the low-latency and high-reliability requirements of 5G application scenarios. Meanwhile, with the development of Internet of Vehicles (IoV) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional Internet traffic, these computation tasks have higher processing priority and stricter delay requirements. In this paper, we design a 5G-based vehicle-aware multi-access edge computing network (VAMECN) and pose a joint optimization problem of minimizing the total system cost. For this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth; the mixed-integer nonlinear programming problem is modeled as a Markov decision process. Experiments show that, compared with other offloading policies, the proposed algorithm can effectively reduce task processing delay and device energy consumption, optimize the offloading and resource allocation schemes, and improve system resource utilization.
In this paper, we investigate phase modulation-based computational ghost imaging. Numerical simulations show that the range of the random phase affects the quality of the reconstructed image. Moreover, compared with amplitude modulation-based computational ghost imaging schemes, introducing random phase modulation into the computational ghost imaging scheme can significantly improve the spatial resolution of the reconstructed image and extend the field of view.
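The correlation-based reconstruction at the heart of computational ghost imaging can be sketched in one dimension. Plain random intensity patterns are used below for brevity, so this illustrates the generic scheme rather than the paper's phase-modulated speckle generation; the object and pattern count are arbitrary:

```python
import random

def ghost_reconstruct(obj, n_patterns=20000, seed=0):
    """Computational ghost imaging on a 1-D 'object': illuminate with
    random patterns I_k, record bucket signals B_k = sum_j I_k[j]*obj[j],
    and reconstruct G[j] = <B * I[j]> - <B><I[j]>, the second-order
    correlation, which is proportional to Var(I) * obj[j]."""
    rng = random.Random(seed)
    npix = len(obj)
    pats = [[rng.random() for _ in range(npix)] for _ in range(n_patterns)]
    buckets = [sum(i * o for i, o in zip(p, obj)) for p in pats]
    mean_b = sum(buckets) / n_patterns
    recon = []
    for j in range(npix):
        mean_i = sum(p[j] for p in pats) / n_patterns
        corr = sum(b * p[j] for b, p in zip(buckets, pats)) / n_patterns
        recon.append(corr - mean_b * mean_i)
    return recon

obj = [0, 1, 0, 1, 1, 0, 0, 1]       # toy binary transmission object
g = ghost_reconstruct(obj)
```

With enough patterns the correlation at object pixels cleanly separates from the background, since the estimator noise shrinks as 1/√N while the signal stays at Var(I).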
A three-dimensional model of gas-solid flow in a circulating fluidized bed (CFB) riser was developed based on computational particle fluid dynamics (CPFD). The model was used to simulate the gas-solid flow behavior inside a CFB riser operating at various superficial gas velocities and solids mass fluxes in two fluidization regimes: a dilute phase transport (DPT) regime and a fast fluidization (FF) regime. The simulation results were evaluated against experimental data on solids velocity and holdup obtained from non-invasive automated radioactive particle tracking and gamma-ray tomography, respectively. The agreement of the predicted solids velocity and holdup with the experimental data validated the CPFD model for the CFB riser. The model predicted the main features of the gas-solid flows in the two regimes: a uniform dilute phase in the DPT regime, and the coexistence of a dilute phase in the upper region and a dense phase in the lower region in the FF regime. Clustering and solids back-mixing in the FF regime were stronger than in the DPT regime.
Funding: supported by the National Natural Science Foundation of China (No. 62231012), the Natural Science Foundation for Outstanding Young Scholars of Heilongjiang Province (Grant YQ2020F001), and the Heilongjiang Province Postdoctoral General Foundation (Grant AUGA4110004923).
Funding: supported in part by a subproject of the 2020 National Key Research and Development Plan (No. 2020YFC1511704); Beijing Information Science and Technology University (No. 2020KYNH212, No. 2021CGZH302); the Beijing Science and Technology Project (Grant No. Z211100004421009); and in part by the National Natural Science Foundation of China (Grant No. 62301058).
Funding: the Natural Science Foundation of Henan Province (No. 232300421097); the Program for Science & Technology Innovation Talents in Universities of Henan Province (No. 23HASTIT019, 24HASTIT038); the China Postdoctoral Science Foundation (No. 2023T160596, 2023M733251); the Open Research Fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2023D11); and the Song Shan Laboratory Foundation (No. YYJC022022003).
Funding: the authors are grateful for financial support from the National Key Projects for Fundamental Research and Development of China (2021YFA1500803); the National Natural Science Foundation of China (51825205, 52120105002, 22102202, 22088102, U22A20391); the DNL Cooperation Fund, CAS (DNL202016); and the CAS Project for Young Scientists in Basic Research (YSBR-004).
Funding: This work is supported by the National Natural Science Foundation of China (Nos. U23B20151 and 52171253).
Abstract: Owing to the complex lithology of unconventional reservoirs, field interpreters usually need logging simulation models as a basis for interpretation. Among the various detection tools that use nuclear sources, the detector response can reflect many types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, it requires extensive random sampling, consumes considerable computational resources, and cannot provide real-time results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity calculated by Monte Carlo and by the FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using the FFCM was in good agreement with the model obtained by interpreters, demonstrating the validity and accuracy of the proposed method.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 12275165). XGW was also supported by the Fundamental Research Funds for the Central Universities (Grant No. GK202202003).
Abstract: Model-free, data-driven prediction of chaotic motion is a long-standing challenge in nonlinear science. Stimulated by recent progress in machine learning, considerable attention has been given to the inference of chaos by the technique of reservoir computing (RC). In particular, by incorporating a parameter-control channel into the standard RC, it has been demonstrated that the machine is able not only to replicate the dynamics of the training states, but also to infer new dynamics not included in the training set. This new machine-learning scheme, termed parameter-aware RC, opens up new avenues for data-based analysis of chaotic systems, and holds promise for predicting and controlling many real-world complex systems. Here, using typical chaotic systems as examples, we give a comprehensive introduction to this powerful machine-learning technique, including the algorithm, the implementation, the performance, and the open questions calling for further study.
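A minimal sketch of the parameter-aware RC idea described above: a standard echo state network with one extra input channel carrying the control parameter, trained on a simple parameterized signal (one-step prediction of sin(ωt)) at two values of ω and then asked to infer at a value of ω not in the training set. The reservoir size, toy task, and all hyperparameters are illustrative choices, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                       # reservoir size
Win = rng.uniform(-0.5, 0.5, (N, 2))          # input weights: [signal, parameter]
W = rng.uniform(-0.5, 0.5, (N, N))            # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius to 0.9

def run(signal, param):
    """Drive the reservoir with (signal, parameter) pairs; return all states."""
    r, states = np.zeros(N), []
    for u, p in zip(signal, param):
        r = np.tanh(W @ r + Win @ np.array([u, p]))
        states.append(r.copy())
    return np.array(states)

# Train on one-step-ahead prediction of sin(omega * t) for two parameter values.
t = np.arange(0, 100, 0.1)
X, Y = [], []
for omega in (1.0, 2.0):
    s = np.sin(omega * t)
    st = run(s[:-1], np.full(len(s) - 1, omega))
    X.append(st[100:])                        # drop 100-step washout
    Y.append(s[1:][100:])
X, Y = np.vstack(X), np.concatenate(Y)
Wout = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)  # ridge regression

# Inference at an unseen parameter value: the parameter channel steers the machine.
omega_new = 1.5
s = np.sin(omega_new * t)
pred = run(s[:-1], np.full(len(s) - 1, omega_new)) @ Wout
err = np.sqrt(np.mean((pred[100:] - s[1:][100:]) ** 2))
```

With this setup the readout interpolates across the parameter channel, so the prediction error at ω = 1.5 stays small even though that frequency never appeared in training.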
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2022D01B187).
Abstract: Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years; one line of pFL exploits the global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, and still fall significantly short of centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the combined use of contrastive learning and conditional computing: contrastive learning measures feature representation similarity to adjust the local model, while conditional computing separates the global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
Funding: National Natural Science Foundation of China (62171305, 62405206, 62004135, 62001317, 62111530301); Natural Science Foundation of Jiangsu Province (BK20240778, BK20241917); State Key Laboratory of Advanced Optical Communication Systems and Networks, China (2023GZKF08); China Postdoctoral Science Foundation (2024M752314); Postdoctoral Fellowship Program of CPSF (GZC20231883); Innovative and Entrepreneurial Talent Program of Jiangsu Province (JSSCRC2021527).
Abstract: Photonic platforms are gradually emerging as a promising option to meet the ever-growing demand of artificial intelligence, among which photonic time-delay reservoir computing (TDRC) is widely anticipated. While such a computing paradigm employs only a single photonic device as the nonlinear node for data processing, its performance relies heavily on the fading memory provided by the delay feedback loop (FL), which restricts the extensibility of physical implementations, especially for highly integrated chips. Here, we present a simplified photonic scheme with more flexible parameter configuration that leverages a designed quasi-convolution coding (QC) and completely removes the dependence on the FL. Unlike delay-based TDRC, data encoded in QC-based RC (QRC) enables temporal feature extraction, facilitating augmented memory capability. Our proposed QRC can therefore handle time-related tasks and sequential data without an FL. Furthermore, the hardware can be implemented with a low-power, easily integrable vertical-cavity surface-emitting laser for high-performance parallel processing. We validate the concept through simulation and experimental comparison of QRC and TDRC, in which the simpler-structured QRC outperforms across various benchmark tasks. Our results may point to an auspicious solution for the hardware implementation of deep neural networks.
基金supported in part by the National Natural Science Foundation of China under Grant No.61473066in part by the Natural Science Foundation of Hebei Province under Grant No.F2021501020+2 种基金in part by the S&T Program of Qinhuangdao under Grant No.202401A195in part by the Science Research Project of Hebei Education Department under Grant No.QN2025008in part by the Innovation Capability Improvement Plan Project of Hebei Province under Grant No.22567637H
Abstract: Recently, one of the main challenges facing the smart grid has been insufficient computing resources and intermittent energy supply for its distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework to provide robust, low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem over task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Solutions are then derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle the intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. Simulation results show the efficiency and superiority of the proposed framework.
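The abstract notes that the decoupled minimization problem transforms into a typical knapsack problem. For illustration, the standard 0/1 knapsack dynamic program (a generic textbook solver, not the paper's specific algorithm) looks like this:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming: dp[c] holds the best total
    value achievable with total weight at most c."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities in reverse so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

Runtime is O(n·C) for n items and integer capacity C, which is pseudo-polynomial; this is why formulations that reduce to knapsack admit efficient exact solutions at moderate capacities.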
Abstract: Let p be a prime and n any positive integer, and let α(n,p) denote the power of p in the factorization of n!. In this paper, we give an exact computational formula for the mean value ∑_{n<N} α(n,p).
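For reference, α(n,p) itself has the classical closed form (Legendre's formula) α(n,p) = ∑_{i≥1} ⌊n/p^i⌋, so the mean value in question can be evaluated directly; the sketch below does exactly that by summation. The paper's contribution is an exact closed-form expression for this sum, which is not reproduced here.

```python
def alpha(n, p):
    """Legendre's formula: the exponent of the prime p in n! is
    sum over i >= 1 of floor(n / p**i) (finitely many nonzero terms)."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

def mean_value_sum(N, p):
    """The partial sum sum_{n < N} alpha(n, p) studied in the abstract,
    computed by brute force."""
    return sum(alpha(n, p) for n in range(1, N))
```

For example, α(10, 2) = 5 + 2 + 1 = 8, i.e. 2^8 divides 10! exactly.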
Abstract: An attempt has been made to develop a distributed software infrastructure model for onboard data fusion system simulation, which also applies to netted radar systems, onboard distributed detection systems, and advanced C3I systems. Two architectures are provided and verified: one based on the pure TCP/IP protocol and the client/server model, implemented with Winsock; the other based on CORBA (Common Object Request Broker Architecture). Both models improve the reliability, flexibility, and scalability of the data fusion simulation system. Their study is a valuable exploration of incorporating distributed computation concepts into radar system simulation techniques.
基金supported by the Fundamental Research Funds for the Central Universities (No. 2018YJS008)the National Natural Science Foundation of China (61471031, 61661021, 61531009)+4 种基金Beijing Natural Science Foundation (L182018)the Open Research Fund of National Mobile Communications Research Laboratory, Southeast University (No. 2017D14)the State Key Laboratory of Rail Traffi c Control and Safety (Contract No. RCS2017K009)Science and Technology Program of Jiangxi Province (20172BCB22016, 20171BBE50057)Shenzhen Science and Technology Program under Grant (No. JCYJ20170817110410346)
Abstract: With the dawning of the Internet of Everything (IoE) era, more and more novel applications are being deployed. However, resource-constrained devices cannot fulfill the resource requirements of these applications. This paper investigates the computation offloading problem in the coexistence and synergy of fog computing and cloud computing in the IoE by jointly optimizing the offloading decisions and the allocation of computation resources and transmit power. Specifically, we propose an energy-efficient computation offloading and resource allocation (ECORA) scheme to minimize the system cost. Simulation results verify that the proposed scheme can decrease the system cost by up to 50% compared with existing schemes, especially when the computation resources of fog computing are relatively small or the number of devices increases.
基金supported by National Natural Science Foundation of China with No. 61733002 and 61842601National Key Research and Development Plan 2017YFC0821003-2the Fundamental Research Funds for the Central University with No. DUT17LAB16 and No. DUT2017TB02
Abstract: With the emergence of advanced vehicular applications, the challenge of satisfying the computational and communication demands of vehicles has become increasingly prominent. Fog computing is a potential solution for improving advanced vehicular services by enabling computational offloading at the edge of the network. In this paper, we propose a fog-cloud computational offloading algorithm for the Internet of Vehicles (IoV) that minimizes both the power consumption of vehicles and that of the computational facilities. First, we establish the system model and formulate the offloading problem as an optimization problem, which is NP-hard. We then propose a heuristic algorithm that solves the offloading problem in stages. Specifically, we design a predictive combination transmission mode for vehicles and establish a deep learning model for the computational facilities to obtain the optimal workload allocation. Simulation results demonstrate the superiority of our algorithm in energy efficiency and network latency.
基金supported by the National Natural Science Foundation of China(Grant Nos.51372228 and 11234013)the National High Technology Research and Development Program of China(Grant No.2015AA034201)Shanghai Pujiang Program,China(Grant No.14PJ1403900)
Abstract: Based upon advances in theoretical algorithms, modeling and simulation, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution: combining calculations and experiments linked by a large shared database will enable accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as to predict path-independent properties and to fundamentally understand path-independent performance across multiple spatial and temporal scales.
基金supported in part by the National Key R&D Program of China under Grant 2018YFC0831502.
Abstract: Multi-access edge computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than a remote cloud server, meeting the low-latency and high-reliability requirements of 5G application scenarios. Meanwhile, with the development of Internet of Vehicles (IoV) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional Internet traffic, these computation tasks have higher processing priority and tighter delay requirements. In this paper, we design a 5G-based vehicle-aware multi-access edge computing network (VAMECN) and pose a joint optimization problem of minimizing the total system cost. For this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is described as a Markov decision process. Experiments show that, compared with other computation offloading policies, the proposed algorithm can effectively reduce task processing delay and device energy consumption, optimize the computation offloading and resource allocation schemes, and improve system resource utilization.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 11305020); the Science and Technology Research Projects of the Education Department of Jilin Province, China (Grant No. 2016-354); and the Science and Technology Development Project of Jilin Province, China (Grant No. 20180520165JH).
Abstract: In this paper, we investigate phase modulation-based computational ghost imaging. Numerical simulations show that the range of the random phase affects the quality of the reconstructed image. Moreover, compared with amplitude modulation-based computational ghost imaging schemes, introducing random phase modulation into the computational ghost imaging scheme significantly improves the spatial resolution of the reconstructed image and extends the field of view.
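For context, the correlation reconstruction at the heart of computational ghost imaging can be sketched as G(x,y) = ⟨I·S(x,y)⟩ − ⟨I⟩⟨S(x,y)⟩, where S are the computed illumination patterns and I is the single-pixel (bucket) detector signal. The toy below uses amplitude (intensity) patterns for simplicity, whereas the paper's scheme modulates random phase; the object, pattern statistics, and sizes are all invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A simple transmissive object on an 8x8 grid (1 = transparent, 0 = opaque).
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0

M = 20000                                   # number of computed patterns
patterns = rng.random((M, 8, 8))            # known random illumination patterns

# Bucket detector: total transmitted intensity per pattern (no spatial resolution).
bucket = (patterns * obj).sum(axis=(1, 2))

# Correlation reconstruction: G = <I * S> - <I><S>, pixel by pixel.
G = (bucket[:, None, None] * patterns).mean(axis=0) \
    - bucket.mean() * patterns.mean(axis=0)
```

Because the patterns are computed rather than measured, no reference arm is needed; the reconstruction G correlates strongly with the object as M grows.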
基金support by the National Basic Research Program (Grant No. 2010CB226906,and 2012CB215000)
Abstract: A three-dimensional model of gas-solid flow in a circulating fluidized bed (CFB) riser was developed based on computational particle fluid dynamics (CPFD). The model was used to simulate gas-solid flow behavior inside a CFB riser operating at various superficial gas velocities and solids mass fluxes in two fluidization regimes: a dilute phase transport (DPT) regime and a fast fluidization (FF) regime. The simulation results were evaluated by comparison with experimental data on solids velocity and holdup, obtained from non-invasive automated radioactive particle tracking and gamma-ray tomography, respectively. The agreement of the predicted solids velocity and holdup with the experimental data validated the CPFD model for the CFB riser. The model predicted the main features of the gas-solid flows in both regimes: a uniform dilute phase in the DPT regime, and the coexistence of a dilute phase in the upper region and a dense phase in the lower region in the FF regime. Clustering and solids back-mixing in the FF regime were stronger than in the DPT regime.