Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years. One line of pFL work exploits both the global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, leaving them well short of centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the combined use of contrastive learning and conditional computing. Contrastive learning measures feature representation similarity to adjust the local model. Conditional computing separates the global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
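The abstract does not give FedCLCC's formulas, but the idea of using representation similarity to adjust a local model can be sketched. The sketch below is a hypothetical illustration, not the paper's method: cosine similarity between local and global feature vectors is squashed into a mixing weight, so a client whose features agree with the global model trusts the global head more. The function names and the temperature parameter are assumptions for illustration.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_weight(local_feat, global_feat, temperature=0.5):
    """Map representation similarity to a mixing weight in (0, 1):
    the more the local features agree with the global ones, the
    more the global head is trusted. Illustrative only."""
    sim = cosine_similarity(local_feat, global_feat)
    return 1.0 / (1.0 + math.exp(-sim / temperature))

w = contrastive_weight([1.0, 0.5, 0.2], [0.9, 0.6, 0.1])
print(round(w, 3))
```

A real pFL system would compute such similarities over minibatch embeddings inside the training loop rather than on single vectors.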
Cloud computing represents a novel computing model in the contemporary technology world. In a cloud system, the computing power of virtual machines (VMs) and the network status can greatly affect the completion time of data-intensive tasks. However, most current resource allocation policies focus only on network conditions and physical hosts, largely ignoring the computing power of VMs. This paper proposes a comprehensive resource allocation policy consisting of a data-intensive task scheduling algorithm that takes account of the computing power of VMs, and a VM allocation policy that considers the bandwidth between storage nodes and hosts. The VM allocation policy includes VM placement and VM migration algorithms. Simulations show that the proposed algorithms can greatly reduce task completion time while keeping the physical hosts well load-balanced.
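The core of a scheduling policy that accounts for both VM computing power and bandwidth can be sketched as a greedy estimate of completion time. This is a minimal illustration of the idea, not the paper's algorithm; the field names, MIPS units, and the simple additive transfer-plus-compute model are assumptions.

```python
def estimate_completion(task, vm):
    """Transfer time plus compute time for a data-intensive task:
    data size over the storage-to-host bandwidth, then task length
    over the VM's computing power (MIPS)."""
    transfer = task["data_mb"] / vm["bandwidth_mbps"]
    compute = task["length_mi"] / vm["mips"]
    return transfer + compute

def schedule(task, vms):
    """Greedy choice: assign the task to the VM with the smallest
    estimated completion time."""
    return min(vms, key=lambda vm: estimate_completion(task, vm))

vms = [
    {"id": "vm1", "mips": 1000, "bandwidth_mbps": 100},
    {"id": "vm2", "mips": 2500, "bandwidth_mbps": 50},
]
task = {"data_mb": 500, "length_mi": 20000}
print(schedule(task, vms)["id"])  # vm2: 10 s transfer + 8 s compute beats vm1's 5 + 20
```

Note how a bandwidth-only policy would pick vm1, while accounting for VM computing power flips the decision to vm2.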
In the age of online workload explosion, cloud users are increasing exponentially, so large-scale data centers are required in the cloud environment, which leads to high energy consumption. Optimal resource utilization is therefore essential to improve the energy efficiency of cloud data centers. Most of the existing literature focuses on virtual machine (VM) consolidation, increasing energy efficiency at the cost of service level agreement (SLA) degradation. To improve on existing approaches, a load-aware three-gear THReshold (LATHR) method together with a modified best fit decreasing (MBFD) algorithm is proposed for minimizing total energy consumption while improving quality of service in terms of SLA. It offers promising results under dynamic workloads and a variable number of VMs (1-290) allocated to an individual host. The outcomes of the proposed work are measured in terms of SLA, energy consumption, instruction energy ratio (IER), and the number of migrations against varied numbers of VMs. Experimental results show that the proposed technique reduced SLA violations (by 55%, 26%, and 39%) and energy consumption (by 17%, 12%, and 6%) compared with the median absolute deviation (MAD), interquartile range (IQR), and double threshold (THR) overload detection policies, respectively.
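A "three-gear threshold" overload detector can be illustrated as a utilization-to-state mapping that separates consolidation candidates from migration triggers. The threshold values and state names below are illustrative assumptions, not LATHR's actual gears.

```python
def classify_load(cpu_util, gears=(0.3, 0.7, 0.9)):
    """Three-gear threshold detection: map CPU utilization to a
    load state. The gear values here are illustrative defaults,
    not the thresholds from the LATHR paper."""
    low, high, overload = gears
    if cpu_util < low:
        return "underloaded"   # candidate for consolidation (switch host off)
    if cpu_util < high:
        return "normal"        # leave alone
    if cpu_util < overload:
        return "loaded"        # watch, but do not migrate yet
    return "overloaded"        # trigger VM migration to protect SLA

print([classify_load(u) for u in (0.1, 0.5, 0.8, 0.95)])
```

Compared with a single threshold, the middle "loaded" gear gives hysteresis: hosts near capacity are watched rather than immediately drained, reducing needless migrations.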
Cloud computing has emerged as a leading computing paradigm, with an increasing number of geographic information (geo-information) processing tasks now running on clouds. For this reason, geographic information system/remote sensing (GIS/RS) researchers rent more public clouds or establish more private clouds. However, a large proportion of these clouds are found to be underutilized, since users do not deal with big data every day. The low usage of cloud resources violates the original intention of cloud computing, which is to save resources by improving usage. In this work, a low-cost cloud computing solution was proposed for geo-information processing, especially for temporary processing tasks. The proposed solution adopts a hosted architecture and can be realized on ordinary computers in a common GIS/RS laboratory. Its usefulness and effectiveness were demonstrated using big data simplification as a case study. Compared with commercial public clouds and dedicated private clouds, the proposed solution is cheaper and more resource-saving, and is more suitable for GIS/RS applications.
Private clouds and public clouds are merging into an open, integrated cloud computing environment, which can fully aggregate and utilize the computing, storage, information, and other hardware and software resources of WAN and LAN networks, but also brings a series of security, reliability, and credibility problems. To solve these problems, a novel secure-agent-based trustworthy virtual private cloud model named SATVPC was proposed for the integrated and open cloud computing environment. Through the introduction of secure-agent technology, SATVPC provides an independent, safe, and trustworthy virtual private computing platform for multi-tenant systems. To meet SATVPC's credibility requirements and establish trust relationships between each task execution agent and task executor node that suit their security policies, a new dynamic composite credibility evaluation mechanism was presented, comprising a credit index computing algorithm and a credibility differentiation strategy. The experimental system shows that SATVPC and the credibility evaluation mechanism can feasibly ensure the security of open computing environments. Experimental results and performance analysis also show that the credit index computing algorithm can evaluate the credibility of task execution agents and task executor nodes quantitatively, correctly, and operationally.
In order to lower the power consumption and improve resource utilization in current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on a "shut down the redundant, turn on the demanded" strategy. First, a green cloud computing model is presented that abstracts the task scheduling problem into a virtual machine deployment problem via virtualization technology. Second, the system's future workload must be predicted: a cubic exponential smoothing algorithm based on a conservative control (CESCC) strategy is proposed, combined with the current state and resource distribution of the system, to calculate the resource demand for the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To reduce power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy keep resource pre-allocation in step with demand, improving real-time response efficiency and system stability. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize resource utilization, and greatly reduce the power consumption of cloud computing systems.
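The workload-prediction step can be sketched with Brown's cubic (triple) exponential smoothing, a standard textbook form of the technique the abstract names; the CESCC paper's conservative-control adjustments are not reproduced here, and the alpha value and sample series are assumptions.

```python
def cubic_smooth_forecast(series, alpha=0.3, horizon=1):
    """Brown's cubic (triple) exponential smoothing forecast:
    three cascaded smoothing passes, then a quadratic extrapolation
    `horizon` steps ahead. Initialization from the first sample."""
    s1 = s2 = s3 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)
    return a + b * horizon + 0.5 * c * horizon ** 2

demand = [100, 110, 125, 140, 160, 185]   # rising task-request counts
print(round(cubic_smooth_forecast(demand), 1))
```

On an accelerating series like this one the cubic form extrapolates above the last observation, which is what lets a pre-allocation policy turn hosts on before demand arrives; a conservative-control layer would then damp that forecast to avoid over-provisioning.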
In order to improve the energy efficiency of large-scale data centers, a virtual machine (VM) deployment algorithm called the three-threshold energy saving algorithm (TESA), based on the linear relation between energy consumption and (processor) resource utilization, is proposed. In TESA, hosts in data centers are divided into four classes according to load: hosts with light load, proper load, middle load, and heavy load. Under TESA, VMs on a lightly loaded host or a heavily loaded host are migrated to another host with proper load, while VMs on properly loaded or middling loaded hosts are kept in place. Then, based on TESA, five VM selection policies (minimization of migrations policy based on TESA (MIMT), maximization of migrations policy based on TESA (MAMT), highest potential growth policy based on TESA (HPGT), lowest potential growth policy based on TESA (LPGT), and random choice policy based on TESA (RCT)) are presented, and MIMT is chosen as the representative policy through experimental comparison. Finally, five research directions for future energy management are put forward. Simulation results indicate that, compared with the single threshold (ST) algorithm and the minimization of migrations (MM) algorithm, MIMT significantly improves energy efficiency in data centers.
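The TESA classification and migration rule stated in the abstract can be sketched directly; the three threshold values below are illustrative assumptions (the paper derives its own), but the four classes and the migrate/keep decision follow the abstract.

```python
def classify_host(util, light=0.2, proper=0.5, middle=0.8):
    """Three thresholds split hosts into TESA's four load classes.
    Threshold values are illustrative, not the paper's."""
    if util < light:
        return "light"
    if util < proper:
        return "proper"
    if util < middle:
        return "middle"
    return "heavy"

def needs_migration(util):
    """TESA rule from the abstract: VMs leave lightly or heavily
    loaded hosts for a properly loaded one; VMs on proper or
    middle hosts are kept constant."""
    return classify_host(util) in ("light", "heavy")

print([(u, needs_migration(u)) for u in (0.1, 0.4, 0.7, 0.9)])
```

A selection policy such as MIMT would then decide, on each host flagged by `needs_migration`, which specific VMs to move so the migration count stays minimal.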
This article sheds light on service supply chain issues associated with cloud computing by examining several interrelated questions: the service supply chain architecture from a service perspective, the basic clouds of the service supply chain, and the development of managerial insights into these clouds. In particular, to demonstrate how those services can be utilized and the processes involved in their utilization, a hypothetical meta-modeling service of cloud computing is given. Moreover, the paper defines the architecture of the cloud managed for an SV or ISP in the cloud computing infrastructure of the service supply chain: IT services, business services, and business processes, which create atomic and composite software services that are used to perform business processes with business service choreographies.
A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for improving the match between resources and requirements. A complex algorithm is not viable because LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single paternal inheritance method, is designed to support distributed computation and enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can provide an allocation result for more than 1500 tasks in 14 s with a success rate above 91% in a typical scene. The response time is decreased by 40% compared with the conventional GA.
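The "single paternal inheritance" idea, where each offspring derives from one parent by mutation rather than crossover, can be sketched with a toy mutation-only GA assigning tasks to beams. Everything here (capacity model, fitness, population sizes) is an illustrative assumption, not the paper's two-dimensional individual model.

```python
import random

def fitness(assignment, capacity):
    """Number of tasks served without exceeding any beam's capacity."""
    load, served = {}, 0
    for beam in assignment:
        if load.get(beam, 0) < capacity[beam]:
            load[beam] = load.get(beam, 0) + 1
            served += 1
    return served

def evolve(n_tasks, capacity, pop=20, gens=50, seed=1):
    """Mutation-only GA: each child inherits from a single parent
    (no crossover), loosely mirroring single paternal inheritance.
    Elitist: the top half survives every generation."""
    rng = random.Random(seed)
    beams = list(capacity)
    population = [[rng.choice(beams) for _ in range(n_tasks)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ind: -fitness(ind, capacity))
        parents = population[: pop // 2]
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(n_tasks)] = rng.choice(beams)  # point mutation
            children.append(child)
        population = parents + children
    return max(fitness(ind, capacity) for ind in population)

print(evolve(12, {"b1": 4, "b2": 4, "b3": 4}))
```

Avoiding crossover keeps children independent of other individuals, which is what makes this style of GA easy to shard across distributed on-board processors.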
[Objective] Real-time monitoring of cow ruminant behavior is of paramount importance for promptly obtaining relevant information about cow health and predicting cow diseases. Currently, various strategies have been proposed for monitoring cow ruminant behavior, including video surveillance, sound recognition, and sensor monitoring methods. However, the use of edge devices gives rise to inadequate real-time performance. To reduce the volume of data transmission and the cloud computing workload while achieving real-time monitoring of dairy cow rumination behavior, a real-time monitoring method for cow ruminant behavior based on edge computing was proposed. [Methods] Autonomously designed edge devices were used to collect and process six-axis acceleration signals from cows in real time. Based on these six-axis data, two distinct strategies, federated edge intelligence and split edge intelligence, were investigated for real-time recognition of cow ruminant behavior. For the federated edge intelligence approach, the CA-MobileNet v3 network was proposed by enhancing the MobileNet v3 network with a collaborative attention mechanism, and a federated edge intelligence model was designed using the CA-MobileNet v3 network and the FedAvg federated aggregation algorithm. For split edge intelligence, a model named MobileNet-LSTM was designed by integrating the MobileNet v3 network with a fused collaborative attention mechanism and the Bi-LSTM network. [Results and Discussions] In comparative experiments with MobileNet v3 and MobileNet-LSTM, the federated edge intelligence model based on CA-MobileNet v3 achieved an average Precision, Recall, F1-Score, Specificity, and Accuracy of 97.1%, 97.9%, 97.5%, 98.3%, and 98.2%, respectively, yielding the best recognition performance. [Conclusions] This work provides a real-time and effective method for monitoring cow ruminant behavior, and the proposed federated edge intelligence model can be applied in practical settings.
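The FedAvg aggregation step named in the abstract is a published, well-defined algorithm: the server averages each model parameter across clients, weighted by local sample counts. The sketch below shows it on flattened parameter vectors; the toy weights and sizes are illustrative.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg: weighted average of each model parameter across
    clients, with weights proportional to local sample counts."""
    total = sum(client_sizes)
    agg = [0.0] * len(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        for i, p in enumerate(w):
            agg[i] += p * n / total
    return agg

# Two edge devices with flattened parameter vectors; the second
# device has three times the data, so it dominates the average.
w1, w2 = [0.2, 0.4], [0.6, 0.0]
print(fedavg([w1, w2], [100, 300]))
```

In the cow-monitoring setting, each edge device would train CA-MobileNet v3 locally on its own acceleration data and send only these parameter vectors, never raw signals, to the aggregator.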
In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. The model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than PSO, 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Mobile cloud computing (MCC) combines the mobile Internet and cloud computing to improve the performance of mobile applications. However, MCC faces an energy-efficiency problem because of randomly varying channels. A scheduling algorithm is proposed by introducing Lyapunov optimization, which can dynamically choose users to transmit data based on queue backlog and channel statistics. The Lyapunov analysis shows that the proposed scheduling algorithm can trade off queue backlog against energy consumption in a channel-aware mobile cloud computing system. Simulation results verify the effectiveness of the proposed algorithm.
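Lyapunov-based scheduling typically takes the drift-plus-penalty form: each slot, serve the user maximizing queue backlog times achievable rate minus V times power cost. The sketch below illustrates that standard decision rule, not the paper's specific formulation; the scoring expression and V are assumptions.

```python
def pick_user(queues, rates, powers, V=1.0):
    """Drift-plus-penalty scheduling: each slot, transmit for the
    user maximizing backlog * rate - V * power. Larger V trades
    longer queues for lower energy consumption."""
    best, best_score = None, float("-inf")
    for u in queues:
        score = queues[u] * rates[u] - V * powers[u]
        if score > best_score:
            best, best_score = u, score
    return best

queues = {"u1": 8, "u2": 3}       # backlogged data per user
rates = {"u1": 1.0, "u2": 2.0}    # current channel rates
powers = {"u1": 2.0, "u2": 0.5}   # transmit power costs
print(pick_user(queues, rates, powers, V=1.0))  # u1: 8*1-2=6 beats u2's 3*2-0.5=5.5
```

Raising V in this rule shifts the choice toward low-power users, which is exactly the backlog-versus-energy tradeoff the Lyapunov analysis quantifies.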
In recent years, cloud computing has been the subject of extensive research in the emerging field of information technology and has become a promising business. The reason behind this widespread interest is its ability to increase the capacity and capability of enterprises with no investment in new infrastructure, no software license requirements, and no need for training. Security concerns are the main factor limiting the growth of this new-born technology. The security responsibilities of the provider and the consumer differ greatly between cloud service models. In this paper we discuss a variety of security risks, authentication issues, trust, and legal regulation in the cloud environment from the consumer's perspective. Early research focused only on the technical and business consequences of cloud computing and ignored the consumer's perspective. Therefore, this paper discusses consumer security and privacy preferences.
Anti-jamming performance evaluation has recently received significant attention. For Link-16, anti-jamming performance evaluation and selection of the optimal anti-jamming technologies are urgent problems to be solved. A comprehensive evaluation method is proposed that combines grey relational analysis (GRA) and the cloud model to evaluate the anti-jamming performance of Link-16. First, on the basis of establishing an anti-jamming performance evaluation indicator system for Link-16, a linear combination of the analytic hierarchy process (AHP) and the entropy weight method (EWM) is used to calculate the combined weights. Second, the qualitative-quantitative concept transformation model, i.e., the cloud model, is introduced to evaluate the anti-jamming abilities of Link-16 under each jamming scheme. In addition, GRA calculates the correlation degree between the evaluation indicators and the anti-jamming performance of Link-16, and identifies the best anti-jamming technology. Finally, simulation results prove that the proposed evaluation model achieves feasible and practical evaluation, opening up a novel way for research on anti-jamming performance evaluation of Link-16.
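Two pieces of this pipeline are standard enough to sketch: the linear combination of AHP (subjective) and EWM (objective) weights, and the grey relational coefficient from GRA. The beta balance factor, rho distinguishing coefficient, and sample numbers below are illustrative assumptions, not the paper's values.

```python
def combine_weights(ahp_w, ewm_w, beta=0.5):
    """Linear combination of subjective (AHP) and objective (EWM)
    indicator weights; beta balances the two sources."""
    return [beta * a + (1 - beta) * e for a, e in zip(ahp_w, ewm_w)]

def grey_relational_coeffs(reference, comparison, rho=0.5):
    """Grey relational coefficients between a reference sequence
    (ideal indicator values) and a comparison sequence; rho is
    the distinguishing coefficient, conventionally 0.5."""
    diffs = [abs(r - c) for r, c in zip(reference, comparison)]
    dmin, dmax = min(diffs), max(diffs)
    return [(dmin + rho * dmax) / (d + rho * dmax) for d in diffs]

w = combine_weights([0.5, 0.3, 0.2], [0.2, 0.4, 0.4])
print([round(x, 2) for x in w])
print([round(c, 2) for c in
       grey_relational_coeffs([1.0, 1.0, 1.0], [0.9, 0.7, 1.0])])
```

The weighted sum of these coefficients gives the grey relational degree used to rank candidate anti-jamming technologies.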
DNAN-based insensitive melt-cast explosives have been widely utilized in insensitive munitions in recent years. When constrained DNAN-based melt-cast explosives are ignited under thermal stimulation, the base explosive exists in a molten liquid state, where high-temperature gases expand and react in the form of bubble clouds within the liquid explosive; this process is distinctly different from the dynamic crack propagation observed in solid explosives. In this study, a control model for the reaction evolution of burning-bubble clouds was established to describe the reaction process and quantify the reaction violence of DNAN-based melt-cast explosives, considering the size distribution and activation mechanism of the burning-bubble clouds. The feasibility of the model was verified against experimental results. The results revealed that under geometrically similar conditions, with identical confinement strength and aspect ratio, larger charge structures led to extended initial gas flow and surface burning processes, resulting in greater reaction equivalence and violence at the casing fracture. Under constant charge volume and size, stronger casing confinement accelerated self-enhanced burning, increasing the internal pressure, reaction degree, and reaction violence. Under constant casing thickness and radius, higher aspect ratios led to greater reaction violence at the casing fracture. Moreover, under constant charge volume and casing thickness, higher aspect ratios resulted in higher internal pressure, an increased reaction degree, and greater reaction violence at the casing fracture. Further, larger ullage volumes extended the reaction evolution time and increased the reaction violence under constant casing dimensions. Through a matched design of the opening threshold of the pressure relief holes and the relief structure area, a stable burning reaction could be maintained until completion, thereby controlling the reaction violence. The proposed model effectively reflects the effects of the intrinsic burning rate, casing confinement strength, charge size, ullage volume, and pressure relief structure on the reaction evolution process and reaction violence, providing a theoretical method for the thermal safety design and reaction violence evaluation of melt-cast explosives.
The flexibility of traditional image processing systems is limited because those systems are designed for specific applications. In this paper, a new TMS320C64x-based multi-DSP parallel computing architecture is presented. It has many promising characteristics, such as powerful computing capability, broad I/O bandwidth, topology flexibility, and expansibility. The parallel system's performance is evaluated by practical experiment.
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2022D01B187).
Funding: Supported by the National Natural Science Foundation of China (61202354, 61272422) and the Scientific and Technological Support Project (Industry) of Jiangsu Province (BE2011189).
Funding: Project (41401434) supported by the National Natural Science Foundation of China.
Funding: Projects (61202004, 61272084) supported by the National Natural Science Foundation of China; Projects (2011M500095, 2012T50514) supported by the China Postdoctoral Science Foundation; Projects (BK2011754, BK2009426) supported by the Natural Science Foundation of Jiangsu Province, China; Project (12KJB520007) supported by the Natural Science Fund of Higher Education of Jiangsu Province, China; Project (yx002001) supported by the Priority Academic Program Development of Jiangsu Higher Education Institutions, China.
Funding: Supported by the National Natural Science Foundation of China (61472192, 61202004), the Special Fund for Fast Sharing of Science Papers in the Net Era by CSTD (2013116), and the Natural Science Fund of Higher Education of Jiangsu Province (14KJB520014).
Funding: Project (61272148) supported by the National Natural Science Foundation of China; Project (20120162110061) supported by the Doctoral Programs of the Ministry of Education of China; Project (CX2014B066) supported by the Hunan Provincial Innovation Foundation for Postgraduates, China; Project (2014zzts044) supported by the Fundamental Research Funds for the Central Universities, China.
Funding: Funded by the National Natural Science Foundation of China (No. 70631003, No. 70801024).
Abstract: This article sheds light on service supply chain issues associated with cloud computing by examining several interrelated questions: the service supply chain architecture from a service perspective, the basic clouds of the service supply chain, and the development of managerial insights into these clouds. In particular, to demonstrate how those services can be utilized and the processes involved in their utilization, a hypothetical meta-modeling service of cloud computing is given. Moreover, the paper defines the architecture of the cloud managed for SV or ISP in the cloud computing infrastructure of the service supply chain: IT services, business services, and business processes, which create atomic and composite software services used to perform business processes with business service choreographies.
Funding: This work was supported by the National Key Research and Development Program of China (2021YFB2900603) and the National Natural Science Foundation of China (61831008).
Abstract: A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the match between resources and requirements. A complex algorithm is not viable because LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single paternal inheritance method, is designed to support distributed computation and enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is constructed in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can produce an allocation result for more than 1,500 tasks in 14 s with a success rate of more than 91% in a typical scene. The response time is decreased by 40% compared with the conventional GA.
Abstract: [Objective] Real-time monitoring of cow ruminant behavior is of paramount importance for promptly obtaining information about cow health and predicting cow diseases. Various strategies have been proposed for monitoring cow ruminant behavior, including video surveillance, sound recognition, and sensor monitoring. However, the use of edge devices raises the issue of inadequate real-time performance. To reduce the volume of data transmission and the cloud computing workload while achieving real-time monitoring of dairy cow rumination behavior, a real-time monitoring method for cow ruminant behavior based on edge computing was proposed. [Methods] Autonomously designed edge devices were utilized to collect and process six-axis acceleration signals from cows in real time. Based on these six-axis data, two distinct strategies, federated edge intelligence and split edge intelligence, were investigated for the real-time recognition of cow ruminant behavior. For the federated edge intelligence approach, the CA-MobileNet v3 network was proposed by enhancing the MobileNet v3 network with a collaborative attention mechanism, and a federated edge intelligence model was designed utilizing the CA-MobileNet v3 network and the FedAvg federated aggregation algorithm. For split edge intelligence, a model named MobileNet-LSTM was designed by integrating the MobileNet v3 network, a fusion collaborative attention mechanism, and the Bi-LSTM network. [Results and Discussions] In comparative experiments with MobileNet v3 and MobileNet-LSTM, the federated edge intelligence model based on CA-MobileNet v3 achieved an average precision, recall, F1-score, specificity, and accuracy of 97.1%, 97.9%, 97.5%, 98.3%, and 98.2%, respectively, yielding the best recognition performance. [Conclusions] This work provides a real-time and effective method for monitoring cow ruminant behavior, and the proposed federated edge intelligence model can be applied in practical settings.
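The FedAvg aggregation step named above combines locally trained models on the server by averaging each parameter, weighted by the number of local samples per client. A minimal sketch, with parameters represented as plain lists of floats rather than network tensors:

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: per-parameter average of client models,
    weighted by each client's local sample count.

    client_weights : list of per-client parameter lists (same length each)
    client_sizes   : number of training samples held by each client
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        # Weighted sum of the i-th parameter across all clients.
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

With two clients holding 1 and 3 samples, the client with more data dominates the average, which is what lets FedAvg weight edge devices by how much rumination data each collected.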
Funding: Shanxi Province Higher Education Science and Technology Innovation Fund Project (2022-676); Shanxi Soft Science Program Research Fund Project (2016041008-6).
Abstract: In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. The model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the system setting. Experimental results show that when the number of test cloud-based web services reaches 2,048, the model is 2.14 times faster than particle swarm optimization (PSO), 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Funding: Supported by the National Natural Science Foundation of China (61173017) and the National High Technology Research and Development Program (863 Program) (2014AA01A701).
Abstract: Mobile cloud computing (MCC) combines the mobile Internet and cloud computing to improve the performance of mobile applications. However, MCC faces an energy efficiency problem because of randomly varying channels. A scheduling algorithm is proposed by introducing Lyapunov optimization, which dynamically chooses users to transmit data based on queue backlog and channel statistics. The Lyapunov analysis shows that the proposed scheduling algorithm can trade off queue backlog against energy consumption in a channel-aware mobile cloud computing system. Simulation results verify the effectiveness of the proposed algorithm.
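A Lyapunov drift-plus-penalty scheduler of this kind typically makes a per-slot decision by weighing each user's queue backlog and current channel rate against the energy cost, with a parameter V controlling the backlog-energy tradeoff. The sketch below is a generic drift-plus-penalty decision rule under simplifying assumptions (one transmitter per slot, known per-slot rates and energy costs); it is not the paper's exact system model.

```python
def schedule_slot(queues, rates, energy, V=10.0):
    """One drift-plus-penalty scheduling decision.

    queues : current queue backlog Q_i per user
    rates  : achievable transmission rate r_i this slot (channel state)
    energy : energy cost e_i of transmitting this slot
    V      : tradeoff weight; larger V favors energy saving over backlog
    Returns the index of the user chosen to transmit, or None to stay idle.
    """
    best, best_score = None, 0.0
    for i, (q, r, e) in enumerate(zip(queues, rates, energy)):
        score = q * r - V * e  # backlog reduction benefit minus weighted energy
        if score > best_score:
            best, best_score = i, score
    return best
```

With a large backlog and a good channel the score is positive and the user transmits; when every score is non-positive (poor channels or small queues), the scheduler idles and saves energy, which is exactly the tradeoff the Lyapunov analysis bounds.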
Abstract: In recent years, cloud computing has been the subject of extensive research in the emerging field of information technology and has become a promising business. The reason behind this widespread interest is its ability to increase the capacity and capability of enterprises with no investment in new infrastructure, no software license requirements, and no need for training. Security concerns are the main factor limiting the growth of this newborn technology. The security responsibilities of the provider and the consumer differ greatly between cloud service models. In this paper, we discuss a variety of security risks, authentication issues, trust, and legal regulation in the cloud environment from the consumer perspective. Early research focused only on the technical and business consequences of cloud computing and ignored the consumer perspective; this paper therefore discusses consumer security and privacy preferences.
Funding: Heilongjiang Provincial Natural Science Foundation of China (LH2021F009).
Abstract: Anti-jamming performance evaluation has recently received significant attention. For Link-16, evaluating anti-jamming performance and selecting the optimal anti-jamming technologies are urgent problems to be solved. A comprehensive evaluation method is proposed, combining grey relational analysis (GRA) and the cloud model, to evaluate the anti-jamming performance of Link-16. Firstly, on the basis of establishing an anti-jamming performance evaluation indicator system for Link-16, a linear combination of the analytic hierarchy process (AHP) and the entropy weight method (EWM) is used to calculate the combined weights. Secondly, the cloud model, a qualitative-to-quantitative concept transformation model, is introduced to evaluate the anti-jamming abilities of Link-16 under each jamming scheme. In addition, GRA calculates the correlation degree between the evaluation indicators and the anti-jamming performance of Link-16, and identifies the best anti-jamming technology. Finally, simulation results prove that the proposed evaluation model achieves feasible and practical evaluation, opening up a novel direction for research on anti-jamming performance evaluation of Link-16.
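The GRA correlation-degree step above uses the standard grey relational coefficient, where each alternative's deviation from a reference sequence is mapped through the distinguishing coefficient rho (conventionally 0.5) and averaged into a relational grade. A minimal sketch, assuming the indicator sequences are already normalized to a common scale:

```python
def grey_relational_grades(reference, alternatives, rho=0.5):
    """Grey relational grade of each alternative against a reference.

    reference    : ideal (reference) indicator sequence, pre-normalized
    alternatives : list of candidate indicator sequences, same length
    rho          : distinguishing coefficient (0.5 is the usual choice)
    """
    # Absolute deviations of each alternative from the reference at each indicator.
    deltas = [[abs(r - x) for r, x in zip(reference, alt)]
              for alt in alternatives]
    d_min = min(min(row) for row in deltas)  # global minimum deviation
    d_max = max(max(row) for row in deltas)  # global maximum deviation
    # Relational coefficient per indicator, averaged into one grade per alternative.
    return [
        sum((d_min + rho * d_max) / (d + rho * d_max) for d in row) / len(row)
        for row in deltas
    ]
```

An alternative identical to the reference scores a grade of 1.0; more distant alternatives score lower, so ranking grades directly selects the best anti-jamming technology under the chosen indicators.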
Funding: Supported by the National Natural Science Foundation of China (Grant No. 12002044).
Abstract: DNAN-based insensitive melt-cast explosives have been widely utilized in insensitive munitions in recent years. When constrained DNAN-based melt-cast explosives are ignited under thermal stimulation, the base explosive exists in a molten liquid state, in which high-temperature gases expand and react as bubble clouds within the liquid explosive; this process is distinctly different from the dynamic crack propagation observed in solid explosives. In this study, a control model for the reaction evolution of burning-bubble clouds was established to describe the reaction process and quantify the reaction violence of DNAN-based melt-cast explosives, considering the size distribution and activation mechanism of the burning-bubble clouds. The feasibility of the model was verified against experimental results. The results revealed that under geometrically similar conditions, with identical confinement strength and aspect ratio, larger charge structures led to extended initial gas flow and surface burning processes, resulting in greater reaction equivalence and violence at the casing fracture. Under constant charge volume and size, stronger casing confinement accelerated self-enhanced burning, increasing the internal pressure, reaction degree, and reaction violence. Under constant casing thickness and radius, higher aspect ratios led to greater reaction violence at the casing fracture. Moreover, under constant charge volume and casing thickness, higher aspect ratios resulted in higher internal pressure, an increased reaction degree, and greater reaction violence at the casing fracture. Further, larger ullage volumes extended the reaction evolution time and increased the reaction violence under constant casing dimensions. Through a matched design of the opening threshold of the pressure relief holes and the relief structure area, a stable burning reaction could be maintained until completion, thereby controlling the reaction violence. The proposed model effectively reflects the effects of the intrinsic burning rate, casing confinement strength, charge size, ullage volume, and pressure relief structure on the reaction evolution process and reaction violence, providing a theoretical method for the thermal safety design and reaction violence evaluation of melt-cast explosives.
Funding: This project was supported by the National Natural Science Foundation of China (60135020).
Abstract: The flexibility of traditional image processing systems is limited because those systems are designed for specific applications. In this paper, a new TMS320C64x-based multi-DSP parallel computing architecture is presented. It has many promising characteristics, such as powerful computing capability, broad I/O bandwidth, topology flexibility, and expansibility. The parallel system's performance is evaluated through practical experiments.