Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2022D01B187).
Abstract: Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years. One line of pFL work exploits the global and local information contained in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, leaving a significant gap relative to centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the combined use of contrastive learning and conditional computing. Contrastive learning measures feature representation similarity to adjust the local model. Conditional computing separates the global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
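The abstract does not spell out the loss, so the following is only a minimal sketch of the two ingredients it names: a contrastive term that pulls the local representation toward the global model's, and two heads that separately handle global and local information. The class `TwoHeadClient`, the function `contrastive_term`, and the 0.1 weighting are our own illustrative assumptions, not FedCLCC's actual design.

```python
# Illustrative sketch only; all names and constants are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadClient(nn.Module):
    """Backbone plus a global head and a personalized local head."""
    def __init__(self, in_dim=32, feat_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.global_head = nn.Linear(feat_dim, num_classes)  # shared/global information
        self.local_head = nn.Linear(feat_dim, num_classes)   # client-specific information

    def forward(self, x):
        z = self.backbone(x)
        return z, self.global_head(z) + self.local_head(z)

def contrastive_term(z_local, z_global, temperature=0.5):
    # Higher cosine similarity between local features and the (frozen)
    # global-model features lowers the loss, pulling the local model
    # back toward global knowledge and countering forgetting.
    sim = F.cosine_similarity(z_local, z_global.detach(), dim=-1)
    return -(sim / temperature).mean()

model = TwoHeadClient()
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
z, logits = model(x)
z_global = torch.randn_like(z)  # stand-in for features from the downloaded global model
loss = F.cross_entropy(logits, y) + 0.1 * contrastive_term(z, z_global)
loss.backward()
```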
Funding: This work was supported by the National Key Research and Development Program of China (2021YFB2900603) and the National Natural Science Foundation of China (61831008).
Abstract: A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the match between resources and requirements. A complex algorithm is impractical because LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single-paternal-inheritance method, is designed to support distributed computation and enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can produce an allocation result for more than 1,500 tasks in 14 s with a success rate above 91% in a typical scene, and the response time is decreased by 40% compared with the conventional GA.
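The paper's "two-dimensional individual" and "uncorrelated single paternal inheritance" suggest a chromosome laid out as a beam-by-timeslot grid whose rows are inherited whole from a single parent; the sketch below is only our schematic reading of that idea, with all sizes and names invented.

```python
# Schematic reading only; the paper's exact encoding may differ.
import random

def make_individual(n_beams, n_slots, n_tasks):
    # One individual: a beams-by-slots grid, each cell holding a task id.
    return [[random.randrange(n_tasks) for _ in range(n_slots)]
            for _ in range(n_beams)]

def crossover_single_parent_rows(p1, p2):
    # Each row (one beam's schedule) is inherited intact from exactly one
    # parent, so rows stay internally consistent and can be evaluated
    # independently by different on-board devices.
    return [random.choice((r1, r2))[:] for r1, r2 in zip(p1, p2)]

parents = make_individual(4, 6, 1500), make_individual(4, 6, 1500)
child = crossover_single_parent_rows(*parents)
print(len(child), len(child[0]))  # 4 beams x 6 slots
```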
Abstract: [Objective] Real-time monitoring of cow ruminant behavior is of paramount importance for promptly obtaining information about cow health and predicting cow diseases. Various strategies have been proposed for monitoring cow ruminant behavior, including video surveillance, sound recognition, and sensor monitoring, but these approaches show inadequate real-time performance when deployed on edge devices. To reduce the volume of data transmission and the cloud computing workload while achieving real-time monitoring of dairy cow rumination behavior, a real-time monitoring method for cow ruminant behavior based on edge computing was proposed. [Methods] Autonomously designed edge devices were used to collect and process six-axis acceleration signals from cows in real time. Based on these six-axis data, two distinct strategies, federated edge intelligence and split edge intelligence, were investigated for real-time recognition of cow ruminant behavior. For the federated edge intelligence approach, the CA-MobileNet v3 network was proposed by enhancing the MobileNet v3 network with a collaborative attention mechanism, and a federated edge intelligence model was designed using the CA-MobileNet v3 network and the FedAvg federated aggregation algorithm. For split edge intelligence, a model named MobileNet-LSTM was designed by integrating the MobileNet v3 network, a fusion collaborative attention mechanism, and the Bi-LSTM network. [Results and Discussions] In comparative experiments with MobileNet v3 and MobileNet-LSTM, the federated edge intelligence model based on CA-MobileNet v3 achieved the best recognition performance, with an average precision, recall, F1-score, specificity, and accuracy of 97.1%, 97.9%, 97.5%, 98.3%, and 98.2%, respectively. [Conclusions] This work provides a real-time and effective method for monitoring cow ruminant behavior, and the proposed federated edge intelligence model can be applied in practical settings.
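FedAvg itself is standard, so the aggregation step used by the federated edge intelligence model can be sketched compactly; the dictionary-of-arrays weight format and names below are illustrative only.

```python
# Standard FedAvg aggregation; data layout here is an assumption.
from typing import Dict, List
import numpy as np

def fedavg(client_weights: List[Dict[str, np.ndarray]],
           client_sizes: List[int]) -> Dict[str, np.ndarray]:
    # Weighted average of each parameter tensor; weight = local sample share.
    total = float(sum(client_sizes))
    agg = {k: np.zeros_like(v, dtype=np.float64)
           for k, v in client_weights[0].items()}
    for weights, n in zip(client_weights, client_sizes):
        for k, v in weights.items():
            agg[k] += (n / total) * v
    return agg

w1 = {"conv.w": np.ones((2, 2))}
w2 = {"conv.w": 3 * np.ones((2, 2))}
print(fedavg([w1, w2], [100, 300])["conv.w"])  # -> 2.5 everywhere
```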
Funding: Supported by the Fundamental Research Funds for the Central Universities (WK2150110022), the Anhui Provincial Natural Science Foundation (2208085QF189), and the National Natural Science Foundation of China (62202440).
Abstract: An in-memory storage system provides submillisecond latency and improves the concurrency of user applications by caching data into memory from external storage. Fault tolerance of in-memory storage systems is essential, as the loss of cached data requires access to data from external storage, which evidently increases the response latency. Typically, replication and erasure code (EC) are two fault-tolerant schemes that pose different trade-offs between access performance and storage usage. To help make the best performance and space trade-off, we design ElasticMem, a hybrid fault-tolerant distributed in-memory storage system that supports elastic redundancy transition to dynamically change the fault-tolerant scheme. ElasticMem exploits a novel EC-oriented replication (EOR) that carefully plans the data placement of replication according to the future data layout of EC to enhance the I/O efficiency of redundancy transition. ElasticMem solves the consistency problem caused by concurrent data accesses via a lightweight table-based scheme combined with data bypassing: it detects correlated read and write requests and serves subsequent read requests with local data. We implement a prototype of ElasticMem based on Memcached. Experiments show that ElasticMem remarkably reduces the time of redundancy transition, the overall latency of correlated concurrent data accesses, and the latency of single data accesses among them.
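As a toy illustration of our reading of EOR (not ElasticMem's code): the primary replicas of a stripe's k data blocks are placed on the nodes that will own those blocks after the switch to EC, so the transition only has to compute and write parity rather than reshuffle data.

```python
# Toy placement sketch; node naming and stripe layout are assumptions.
def eor_placement(stripe_blocks, nodes, k):
    # Primary replica of block i goes to the node that will own that block
    # once the stripe is erasure-coded, so the replication -> EC transition
    # only computes and writes the m parity blocks.
    return {blk: nodes[i % k] for i, blk in enumerate(stripe_blocks)}

nodes = ["n0", "n1", "n2", "n3", "n4"]
print(eor_placement(["b0", "b1", "b2"], nodes, k=3))
# {'b0': 'n0', 'b1': 'n1', 'b2': 'n2'}; parity of a (3, 2) code would go to n3, n4
```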
Funding: This project was supported by the National Natural Science Foundation of China (60135020).
Abstract: The flexibility of traditional image processing systems is limited because such systems are designed for specific applications. In this paper, a new TMS320C64x-based multi-DSP parallel computing architecture is presented. It has many promising characteristics, such as powerful computing capability, broad I/O bandwidth, topology flexibility, and expansibility. The parallel system performance is evaluated through practical experiments.
Abstract: This paper presents a kind of artificial intelligence system, the generalized computing system (GCS), and introduces its mathematical description, implementation problem, and learning problem.
Funding: Projects (61202004, 61272084) supported by the National Natural Science Foundation of China; Projects (2011M500095, 2012T50514) supported by the China Postdoctoral Science Foundation; Projects (BK2011754, BK2009426) supported by the Natural Science Foundation of Jiangsu Province, China; Project (12KJB520007) supported by the Natural Science Fund of Higher Education of Jiangsu Province, China; Project (yx002001) supported by the Priority Academic Program Development of Jiangsu Higher Education Institutions, China.
Abstract: Private and public clouds are converging into an open, integrated cloud computing environment, which can fully aggregate and utilize the computing, storage, information, and other hardware and software resources of WAN and LAN networks, but also brings a series of security, reliability, and credibility problems. To solve these problems, a novel secure-agent-based trustworthy virtual private cloud model named SATVPC was proposed for the integrated and open cloud computing environment. Through the introduction of secure-agent technology, SATVPC provides an independent, safe, and trustworthy virtual private computing platform for multi-tenant systems. To meet SATVPC's credibility needs and govern the trust relationship between each task execution agent and task executor node in accordance with their security policies, a new dynamic composite credibility evaluation mechanism was presented, including a credit index computing algorithm and a credibility differentiation strategy. The experimental system shows that SATVPC and the credibility evaluation mechanism can feasibly ensure the security of open computing environments. Experimental results and performance analysis also show that the credit index computing algorithm can evaluate the credibility of task execution agents and task executor nodes quantitatively, correctly, and operationally.
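The abstract gives no formula for the credit index, so the snippet below only illustrates the general shape such a mechanism could take: an exponentially weighted history of interaction ratings plus a cutoff that differentiates trustworthy agents from untrusted ones. Every constant here is invented.

```python
# Shape-only illustration; not SATVPC's actual credit index formula.
def update_credit(old_credit, rating, decay=0.8):
    # Exponentially weighted history: old behavior fades, recent ratings dominate.
    return decay * old_credit + (1 - decay) * rating  # rating in [0, 1]

credit = 0.5  # neutral starting credit for a new agent
for rating in (1.0, 0.9, 0.2):  # two good interactions, then a bad one
    credit = update_credit(credit, rating)
print(round(credit, 3), "trusted" if credit >= 0.6 else "untrusted")
```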
Funding: Supported by the National Natural Science Foundation of China (61202354, 61272422) and the Scientific and Technological Support Project (Industry) of Jiangsu Province (BE2011189).
Abstract: Cloud computing represents a novel computing model in the contemporary technology world. In a cloud system, the computing power of virtual machines (VMs) and the network status can greatly affect the completion time of data-intensive tasks. However, most current resource allocation policies focus only on network conditions and physical hosts, and the computing power of VMs is largely ignored. This paper proposes a comprehensive resource allocation policy which consists of a data-intensive task scheduling algorithm that takes account of the computing power of VMs and a VM allocation policy that considers the bandwidth between storage nodes and hosts. The VM allocation policy includes VM placement and VM migration algorithms. Simulations show that the proposed algorithms can greatly reduce task completion time while maintaining good load balance across physical hosts.
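A minimal sketch of the scheduling idea, under our own assumptions about units and names: a task's candidate VMs are ranked by estimated completion time, combining the VM's computing power (MIPS) with the storage-node-to-host bandwidth.

```python
# Illustrative ranking only; units, figures, and names are assumptions.
def est_completion_s(task_mi, data_mb, vm_mips, bw_mbps):
    # Transfer time (storage node -> host) plus compute time on the VM.
    return data_mb * 8 / bw_mbps + task_mi / vm_mips

def pick_vm(task, vms):
    return min(vms, key=lambda v: est_completion_s(
        task["mi"], task["data_mb"], v["mips"], v["bw_mbps"]))

vms = [{"id": "vm1", "mips": 1000.0, "bw_mbps": 1000.0},
       {"id": "vm2", "mips": 2500.0, "bw_mbps": 100.0}]
task = {"mi": 50000.0, "data_mb": 800.0}
print(pick_vm(task, vms)["id"])  # faster network beats faster CPU here
```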
Funding: Supported by the National Natural Science Foundation of China (60871069).
Abstract: Angular glint in the near field plays an important role in radar tracking errors. To predict it more efficiently for electrically large targets, a new method based on graphical electromagnetic computing (GRECO) is proposed. With the benefit of the graphics card, the GRECO prediction method is faster and more accurate than other methods. For the first time, the proposed method considers the special case in which targets cannot be completely covered by radar beams, which makes the prediction of radar tracking errors more self-contained in practical circumstances. Moreover, the scattering center extraction process is omitted, making angular glint prediction possible in real time. Comparisons between the simulation results and theoretical ones validate the method's correctness and its value for academic research and engineering applications.
Funding: Project (41401434) supported by the National Natural Science Foundation of China.
Abstract: Cloud computing has emerged as a leading computing paradigm, with an increasing number of geographic information (geo-information) processing tasks now running on clouds. For this reason, geographic information system/remote sensing (GIS/RS) researchers rent more public clouds or establish more private clouds. However, a large proportion of these clouds are found to be underutilized, since users do not deal with big data every day. The low usage of cloud resources violates the original intention of cloud computing, which is to save resources by improving usage. In this work, a low-cost cloud computing solution is proposed for geo-information processing, especially for temporary processing tasks. The proposed solution adopts a hosted architecture and can be realized with the ordinary computers of a common GIS/RS laboratory. Its usefulness and effectiveness are demonstrated using big data simplification as a case study. Compared with commercial public clouds and dedicated private clouds, the proposed solution is cheaper and more resource-efficient, and it is more suitable for GIS/RS applications.
基金the "863" High Technology Research and Development Program of China (2006AA01Z226)the Scientific Research Foundation of Huazhong University of Science and Technology (2006Z011B)the Program for New Century Excellent Talents in University (NCET-07-0328).
Abstract: Ubiquitous computing must incorporate a certain level of security. For severely resource-constrained applications, energy-efficient and small-size implementation of cryptographic algorithms is a critical problem. Hardware implementations of the Advanced Encryption Standard (AES) for authentication and encryption are presented. An energy consumption variable is derived to evaluate low-power design strategies for battery-powered devices. The analysis proves that compact AES architectures fail to optimize AES hardware energy, whereas reducing invalid switching activity and implementing power-optimized sub-modules are the more reasonable methods. Implementations of different substitution box (S-Box) structures are presented with a 0.25 μm 1.8 V CMOS (complementary metal oxide semiconductor) standard cell library, and the comparisons and trade-offs among area, security, and power are explored. The experimental results show that Galois-field composite S-Boxes have smaller size and the highest security but consume considerably more power, whereas decoder-switch-encoder S-Boxes have the best power characteristics with disadvantages in terms of size and security. Combining these two types of S-Boxes in an AES circuit, instead of using homogeneous S-Boxes, leads to optimal schemes. The technique of latch-dividing the data path is analyzed, and quantitative simulation results demonstrate that this approach diminishes glitches effectively at a very low hardware cost.
Abstract: In the age of online workload explosion, cloud users are increasing exponentially, so large-scale data centers are required in the cloud environment, which leads to high energy consumption. Optimal resource utilization is therefore essential to improve the energy efficiency of cloud data centers. Most of the existing literature focuses on virtual machine (VM) consolidation, which increases energy efficiency at the cost of service level agreement (SLA) degradation. To improve on existing approaches, a load-aware three-gear THReshold (LATHR) scheme combined with a modified best fit decreasing (MBFD) algorithm is proposed for minimizing total energy consumption while improving quality of service in terms of SLA. It offers promising results under dynamic workloads and a variable number of VMs (1-290) allocated to an individual host. The outcomes of the proposed work are measured in terms of SLA, energy consumption, instruction energy ratio (IER), and the number of migrations against varied numbers of VMs. Experimental results show that the proposed technique reduces SLA violations (by 55%, 26%, and 39%) and energy consumption (by 17%, 12%, and 6%) compared with the median absolute deviation (MAD), interquartile range (IQR), and double threshold (THR) overload detection policies, respectively.
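A schematic of the "three-gear" threshold idea, with invented threshold values: three load cutoffs yield four host states, and only the top state marks a host as an overload source (the MBFD placement step is omitted here).

```python
# Schematic only; LATHR's real thresholds are load-aware, not fixed.
def gear(cpu_util, low=0.3, mid=0.7, high=0.9):
    # Three thresholds ("gears") split hosts into four states; only the
    # top state marks the host as an overloaded migration source.
    if cpu_util < low:
        return "underloaded"   # candidate for consolidation / switch-off
    if cpu_util < mid:
        return "normal"
    if cpu_util < high:
        return "busy"
    return "overloaded"        # triggers VM migration

for u in (0.1, 0.5, 0.8, 0.95):
    print(u, gear(u))
```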
Funding: Project (61170049) supported by the National Natural Science Foundation of China; Project (2012AA010903) supported by the National High Technology Research and Development Program of China.
Abstract: Peta-scale high-performance computing systems are increasingly built with heterogeneous CPU and GPU nodes to achieve higher power efficiency and computation throughput. While providing unprecedented capabilities to conduct computational experiments of historic significance, these systems are presently difficult to program. The users, who are domain experts rather than computer experts, prefer programming models closer to their domains (e.g., physics and biology) rather than MPI and OpenMP. This has led to the development of domain-specific programming models that provide domain-specific programming interfaces but abstract away some performance-critical architecture details. Based on experience in designing large-scale computing systems, a hybrid programming framework for scientific computing on heterogeneous architectures is proposed in this work. Its design philosophy is to provide a collaborative mechanism for domain experts and computer experts so that both domain-specific knowledge and performance-critical architecture details can be adequately exploited. Two real-world scientific applications have been evaluated on TH-1A, a peta-scale CPU-GPU heterogeneous system that is currently the 5th fastest supercomputer in the world. The experimental results show that the proposed framework is well suited for developing large-scale scientific computing applications on peta-scale heterogeneous CPU/GPU systems.
Funding: Supported by the National Natural Science Foundation of China (61472192, 61202004), the Special Fund for Fast Sharing of Science Paper in Net Era by CSTD (2013116), and the Natural Science Fund of Higher Education of Jiangsu Province (14KJB520014).
Abstract: In order to lower power consumption and improve resource utilization in current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on a "shut down the redundant, turn on the demanded" strategy. Firstly, a green cloud computing model is presented, abstracting the task scheduling problem into a virtual machine deployment issue via virtualization technology. Secondly, the future workload of the system must be predicted: a cubic exponential smoothing algorithm based on a conservative control (CESCC) strategy is proposed, which combines the current state and resource distribution of the system to calculate the resource demand of the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To reduce power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy make resource pre-allocation keep up with demand and improve the efficiency of real-time response and the stability of the system. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize resource utilization, and greatly reduce the power consumption of cloud computing systems.
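CESCC's conservative-control part (how the smoothing is tuned from system state) is not given in the abstract, but the underlying predictor, third-order (cubic) exponential smoothing in Brown's standard form, can be sketched:

```python
# Standard Brown third-order smoothing; alpha and the series are invented.
def cubic_smoothing_forecast(series, alpha=0.5, m=1):
    s1 = s2 = s3 = series[0]
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1    # first-order smoothing
        s2 = alpha * s1 + (1 - alpha) * s2   # second-order
        s3 = alpha * s2 + (1 - alpha) * s3   # third-order
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)
    return a + b * m + 0.5 * c * m * m       # workload predicted m steps ahead

print(cubic_smoothing_forecast([10, 12, 15, 19, 24, 30], m=1))
```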
Funding: Projects (61203020, 61403190) supported by the National Natural Science Foundation of China; Project (BK20141461) supported by the Jiangsu Province Natural Science Foundation, China.
Abstract: In order to solve non-linear and high-dimensional optimization problems more effectively, an improved self-adaptive membrane computing (ISMC) optimization algorithm is proposed. The ISMC algorithm applies improved self-adaptive crossover and mutation formulae that provide appropriate crossover and mutation operators based on the objective function values of individuals and the number of iterations. The performance of ISMC was tested on benchmark functions. The simulation results for parameter estimation of a residue hydrogenating kinetics model show that the proposed method is superior to traditional intelligent algorithms in terms of convergence accuracy and stability when solving complex parameter optimization problems.
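The paper's exact self-adaptive formulae are not reproduced here; the sketch below only shows the general pattern such formulae follow, with a mutation probability that grows for poorly fit individuals and anneals with the iteration count. All constants are illustrative.

```python
# General pattern only; not ISMC's actual formulae (assumes minimization).
def adaptive_mutation_prob(fitness, best, worst, iteration, max_iter,
                           p_min=0.01, p_max=0.30):
    # Poorly fit individuals mutate more (exploration), and the overall
    # rate anneals toward p_min as iterations progress (exploitation).
    badness = (fitness - best) / (worst - best + 1e-12)  # 0 = best, 1 = worst
    decay = 1.0 - iteration / max_iter
    return p_min + (p_max - p_min) * badness * decay

print(adaptive_mutation_prob(fitness=5.0, best=1.0, worst=9.0,
                             iteration=20, max_iter=100))
```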
Funding: Project (61272148) supported by the National Natural Science Foundation of China; Project (20120162110061) supported by the Doctoral Programs of the Ministry of Education of China; Project (CX2014B066) supported by the Hunan Provincial Innovation Foundation for Postgraduates, China; Project (2014zzts044) supported by the Fundamental Research Funds for the Central Universities, China.
Abstract: In order to improve the energy efficiency of large-scale data centers, a virtual machine (VM) deployment algorithm called the three-threshold energy saving algorithm (TESA), which is based on the linear relation between energy consumption and (processor) resource utilization, is proposed. In TESA, hosts in data centers are divided into four classes according to load: hosts with light load, hosts with proper load, hosts with middle load, and hosts with heavy load. Under TESA, VMs on lightly loaded or heavily loaded hosts are migrated to other hosts with proper load, while VMs on properly loaded or middle-loaded hosts are left in place. Then, based on TESA, five VM selection policies (minimization of migrations policy based on TESA (MIMT), maximization of migrations policy based on TESA (MAMT), highest potential growth policy based on TESA (HPGT), lowest potential growth policy based on TESA (LPGT), and random choice policy based on TESA (RCT)) are presented, and MIMT is chosen as the representative policy through experimental comparison. Finally, five research directions for future energy management are put forward. The simulation results indicate that, compared with the single threshold (ST) algorithm and the minimization of migrations (MM) algorithm, MIMT significantly improves the energy efficiency of data centers.
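A toy rendering of the TESA classification and the MIMT-style selection it motivates, with invented thresholds: hosts fall into four classes, and an overloaded host migrates as few VMs as possible (largest first) until its load is proper again.

```python
# Toy version; threshold values and the greedy choice are assumptions.
def classify_host(util, light=0.2, proper=0.5, middle=0.8):
    # Three thresholds -> four load classes.
    if util < light:
        return "light"    # VMs migrated away so the host can sleep
    if util < proper:
        return "proper"   # VMs stay put
    if util < middle:
        return "middle"   # VMs stay put
    return "heavy"        # VMs migrated away until load is proper

def mimt_select(host_util, vm_utils, upper=0.8):
    # Migrate as few VMs as possible: take the largest first until the
    # host drops below the heavy-load threshold.
    selected = []
    for u in sorted(vm_utils, reverse=True):
        if host_util < upper:
            break
        selected.append(u)
        host_util -= u
    return selected

print(classify_host(0.92), mimt_select(0.92, [0.05, 0.25, 0.10]))
```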
Funding: This project was supported by the National Natural Science Foundation (No. 69831020).
Abstract: A scheme for general-purpose FDTD visual scientific computing software is introduced in this paper using the object-oriented design (OOD) method. By abstracting the parameters of FDTD grids into an individual class and separating them from the iteration procedure, the visual software can be adapted to more comprehensive computing problems. Real-time gray-scale graphics and wave curves of the results can be achieved using DirectX techniques. The special difference equations and data structures in dispersive media are considered, and the peculiarities of the parameters in the perfectly matched layer are also discussed.
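A bare-bones illustration of the OOD split the paper describes, not its software: grid parameters live in their own class, and the update iteration only sees that object, so different problems can swap in without touching the loop. This is a standard normalized 1-D FDTD update.

```python
# Standard 1-D FDTD in normalized units; class layout is illustrative.
import numpy as np

class Grid1D:
    # Grid parameters live in their own class, separate from the iteration.
    def __init__(self, n=200, courant=0.5):
        self.ez = np.zeros(n)   # electric field
        self.hy = np.zeros(n)   # magnetic field
        self.c = courant        # Courant number c*dt/dx

def step(g: Grid1D):
    # One leapfrog update; the loop never touches grid internals directly.
    g.hy[:-1] += g.c * (g.ez[1:] - g.ez[:-1])
    g.ez[1:] += g.c * (g.hy[1:] - g.hy[:-1])

g = Grid1D()
for t in range(120):
    g.ez[100] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    step(g)
print(round(float(g.ez.max()), 3))
```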
Abstract: A novel approach to computing the high-frequency radar cross-section (RCS) of complex targets is described in this paper. From the three views or the sectional views of the target, the target is geometrically modeled by non-uniform rational B-spline (NURBS) parametric surfaces using CNFEOV, software developed by the authors that constructs NURBS representations of complex targets from engineering orthographic views. The RCS is obtained through PO, PTD, MEC, and IBC techniques. When calculating the RCS of the target, it is necessary to obtain the unit normal vector of the surface illuminated by the radar and the value Z, which is the distance from a point on the surface to the radar. In this novel approach, the unit normal vector of the surface can be obtained either from the Phong rendering model, in which the color components (RGB) of every pixel in the image equal the coordinate components of the normal, or from the NURBS expressions. The value Z can be obtained by a software or hardware Z-buffer. The effects of image size on the computed RCS of the target are discussed and a correct choice is recommended. As examples, the RCS of a perfectly conducting sphere, cylinder, and dihedral, as well as a coated cylinder, are computed. The accuracy of the method is verified by comparing the numerical results with those obtained by other methods.
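A small sketch of the rendering trick described above: if each pixel's (R, G, B) stores the facet normal's (x, y, z), the per-pixel unit normal needed by the PO/PTD integrals can be read straight back from the frame buffer. The [0, 255] to [-1, 1] mapping below is an assumed (common) convention, not necessarily the paper's.

```python
# Decoding sketch only; the encoding convention is our assumption.
import numpy as np

def normals_from_rgb(image_u8):
    # Undo the assumed [0, 255] -> [-1, 1] encoding and renormalize.
    n = image_u8.astype(np.float32) / 255.0 * 2.0 - 1.0
    length = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(length, 1e-6, None)

pixels = np.array([[[255, 128, 128]]], dtype=np.uint8)  # a facet facing +x
print(normals_from_rgb(pixels))
```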
Funding: This project was supported by the National Natural Science Foundation (60274026, 30570431), the China Postdoctoral Science Foundation, the Natural Science Foundation of the Educational Government of Anhui Province of China, the Excellent Youth Science and Technology Foundation of Anhui Province of China (06042088), and the Doctoral Foundation of Anhui University of Science and Technology.
Abstract: To solve the job shop scheduling problem, a new approach, DNA computing, is applied. The approach is divided into three stages, and optimum solutions are finally obtained by sequencing. A small job shop scheduling problem is solved with DNA computing, and the "operations" of the computation were performed with standard protocols such as ligation, synthesis, and electrophoresis. This work represents further evidence of the ability of DNA computing to solve NP-complete search problems.