Recently, high-precision trajectory prediction of ballistic missiles in the boost phase has become a research hotspot. This paper proposes a trajectory prediction algorithm driven by data and knowledge (DKTP) to solve this problem. Firstly, the complex dynamic characteristics of ballistic missiles in the boost phase are analyzed in detail. Secondly, combining the missile dynamics model with the target gravity-turning model, a knowledge-driven target three-dimensional turning (T3) model is derived. Then, a BP neural network is trained on a boost-phase trajectory database of typical scenarios to obtain a data-driven state parameter mapping (SPM) model. On this basis, an online trajectory prediction framework driven by data and knowledge is established: the SPM model predicts the target's three-dimensional turning coefficients from its current state, and the T3 model then propagates the target state to the next moment. Finally, simulation verification is carried out under various conditions. The results show that the DKTP algorithm combines the advantages of data-driven and knowledge-driven approaches, improves interpretability, reduces uncertainty, and achieves high-precision trajectory prediction of ballistic missiles in the boost phase.
The Circle algorithm was proposed for large datasets. The idea of the algorithm is to find a set of vertices that are close to each other and far from other vertices. The algorithm exploits the connection between clustering aggregation and the correlation clustering problem. A best deterministic approximation algorithm was provided for a variation of the correlation clustering problem, and it was shown how sampling can be used to scale the algorithm to large datasets. An extensive empirical evaluation demonstrates the usefulness of the problem and the solutions. The results show that this method achieves more than a 50% reduction in running time without sacrificing clustering quality.
A specialized Hungarian algorithm was developed for the maximum likelihood data association problem, with two implementation versions to handle the presence of false alarms and missed detections. The maximum likelihood data association problem is formulated as a bipartite weighted matching problem; its duality and optimality conditions are given. The Hungarian algorithm is presented with its computational steps, data structure, and computational complexity. The two implementation versions, the Hungarian forest (HF) algorithm and the Hungarian tree (HT) algorithm, and their combination with the naïve auction initialization are discussed. Computational results show that the HT algorithm is slightly faster than the HF algorithm, and both are superior to the classic Munkres algorithm.
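As an illustration of the bipartite weighted matching formulation above, a brute-force sketch in Python (not the HF/HT implementations; the O(n!) search is only viable for tiny cost matrices, and the cost values here are invented):

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustively find the min-cost one-to-one assignment of
    measurements (columns) to tracks (rows). Illustrative only:
    O(n!) search, practical just for tiny matrices."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost

# 3 tracks x 3 measurements: hypothetical negative log-likelihood costs
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]
perm, total = best_assignment(cost)
```

A real Hungarian implementation replaces the exhaustive search with the O(n³) augmenting-path procedure, but the optimality criterion is the same.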
Aiming at a three-passive-sensor location system, a generalized 3-dimensional (3-D) assignment model is constructed based on property information, and a multi-target programming model is proposed based on direction-finding and property fusion information. The multi-target programming model is transformed into a single-target programming problem for solution, and its data association result is compared with results obtained using each kind of information alone. Simulation experiments show the effectiveness of the multi-target programming algorithm, with higher data association accuracy and less computation.
By improving traditional ant colony algorithms, a data routing model for remote data exchange on WANs is presented. In the model, random heuristic factors are introduced to realize multi-path search. The pheromone updating model dynamically adjusts the pheromone concentration on the optimal path according to path load, keeping the system load balanced. Simulation results show that the improved model achieves higher performance in convergence and load balance.
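The load-aware pheromone update described above can be sketched as follows; the evaporation rate, deposit constant, and the exact inverse-load rule are assumptions, not the paper's model:

```python
def update_pheromone(tau, load, rho=0.1, q=1.0):
    """Evaporate pheromone and deposit an amount inversely
    related to each path's current load, so heavily loaded
    paths become less attractive and traffic spreads out.
    rho: evaporation rate, q: deposit constant (both assumed)."""
    return [(1 - rho) * t + q / (1 + l) for t, l in zip(tau, load)]

tau = [1.0, 1.0, 1.0]   # pheromone on three candidate paths
load = [0.0, 4.0, 9.0]  # relative load on each path
tau = update_pheromone(tau, load)
```

After one update the idle path carries the most pheromone, which is the load-balancing behavior the model aims for.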
To meet the requirements of multi-sensor data fusion in diagnosis for complex equipment systems, a novel fuzzy similarity-based data fusion algorithm is given. Based on fuzzy set theory, it calculates the fuzzy similarity between a sensor's measurement values and the multiple sensors' objective prediction values to determine the importance weight of each sensor, and thus realizes multi-sensor diagnosis parameter data fusion. Application software based on this principle is also designed. An applied example proves that the algorithm gives priority to high-stability, high-reliability sensors and is concise, feasible, and efficient for real-time condition measurement and data processing in engine diagnosis.
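The fuzzy-similarity weighting idea can be sketched as below; the Gaussian membership function and the weighted-average fusion rule are illustrative assumptions rather than the paper's formulas:

```python
import math

def fuse(measurements, prediction, sigma=1.0):
    """Weight each sensor by a fuzzy (Gaussian) similarity between
    its reading and the consensus prediction, then fuse as a
    weighted average. sigma (assumed) controls how sharply
    outlying sensors are down-weighted."""
    sims = [math.exp(-((m - prediction) ** 2) / (2 * sigma ** 2))
            for m in measurements]
    total = sum(sims)
    weights = [s / total for s in sims]
    fused = sum(w * m for w, m in zip(weights, measurements))
    return fused, weights

# two consistent sensors and one outlier; the outlier gets tiny weight
fused, w = fuse([10.1, 9.9, 14.0], prediction=10.0)
```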
Under the scenario of dense targets in clutter, a multi-layer optimal data correlation algorithm is proposed. The algorithm eliminates a large number of false location points from the assignment process by rough correlation before the correlation cost is calculated, so it avoids target state estimation and correlation cost calculation for the false correlation sets. Meanwhile, with these points eliminated in the rough correlation, the disturbance from false correlations in the assignment process is decreased, so the data correlation accuracy improves correspondingly. Complexity analyses of the new multi-layer optimal algorithm and the traditional optimal assignment algorithm are given. Simulation results show that the new algorithm is feasible and effective.
The Internet is now a large-scale platform with big data. Finding truth in a huge dataset has attracted extensive attention, since it can maintain the quality of data collected from users and provide users with accurate and efficient data. However, current truth finder algorithms are unsatisfactory because of their low accuracy and high complexity. This paper proposes a truth finder algorithm based on entity attributes (TFAEA). Building on the iterative computation of source reliability and fact accuracy, TFAEA considers the degree of interaction among facts and the degree of dependence among sources to simplify the typical truth finder algorithms. To improve accuracy, TFAEA combines one-way text similarity and factual conflict to calculate the mutual support degree among facts. Furthermore, TFAEA utilizes the symmetric saturation of data sources to calculate the degree of dependence among sources. Experimental results show that TFAEA is not only more stable but also more accurate than typical truth finder algorithms.
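The underlying source-reliability/fact-accuracy iteration that TFAEA extends can be sketched as follows (the claims data, normalization, and update rules are simplified assumptions; TFAEA's support and dependence terms are omitted):

```python
def truth_find(claims, iters=20):
    """claims[s] is the set of facts asserted by source s.
    Alternate between scoring facts by the reliability of the
    sources asserting them, and scoring sources by the average
    confidence of their facts."""
    sources = list(claims)
    facts = {f for fs in claims.values() for f in fs}
    rel = {s: 0.5 for s in sources}  # initial source reliability
    for _ in range(iters):
        conf = {f: sum(rel[s] for s in sources if f in claims[s])
                for f in facts}
        m = max(conf.values())
        conf = {f: c / m for f, c in conf.items()}  # normalize to [0, 1]
        rel = {s: sum(conf[f] for f in claims[s]) / len(claims[s])
               for s in sources}
    return conf, rel

# fact "a" is asserted by all three sources, so it should win
claims = {"s1": {"a", "b"}, "s2": {"a", "c"}, "s3": {"a"}}
conf, rel = truth_find(claims)
```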
The self-potential method is widely used in environmental and engineering geophysics. Four intelligent optimization algorithms are adopted to design the inversion and interpret self-potential data more accurately and efficiently: simulated annealing, genetic, particle swarm optimization, and ant colony optimization. Using both noise-free and noise-added synthetic data, it is demonstrated that all four intelligent algorithms can perform self-potential data inversion effectively. During the numerical experiments, the model distribution in search space, the relative errors of model parameters, and the elapsed time are recorded to evaluate the performance of the inversion. The results indicate that all the intelligent algorithms have good precision and tolerance to noise. Particle swarm optimization converges fastest because of its well-balanced searching capability between global and local minimization.
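As a sketch of the particle swarm optimization loop credited above with the fastest convergence, a minimal PSO minimizing a 2-D quadratic misfit; the coefficients are common textbook defaults, not the paper's settings:

```python
import random

def pso(f, dim=2, n=20, iters=100, seed=0):
    """Minimal particle swarm: each particle is pulled toward its
    personal best and the swarm's global best. w, c1, c2 are the
    usual default inertia and acceleration coefficients."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [x[:] for x in xs]          # personal best positions
    pbf = [f(x) for x in xs]
    g = pb[min(range(n), key=lambda i: pbf[i])][:]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pb[i][d] - xs[i][d])
                            + c2 * rng.random() * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbf[i]:
                pb[i], pbf[i] = xs[i][:], fx
                if fx < f(g):
                    g = xs[i][:]
    return g

# stand-in misfit function with known minimum at (1, -2)
misfit = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best = pso(misfit)
```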
A system model is formulated as the maximization of a total utility function to achieve fair downlink data scheduling in multiuser orthogonal frequency division multiplexing (OFDM) wireless networks. A dynamic subcarrier allocation algorithm (DSAA) is proposed to optimize the system model. The subcarrier allocation decision is made by the proposed DSAA according to the maximum value of the total utility function with respect to the queue mean waiting time. Simulation results demonstrate that, compared to conventional algorithms, the proposed algorithm has better delay performance and can provide fairness under different loads by using different utility functions.
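The utility-maximizing subcarrier decision can be sketched as below; the particular utility (channel gain scaled by queue waiting time) is an invented stand-in for the paper's utility functions:

```python
def allocate(gains, wait):
    """gains[k][u]: channel gain of subcarrier k for user u;
    wait[u]: mean queue waiting time of user u. Each subcarrier
    goes to the user maximizing an illustrative utility that
    favors good channels and long-waiting queues."""
    alloc = []
    for g in gains:
        utilities = [g[u] * (1.0 + wait[u]) for u in range(len(wait))]
        alloc.append(max(range(len(wait)), key=lambda u: utilities[u]))
    return alloc

gains = [[0.9, 0.3], [0.2, 0.8], [0.5, 0.5]]
wait = [1.0, 3.0]  # user 1 has waited longer
alloc = allocate(gains, wait)
```

On the equal-gain subcarrier the long-waiting user wins, which is the fairness effect a waiting-time-dependent utility produces.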
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced, with the data nodes as its leaf nodes, and the final winner is selected with the purpose of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
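The winner-tree selection over leaf data nodes can be sketched as a pairwise tournament; the power values and the lower-power comparison key are assumptions:

```python
def winner_tree_select(powers):
    """Pairwise tournament over leaf nodes: in each round adjacent
    nodes compete and the one with lower projected power
    consumption advances; the final winner hosts the task.
    Returns the index of the winning leaf."""
    nodes = list(range(len(powers)))
    while len(nodes) > 1:
        nxt = []
        for i in range(0, len(nodes) - 1, 2):
            a, b = nodes[i], nodes[i + 1]
            nxt.append(a if powers[a] <= powers[b] else b)
        if len(nodes) % 2:  # odd node out gets a bye
            nxt.append(nodes[-1])
        nodes = nxt
    return nodes[0]

# hypothetical projected power draw per data node
winner = winner_tree_select([5.2, 3.1, 4.8, 2.7, 6.0])
```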
Cloud data centers consume large amounts of power, leading to the problem of high energy consumption. To solve this problem, an energy-efficient virtual machine (VM) consolidation algorithm named PVDE (prediction-based VM deployment algorithm for energy efficiency) is presented. The proposed algorithm uses a linear weighted method to predict the load of a host and, based on the predicted load, classifies the hosts in the data center into four classes for the purpose of VM migration. Four types of VM selection algorithms are also proposed to determine which VMs should be migrated. Extensive performance analysis of the proposed algorithms was carried out. Experimental results show that, in contrast to other energy-saving algorithms, the proposed algorithm significantly reduces energy consumption while maintaining low service level agreement (SLA) violations.
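The linear weighted load prediction and four-class host classification can be sketched as follows; the weights, thresholds, and class actions are illustrative assumptions, not PVDE's tuned values:

```python
def predict_load(history, weights=(0.5, 0.3, 0.2)):
    """Linear weighted prediction: the newest sample gets the
    largest weight (history is ordered newest first)."""
    return sum(w * h for w, h in zip(weights, history))

def classify(load, low=0.2, mid=0.5, high=0.8):
    """Four host classes driving VM migration decisions."""
    if load < low:
        return "under-loaded"    # candidate: migrate all VMs away
    if load < mid:
        return "lightly-loaded"  # can accept migrated VMs
    if load < high:
        return "normally-loaded" # leave alone
    return "over-loaded"         # candidate: migrate some VMs away

load = predict_load([0.85, 0.9, 0.7])  # recent CPU utilizations
state = classify(load)
```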
For imbalanced datasets, the focus of classification is to identify samples of the minority class, and the performance of current data mining algorithms on such datasets is not good enough. The synthetic minority over-sampling technique (SMOTE) is specifically designed for learning from imbalanced datasets, generating synthetic minority class examples by interpolating between nearby minority class examples. However, SMOTE suffers from an over-generalization problem, and density-based spatial clustering of applications with noise (DBSCAN) is not rigorous when dealing with samples near the borderline. We optimize the DBSCAN algorithm to make its clustering more reasonable for this case. This paper integrates the optimized DBSCAN and SMOTE, and proposes a density-based synthetic minority over-sampling technique (DSMOTE). First, the optimized DBSCAN divides the samples of the minority class into three groups: core samples, borderline samples, and noise samples; the noise samples of the minority class are then removed so that more effective samples are synthesized. To make full use of the information in core and borderline samples, different strategies are used to over-sample each. Experiments show that DSMOTE achieves better results than SMOTE and Borderline-SMOTE in terms of precision, recall, and F-value.
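The SMOTE interpolation step that DSMOTE builds on can be sketched as below (plain nearest-neighbor interpolation only; the optimized DBSCAN grouping is not reproduced):

```python
import random

def smote_sample(minority, k=2, seed=0):
    """Generate one synthetic minority example by interpolating
    between a random minority point and one of its k nearest
    minority neighbors (Euclidean distance)."""
    rng = random.Random(seed)
    x = rng.choice(minority)
    neighbors = sorted(
        (p for p in minority if p is not x),
        key=lambda p: sum((a - b) ** 2 for a, b in zip(p, x)))[:k]
    nb = rng.choice(neighbors)
    gap = rng.random()  # random position along the segment
    return tuple(a + gap * (b - a) for a, b in zip(x, nb))

# hypothetical 2-D minority class samples
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
synthetic = smote_sample(minority)
```

Because the synthetic point is a convex combination of two existing samples, it always lies on the segment between them, which is exactly what causes SMOTE's over-generalization when a neighbor is noise.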
For the accurate description of aerodynamic characteristics of aircraft, a wavelet neural network (WNN) aerodynamic modeling method from flight data is proposed, based on an improved particle swarm optimization (PSO) algorithm with an information sharing strategy and a velocity disturbance operator. In the improved PSO algorithm, the information sharing strategy is used to avoid premature convergence as much as possible, and the velocity disturbance operator is adopted to jump out of a premature-convergence position once the swarm falls into one. Simulations on lateral and longitudinal aerodynamic modeling for ATTAS (advanced technologies testing aircraft system) indicate that the proposed method achieves an order-of-magnitude accuracy improvement over SPSO-WNN and converges to a satisfactory precision in only 60–120 iterations, in contrast to SPSO-WNN with 6 premature convergences in 200 repeated experiments using Morlet and Mexican hat wavelet functions. Furthermore, the proposed method is proved feasible and effective for aerodynamic modeling from flight data.
With an appropriate geometry configuration, helicopter-borne rotating synthetic aperture radar (ROSAR) can break through the limitations of monostatic synthetic aperture radar (SAR) on forward-looking imaging. With this capability, ROSAR has extensive potential applications, such as self-navigation and self-landing, and it gains many advantages when combined with frequency modulated continuous wave (FMCW) technology. A novel geometric configuration and an imaging algorithm for helicopter-borne FMCW-ROSAR are proposed. Firstly, by applying the equivalent phase center principle, the separated transmitting and receiving antenna system is equalized to a system configuration with a single antenna for both transmitting and receiving signals. Based on this, the accurate two-dimensional spectrum is obtained and the Doppler frequency shift induced by the continuous motion of the platform during the long pulse duration is compensated. Next, the impacts of the velocity approximation error on the imaging algorithm are analyzed in detail, and the system parameter selection and resolution analysis are presented. A well-focused SAR image is then obtained using an improved Omega-K algorithm incorporating an accurate compensation method for the velocity approximation error. Finally, the correctness of the analysis and the effectiveness of the proposed algorithm are demonstrated through simulation results.
Clustering is an unsupervised learning problem: a procedure that partitions data objects into groups. Many algorithms cannot simultaneously overcome the problems of cluster morphology, overlapping, and a large number of clusters. Many scientific communities have approached clustering from the density perspective, one of the best approaches to the task. This study proposes a density-based spatial clustering of applications with noise (DBSCAN) algorithm based on high-density areas selected by automatic fuzzy-DBSCAN (AFD), which works by initializing two parameters. Using fuzzy and DBSCAN features, AFD models the selection of high-density areas and generates two parameters for merging and separating automatically. The two generated parameters provide sub-cluster rules in the Cartesian coordinate system for the dataset. The model simultaneously overcomes the clustering problems of morphology, overlapping, and the number of clusters in a dataset. In the experiments, all algorithms are run 30 times on eight datasets: three are overlapping real datasets, and the rest are morphologic and synthetic datasets. The AFD algorithm is demonstrated to outperform other recently developed clustering algorithms.
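A minimal DBSCAN, the density procedure whose two parameters AFD generates automatically, can be sketched as follows; here eps and min_pts are still set by hand, which is exactly the step AFD removes:

```python
def dbscan(points, eps=1.0, min_pts=3):
    """Label each point with a cluster id, or -1 for noise.
    Core points (>= min_pts neighbors within eps, counting
    themselves) seed clusters that expand through
    density-reachable neighbors."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbs = neighbors(i)
        if len(nbs) < min_pts:
            labels[i] = -1  # provisionally noise
            continue
        labels[i] = cid
        queue = list(nbs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid  # border point, reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:  # only core points expand the cluster
                queue.extend(jn)
        cid += 1
    return labels

# two dense blobs plus one isolated noise point
pts = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.5, 10), (10, 10.5), (50, 50)]
labels = dbscan(pts)
```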
To overcome the deficiencies of high computational complexity and low convergence speed in traditional neural networks, a novel bio-inspired machine learning algorithm named brain emotional learning (BEL) is introduced. BEL mimics the emotional learning mechanism in the brain, which has the superior features of fast learning and quick reacting. To further improve the performance of BEL in data analysis, a genetic algorithm (GA) is adopted to optimally tune the weights and biases of the amygdala and orbitofrontal cortex in the BEL neural network. The integrated algorithm, named GA-BEL, combines the fast learning of BEL with the global optimum solution of GA. GA-BEL has been tested on a real-world chaotic time series of a geomagnetic activity index for prediction, on eight University of California at Irvine (UCI) benchmark datasets, and on a functional magnetic resonance imaging (fMRI) dataset for classification. Comparisons of experimental results show that the proposed GA-BEL algorithm is more accurate than the original BEL in prediction and more effective when dealing with large-scale classification problems; it also outperforms most other traditional algorithms in terms of accuracy and execution speed in both prediction and classification applications.
As a combination of edge computing and artificial intelligence, edge intelligence has become a promising technique that provides its users with fast, precise, and customized services. In edge intelligence, when learning agents are deployed on the edge side, data aggregation from the end side to designated edge devices is an important research topic. Considering the varying importance of end devices, this paper studies the weighted data aggregation problem in a single-hop end-to-edge communication network. Firstly, to make sure all end devices with various weights are treated fairly in data aggregation, a distributed end-to-edge cooperative scheme is proposed. Then, to handle the massive contention on the wireless channel caused by end devices, a multi-armed bandit (MAB) algorithm is designed to help the end devices find their most appropriate update rates. Different from traditional data aggregation works, incorporating the MAB gives our algorithm higher efficiency in data aggregation. A theoretical analysis shows that the efficiency of our algorithm is asymptotically optimal, and comparative experiments with previous works demonstrate its strength.
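The MAB component can be sketched with the classic UCB1 rule for choosing among candidate update rates; the synthetic reward functions standing in for channel feedback are assumptions, not the paper's reward design:

```python
import math
import random

def ucb1(reward_fns, rounds=2000, seed=0):
    """UCB1: pull each arm once, then always pull the arm with the
    highest empirical mean plus exploration bonus. Returns pull
    counts and empirical mean rewards."""
    rng = random.Random(seed)
    k = len(reward_fns)
    counts = [0] * k
    means = [0.0] * k
    for t in range(rounds):
        if t < k:
            arm = t  # initial sweep: try every arm once
        else:
            arm = max(range(k), key=lambda a: means[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = reward_fns[arm](rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running mean
    return counts, means

# two hypothetical update rates: the second yields higher reward on average
arms = [lambda rng: rng.random() * 0.4,
        lambda rng: rng.random() * 0.9]
counts, means = ucb1(arms)
```

Over enough rounds the bandit concentrates its pulls on the better update rate while still occasionally probing the other, which is the exploration/exploitation balance the aggregation scheme relies on.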
A system model based on a joint-layer mechanism is formulated for optimal data scheduling over fixed point-to-point links in OFDMA ad-hoc wireless networks. A distributed scheduling algorithm (DSA) for system model optimization is proposed that combines subcarriers randomly chosen according to local channel conditions with link power control to limit the interference caused by subcarrier reuse among links. To improve global fairness, a global power control scheduling algorithm (GPCSA) based on the proposed DSA is presented; it dynamically allocates global power according to the difference between the average carrier-to-noise ratio of the selected local links and the system link protection ratio. Simulation results demonstrate that the proposed algorithms achieve better efficiency and fairness than other existing algorithms.
This paper proposes a robust method of parameter estimation and data classification for multiple-structural data based on the linear errors-in-variables (EIV) model. The traditional EIV model fitting problem is analyzed, and a robust growing algorithm is developed to extract the underlying linear structure of the observed data. Under the structural density assumption, the C-step technique borrowed from Rousseeuw's robust MCD estimator keeps the algorithm robust, and the mean-shift algorithm is adopted to ensure a good initialization. To eliminate the model ambiguities of multiple-structural data, statistical hypothesis tests are used to refine the data classification and improve the accuracy of the model parameter estimation. Experiments show the efficiency and robustness of the proposed algorithm.
Funding for the DKTP trajectory prediction work: the National Natural Science Foundation of China (Grants No. 12072090 and No. 12302056).
Funding for the Circle clustering work: the National Natural Science Foundation of China (60873265, 60903222) and the Program for Changjiang Scholars and Innovative Research Team in University of China (IRT0661).
Funding for the Hungarian data association work: the National Natural Science Foundation of China (60272024).
Funding for the three-passive-sensor data association work: the National Natural Science Foundation of China (60172033) and the Excellent Ph.D. Paper Author Foundation of China (200036, 200237).
Funding for the ant colony data routing work: the National High Technology Research and Development Program of China (2006AA701306) and the National Innovation Foundation of Enterprises (05C26212200378).
Funding for the multi-layer data correlation work: the National Natural Science Foundation of China (60672139, 60672140), the Excellent Ph.D. Paper Author Foundation of China (200237), and the Natural Science Foundation of Shandong (2005ZX01).
Funding for the TFAEA truth finder work: the National Natural Science Foundation of China (61472192) and the Scientific and Technological Support Project (Society) of Jiangsu Province (BE2016776).
Funding for the self-potential inversion work: the National Natural Science Foundation of China (41574123), the Fundamental Research Funds for the Central Universities, China (2015zzts250), and the National Basic Research Scientific Program of China (2013FY110800).
文摘Abstract: A system model is formulated as the maximization of a total utility function to achieve fair downlink data scheduling in multiuser orthogonal frequency division multiplexing (OFDM) wireless networks. A dynamic subcarrier allocation algorithm (DSAA) is proposed to optimize the system model. The subcarrier allocation decision is made by the proposed DSAA according to the maximum value of the total utility function with respect to the queue mean waiting time. Simulation results demonstrate that, compared with conventional algorithms, the proposed algorithm has better delay performance and can provide fairness under different loads by using different utility functions.
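A greedy sketch of utility-driven subcarrier allocation: each subcarrier is given to the user whose total utility gains most from it. The concave log utility over accumulated rate and the channel-gain table are illustrative assumptions; the paper's DSAA instead evaluates utility with respect to queue mean waiting time.

```python
import math

def allocate_subcarriers(gains, n_subcarriers):
    """gains[u][s]: achievable rate for user u on subcarrier s."""
    n_users = len(gains)
    rate = [0.0] * n_users
    assignment = [None] * n_subcarriers
    for s in range(n_subcarriers):
        # Marginal utility of adding subcarrier s to user u; the concave
        # log keeps allocations fair by rewarding under-served users more.
        def marginal(u):
            return math.log1p(rate[u] + gains[u][s]) - math.log1p(rate[u])
        u = max(range(n_users), key=marginal)
        assignment[s] = u
        rate[u] += gains[u][s]
    return assignment, rate

# Two users, three subcarriers: user 0 is strong on s0, user 1 on s1 and s2.
assign, rates = allocate_subcarriers([[4.0, 1.0, 1.0], [1.0, 3.0, 3.0]], 3)
```

Because the marginal utility shrinks as a user accumulates rate, the second and third subcarriers go to user 1 even though user 0 already holds the best channel, which is the fairness effect the abstract attributes to the utility function.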
基金Funding: supported by the National Natural Science Foundation of China (61202004, 61272084); the National Key Basic Research Program of China (973 Program) (2011CB302903); the Specialized Research Fund for the Doctoral Program of Higher Education (20093223120001, 20113223110003); the China Postdoctoral Science Foundation Funded Project (2011M500095, 2012T50514); the Natural Science Foundation of Jiangsu Province (BK2011754, BK2009426); the Jiangsu Postdoctoral Science Foundation Funded Project (1102103C); the Natural Science Fund of Higher Education of Jiangsu Province (12KJB520007); the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (yx002001)
文摘Abstract: How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced, with the data nodes as its leaf nodes, and the final winner is selected with the goal of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
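The winner-tree selection step can be sketched as a simple tournament: data nodes sit at the leaves, pairwise "matches" propagate the lower-cost node upward, and the root is the node chosen for the next task. The per-node energy scores are illustrative; the paper's L3SA additionally weighs its task comparison coefficient.

```python
def winner_tree_select(energies):
    """Return the index of the node with minimal energy cost via a
    tournament; each level halves the number of contenders."""
    contenders = list(range(len(energies)))
    while len(contenders) > 1:
        nxt = []
        for i in range(0, len(contenders), 2):
            pair = contenders[i:i + 2]          # an odd last node gets a bye
            nxt.append(min(pair, key=lambda n: energies[n]))
        contenders = nxt
    return contenders[0]

# Node 2 has the lowest projected energy cost, so it wins the tournament.
winner = winner_tree_select([5.0, 3.2, 1.1, 4.8, 2.6])
```

The appeal of the structure is that after one task is dispatched, only the matches along the winner's path to the root need replaying, so repeated selections cost O(log n) rather than a full scan.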
基金Funding: Projects (61572525, 61272148) supported by the National Natural Science Foundation of China; Project (20120162110061) supported by the PhD Programs Foundation of the Ministry of Education of China; Project (CX2014B066) supported by the Hunan Provincial Innovation Foundation for Postgraduates, China; Project (2014zzts044) supported by the Fundamental Research Funds for the Central Universities, China
文摘Abstract: Cloud data centers consume enormous amounts of power, leading to the problem of high energy consumption. To solve this problem, an energy-efficient virtual machine (VM) consolidation algorithm named PVDE (prediction-based VM deployment algorithm for energy efficiency) is presented. The proposed algorithm uses a linear weighted method to predict the load of a host and, based on the predicted host load, classifies the hosts in the data center into four classes for VM migration. We also propose four types of VM selection algorithms to determine potential VMs to be migrated. We performed extensive performance analysis of the proposed algorithms. Experimental results show that, in contrast to other energy-saving algorithms, the algorithm proposed in this work significantly reduces energy consumption and maintains low service level agreement (SLA) violations.
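The predict-then-classify step can be sketched as follows: host load is forecast as a linearly weighted sum of recent CPU utilization samples (recent samples weighted more heavily), and the host is then binned into one of four classes. The specific weights, thresholds, and class names are illustrative assumptions, not values from the paper.

```python
def predict_load(history, weights=(0.1, 0.2, 0.3, 0.4)):
    """history: the last len(weights) utilization samples, oldest first."""
    return sum(w * u for w, u in zip(weights, history))

def classify_host(load):
    # Hypothetical thresholds splitting hosts into four migration classes.
    if load > 0.85:
        return "overloaded"        # migrate VMs away
    if load > 0.60:
        return "normally loaded"   # leave as is
    if load > 0.25:
        return "lightly loaded"    # candidate migration target
    return "underloaded"           # drain, then switch to a low-power state

load = predict_load([0.70, 0.80, 0.90, 0.95])   # rising utilization trend
state = classify_host(load)
```

Predicting on a weighted window rather than the instantaneous reading smooths out utilization spikes, so hosts are not migrated back and forth on transient load changes.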
基金Funding: supported by the National Key Research and Development Program of China (2018YFB1003700); the Scientific and Technological Support Project (Society) of Jiangsu Province (BE2016776); the "333" Project of Jiangsu Province (BRA2017228, BRA2017401); the Talent Project in Six Fields of Jiangsu Province (2015-JNHB-012)
文摘Abstract: For imbalanced datasets, the focus of classification is to identify samples of the minority class, and the performance of current data mining algorithms is not good enough for processing such datasets. The synthetic minority over-sampling technique (SMOTE) is specifically designed for learning from imbalanced datasets, generating synthetic minority class examples by interpolating between nearby minority class examples. However, SMOTE encounters the overgeneralization problem. Density-based spatial clustering of applications with noise (DBSCAN) is not rigorous when dealing with samples near the borderline, so we optimize the DBSCAN algorithm to make clustering more reasonable. This paper integrates the optimized DBSCAN and SMOTE, and proposes a density-based synthetic minority over-sampling technique (DSMOTE). First, the optimized DBSCAN is used to divide the samples of the minority class into three groups: core samples, borderline samples, and noise samples; the noise samples of the minority class are then removed so that more effective samples can be synthesized. To make full use of the information in core samples and borderline samples, different strategies are used to over-sample them. Experiments show that DSMOTE achieves better results than SMOTE and Borderline-SMOTE in terms of precision, recall, and F-value.
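A minimal sketch of the over-sampling step shared by SMOTE and DSMOTE: after grouping the minority samples (with noise already discarded), synthetic points are interpolated between a sample and one of its nearest minority neighbours. The toy 2-D points, the k=2 neighbourhood, and uniform interpolation factor are illustrative assumptions.

```python
import random

def smote_like(samples, n_new, k=2, seed=0):
    """Generate n_new synthetic minority points by SMOTE-style interpolation."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(samples)
        # The k nearest minority neighbours of x (excluding x itself).
        neighbours = sorted((s for s in samples if s != x),
                            key=lambda s: sum((a - b) ** 2
                                              for a, b in zip(s, x)))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()                        # interpolation factor in [0, 1]
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic

core = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]    # minority core, noise removed
new_points = smote_like(core, n_new=4)
```

Because every synthetic point lies on a segment between two minority samples, removing noise samples first (as DSMOTE does) prevents interpolating toward outliers, which is the overgeneralization problem the abstract describes.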
文摘Abstract: For the accurate description of the aerodynamic characteristics of aircraft, a wavelet neural network (WNN) aerodynamic modeling method from flight data is proposed, based on an improved particle swarm optimization (PSO) algorithm with an information sharing strategy and a velocity disturbance operator. In the improved PSO algorithm, the information sharing strategy is used to avoid premature convergence as much as possible, and the velocity disturbance operator is adopted to jump out of a premature-convergence position once the swarm falls into one. Simulations on lateral and longitudinal aerodynamic modeling for ATTAS (advanced technologies testing aircraft system) indicate that the proposed method achieves an order-of-magnitude accuracy improvement over SPSO-WNN and converges to a satisfactory precision in only 60-120 iterations, in contrast to SPSO-WNN, which showed six premature convergences in 200 repeated experiments using Morlet and Mexican hat wavelet functions. Furthermore, it is proved that the proposed method is feasible and effective for aerodynamic modeling from flight data.
基金Funding: supported by the National Basic Research Program of China (2011CB707001) and the Fundamental Research Funds for the Central Universities (106112015CDJXY500001, CDJZR165505)
文摘Abstract: With an appropriate geometry configuration, helicopter-borne rotating synthetic aperture radar (ROSAR) can break through the limitations of monostatic synthetic aperture radar (SAR) on forward-looking imaging. With this capability, ROSAR has extensive potential applications, such as self-navigation and self-landing, and it has many advantages when combined with frequency modulated continuous wave (FMCW) technology. A novel geometric configuration and an imaging algorithm for helicopter-borne FMCW-ROSAR are proposed. Firstly, by applying the equivalent phase center principle, the separated transmitting and receiving antenna system is equalized to a configuration with a single antenna for both transmitting and receiving signals. Based on this, the accurate two-dimensional spectrum is obtained and the Doppler frequency shift induced by the continuous motion of the platform during the long pulse duration is compensated. Next, the impacts of the velocity approximation error on the imaging algorithm are analyzed in detail, and the system parameter selection and resolution analysis are presented. A well-focused SAR image is then obtained by using the improved Omega-K algorithm incorporating the accurate compensation method for the velocity approximation error. Finally, the correctness of the analysis and the effectiveness of the proposed algorithm are demonstrated through simulation results.
文摘Abstract: Clustering is an unsupervised learning problem: a procedure that partitions data objects into groups. Many algorithms cannot overcome the problems of morphology, overlapping, and a large number of clusters at the same time. Many scientific communities have approached clustering from the perspective of density, which is one of the best methods in clustering. This study proposes a density-based spatial clustering of applications with noise (DBSCAN) algorithm based on high-density areas selected by automatic fuzzy-DBSCAN (AFD), which works with the initialization of two parameters. AFD, using fuzzy and DBSCAN features, is modeled by the selection of high-density areas and automatically generates two parameters for merging and separating. The two generated parameters provide a set of sub-cluster rules in the Cartesian coordinate system for the dataset. The model overcomes the clustering problems of morphology, overlapping, and the number of clusters in a dataset simultaneously. In the experiments, all algorithms are run 30 times on eight datasets: three overlapping real datasets, and the rest morphologic and synthetic datasets. It is demonstrated that the AFD algorithm outperforms other recently developed clustering algorithms.
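For reference, a compact version of the classic DBSCAN routine that AFD builds on: points with at least `min_pts` neighbours within `eps` seed clusters, clusters grow through density-connected core points, and everything else is noise (label -1). The 1-D toy data and parameter values are illustrative; AFD's contribution is generating such parameters automatically.

```python
def dbscan(points, eps, min_pts):
    n = len(points)
    dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    # Precompute eps-neighbourhoods (each point neighbours itself).
    neigh = [[j for j in range(n) if dist2(points[i], points[j]) <= eps * eps]
             for i in range(n)]
    labels = [None] * n
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neigh[i]) < min_pts:
            labels[i] = -1                 # noise (may be reclaimed as border)
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(neigh[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border point joins the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neigh[j]) >= min_pts:   # core point: keep expanding
                queue.extend(neigh[j])
    return labels

# Two dense 1-D groups plus one outlier.
pts = [(0.0,), (0.1,), (0.2,), (5.0,), (5.1,), (5.2,), (9.0,)]
labels = dbscan(pts, eps=0.3, min_pts=2)
```

The sensitivity of the result to `eps` and `min_pts` is exactly the pain point the abstract targets: a poor choice merges the two groups or dissolves them into noise, which motivates selecting high-density areas and deriving the parameters automatically.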
基金Funding: Project (61403422) supported by the National Natural Science Foundation of China; Project (17C1084) supported by the Hunan Education Department Science Foundation, China; Project (17ZD02) supported by Hunan University of Arts and Science, China
文摘Abstract: To overcome the deficiencies of high computational complexity and low convergence speed in traditional neural networks, a novel bio-inspired machine learning algorithm named brain emotional learning (BEL) is introduced. BEL mimics the emotional learning mechanism in the brain, which has the superior features of fast learning and quick reacting. To further improve the performance of BEL in data analysis, a genetic algorithm (GA) is adopted to optimally tune the weights and biases of the amygdala and orbitofrontal cortex in the BEL neural network. The integrated algorithm, named GA-BEL, combines the fast learning of BEL with the global optimum solution of GA. GA-BEL has been tested on a real-world chaotic time series of a geomagnetic activity index for prediction, eight benchmark datasets from the University of California at Irvine (UCI), and a functional magnetic resonance imaging (fMRI) dataset for classification. Comparisons of experimental results show that the proposed GA-BEL algorithm is more accurate than the original BEL in prediction and more effective when dealing with large-scale classification problems; furthermore, it outperforms most other traditional algorithms in terms of accuracy and execution speed in both prediction and classification applications.
基金Funding: supported by the National Natural Science Foundation of China (NSFC) (62102232, 62122042, 61971269) and the Natural Science Foundation of Shandong Province (ZR2021QF064)
文摘Abstract: As a combination of edge computing and artificial intelligence, edge intelligence has become a promising technique that provides its users with fast, precise, and customized services. In edge intelligence, when learning agents are deployed on the edge side, data aggregation from the end side to designated edge devices is an important research topic. Considering the varying importance of end devices, this paper studies the weighted data aggregation problem in a single-hop end-to-edge communication network. Firstly, to make sure all end devices with various weights are treated fairly in data aggregation, a distributed end-to-edge cooperative scheme is proposed. Then, to handle the massive contention on the wireless channel caused by end devices, a multi-armed bandit (MAB) algorithm is designed to help the end devices find their most appropriate update rates. Different from traditional data aggregation works, combining the MAB gives our algorithm higher efficiency in data aggregation. With a theoretical analysis, we show that the efficiency of our algorithm is asymptotically optimal. Comparative experiments with previous works are also conducted to show the strength of our algorithm.
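The bandit view of update-rate selection can be sketched as follows: each candidate update rate is an arm, the reward is whether the device's transmission got through the contended channel, and UCB1 balances trying rates against exploiting the best one. The three arms, their success probabilities, and the use of UCB1 specifically are illustrative assumptions, not the paper's exact MAB design.

```python
import math
import random

def ucb1(success_prob, rounds=5000, seed=0):
    """Play a Bernoulli bandit with UCB1; return how often each arm was played."""
    rng = random.Random(seed)
    k = len(success_prob)
    counts, rewards = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        if t <= k:
            arm = t - 1                        # play each arm once first
        else:
            # Empirical mean plus an exploration bonus that shrinks
            # as an arm accumulates plays.
            arm = max(range(k),
                      key=lambda a: rewards[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        rewards[arm] += 1.0 if rng.random() < success_prob[arm] else 0.0
    return counts

# Arm 1 (a moderate update rate) succeeds most often and should dominate.
plays = ucb1([0.3, 0.8, 0.5])
```

Over time the device concentrates its transmissions on the rate that actually gets through, without ever needing a model of the other devices' behavior, which is what makes the bandit framing attractive for distributed channel contention.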
文摘Abstract: A system model based on a joint-layer mechanism is formulated for optimal data scheduling over fixed point-to-point links in OFDMA ad-hoc wireless networks. A distributed scheduling algorithm (DSA) for system model optimization is proposed that combines subcarrier selection, chosen randomly according to the channel conditions of local subcarriers, with link power control to limit the interference caused by subcarrier reuse among links. To improve the global fairness of the algorithms, a global power control scheduling algorithm (GPCSA) based on the proposed DSA is presented; it dynamically allocates global power according to the difference between the average carrier-to-noise ratio of the selected local links and the system link protection ratio. Simulation results demonstrate that the proposed algorithms achieve better efficiency and fairness than other existing algorithms.
基金Funding: supported by the National High Technology Research and Development Program of China (863 Program) (2007AA04Z227)
文摘Abstract: This paper proposes a robust method of parameter estimation and data classification for multiple-structural data based on the linear errors-in-variables (EIV) model. The traditional EIV model fitting problem is analyzed, and a robust growing algorithm is developed to extract the underlying linear structure of the observed data. Under the structural density assumption, the C-step technique borrowed from Rousseeuw's robust MCD estimator is used to keep the algorithm robust, and the mean-shift algorithm is adopted to ensure a good initialization. To eliminate the model ambiguities of multiple-structural data, statistical hypothesis tests are used to refine the data classification and improve the accuracy of the model parameter estimation. Experiments show the efficiency and robustness of the proposed algorithm.