Abstract: Object detection in remote sensing images is of great significance in fields such as military reconnaissance and smart agriculture, and small-object detection in particular has received sustained attention. However, small objects in remote sensing images suffer from insufficient feature information and are difficult to detect, which has become the main obstacle to the development of remote sensing detection applications. To address this, the YOLO-HF (you only look once-hybrid feature) algorithm is proposed. Building on the standard YOLOv7 network, the algorithm introduces a hybrid attention mechanism combining channel attention and self-attention to extract deep features of the target, and fuses shallow and deep features to enrich local features. To further strengthen attention to global information, a global attention mechanism is added for small-scale objects after feature extraction, improving the expressiveness of global features. To avoid the sensitivity of traditional loss functions to positional deviations of small objects, which degrades detection performance, a new similarity metric is adopted and embedded into the computation of the bounding-box loss, accelerating loss convergence and improving small-object detection accuracy. Experimental results show that, compared with the standard YOLOv7 algorithm, the proposed algorithm performs better on both the RSOD and NWPU VHR-10 datasets; in particular, mean average precision improves by 2.90% on RSOD and by 3.61% on NWPU VHR-10.
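To make the hybrid-attention idea above concrete, here is a minimal PyTorch sketch of a block that combines channel attention with spatial self-attention. It is an illustration under assumptions only: the class name `HybridAttention`, the reduction ratio, and the single-head attention are ours, not the YOLO-HF implementation, and the block's placement in the YOLOv7 network is not shown.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Illustrative hybrid of channel attention and spatial self-attention.

    A sketch of the general idea only; the actual YOLO-HF module
    (layer sizes, placement in the YOLOv7 backbone) is not specified here.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, reweight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Single-head self-attention over spatial positions.
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=1,
                                          batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x = x * self.channel_gate(x)              # channel reweighting
        seq = x.flatten(2).transpose(1, 2)        # (B, H*W, C)
        attn_out, _ = self.attn(seq, seq, seq)    # spatial self-attention
        return x + attn_out.transpose(1, 2).reshape(b, c, h, w)

# Example: apply to a feature map from the backbone.
feat = torch.randn(2, 64, 20, 20)
print(HybridAttention(64)(feat).shape)  # torch.Size([2, 64, 20, 20])
```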
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01B187).
Abstract: Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods have been proposed to reduce the high communication cost of transmitting model parameters: they share only class representatives (prototypes) between heterogeneous clients while preserving privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism based on contrastive learning is proposed to maintain personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
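As a rough sketch of the prototype exchange that such methods build on, the following Python code computes per-class prototypes on each client and aggregates them on the server with a count-weighted average. This is the generic baseline step only; FedFTP's FSTP fair sampling and HSC hyperbolic-space constraints would replace the plain average, and all function names here are our own assumptions.

```python
import numpy as np

def local_prototypes(features: np.ndarray, labels: np.ndarray) -> dict:
    """Per-class mean embedding computed on one client (a class 'prototype')."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def aggregate_prototypes(client_protos: list, client_counts: list) -> dict:
    """Server-side aggregation: weight each client's prototype by its class count.

    A plain weighted average; FedFTP's fair sampling and hyperbolic-space
    constraints would replace this step.
    """
    global_protos = {}
    classes = {c for protos in client_protos for c in protos}
    for c in classes:
        num = sum(counts[c] * protos[c]
                  for protos, counts in zip(client_protos, client_counts)
                  if c in protos)
        den = sum(counts[c] for counts in client_counts if c in counts)
        global_protos[c] = num / den
    return global_protos

# Two clients with 8-dim embeddings for classes 0 and 1.
rng = np.random.default_rng(0)
f1, y1 = rng.normal(size=(20, 8)), rng.integers(0, 2, 20)
f2, y2 = rng.normal(size=(30, 8)), rng.integers(0, 2, 30)
p1, p2 = local_prototypes(f1, y1), local_prototypes(f2, y2)
counts = [{c: int((y == c).sum()) for c in np.unique(y)} for y in (y1, y2)]
print({c: v.shape for c, v in aggregate_prototypes([p1, p2], counts).items()})
```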
Funding: Supported by the Australian Government via Grant No. AUSMURIB000001, associated with ONR MURI Grant No. N00014-19-1-2571.
Abstract: This paper considers a time-constrained data-collection problem in which an Unmanned Aerial Vehicle (UAV), a typical Unmanned Aerial System (UAS), gathers data from a network of ground sensors located on uneven terrain. The ground sensors harvest renewable energy and are equipped with batteries and data buffers, and the sensor model takes both buffer and battery limitations into account. An asymptotically globally optimal method for joint UAV 3D trajectory optimization and data-transmission scheduling is developed. The method maximizes the amount of data transmitted to the UAV without losses or excessive delays while minimizing the UAV's propulsion energy. The algorithm is based on dynamic programming, and computer simulations demonstrate its effectiveness.
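To illustrate the dynamic-programming structure behind such a method, here is a deliberately tiny sketch that plans over a discretized 1D corridor of waypoints, trading collected data against propulsion cost. It is not the paper's formulation: the real state space includes 3D positions, sensor buffers, and battery levels, all of which are collapsed here into toy constants of our own choosing.

```python
# Toy dynamic program over a discretized 1D corridor of waypoints.
# value[p] = best (data collected - propulsion energy) achievable when
# the UAV is at waypoint p at the current step.
POSITIONS = range(5)          # discretized waypoints
STEPS = 6                     # time horizon
DATA = {1: 3.0, 3: 5.0}       # data collectable when hovering at a waypoint
MOVE_COST = 1.0               # propulsion energy per unit of distance moved

def solve() -> float:
    value = {p: 0.0 for p in POSITIONS}          # step 0: start anywhere
    for _ in range(STEPS):
        new = {}
        for p in POSITIONS:
            best = float("-inf")
            for q in POSITIONS:                  # previous waypoint
                gain = DATA.get(p, 0.0) - MOVE_COST * abs(p - q)
                best = max(best, value[q] + gain)
            new[p] = best
        value = new
    return max(value.values())

print(f"best achievable objective: {solve():.1f}")
```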
Funding: Supported by the National Natural Science Foundation of China (61472192) and the Scientific and Technological Support Project (Society) of Jiangsu Province (BE2016776).
Abstract: The Internet is now a large-scale platform with big data. Finding truth in a huge dataset has attracted extensive attention, since it can maintain the quality of data collected from users and provide users with accurate and efficient data. However, current truth-finder algorithms are unsatisfactory because of their low accuracy and high complexity. This paper proposes a truth-finder algorithm based on entity attributes (TFAEA). Building on the iterative computation of source reliability and fact accuracy, TFAEA considers the degree of interaction among facts and the degree of dependence among sources to simplify typical truth-finder algorithms. To improve accuracy, TFAEA combines one-way text similarity with factual conflict to calculate the degree of mutual support among facts. Furthermore, TFAEA utilizes the symmetric saturation of data sources to calculate the degree of dependence among sources. Experimental results show that TFAEA is both more stable and more accurate than typical truth-finder algorithms.
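The iterative core that TFAEA builds on, alternating between source reliability and fact confidence, can be sketched in a few lines of Python. This is the generic truth-discovery loop only; TFAEA's mutual-support term (text similarity plus factual conflict) and symmetric-saturation dependence term are not reproduced, and the update rule here is a simple normalized sum of our own choosing.

```python
# Minimal truth-discovery iteration: source reliability and fact confidence
# reinforce each other until they stabilize.
claims = {                      # source -> claimed value for one entity attribute
    "s1": "Paris", "s2": "Paris", "s3": "Lyon",
}

trust = {s: 0.8 for s in claims}            # initial source reliability
for _ in range(20):
    # Fact confidence: sum of the trust of the sources asserting the fact.
    conf = {}
    for s, v in claims.items():
        conf[v] = conf.get(v, 0.0) + trust[s]
    total = sum(conf.values())
    conf = {v: c / total for v, c in conf.items()}   # normalize
    # Source trust: confidence of the fact the source asserts.
    trust = {s: conf[claims[s]] for s in claims}

best = max(conf, key=conf.get)
print(f"most credible value: {best} (confidence {conf[best]:.2f})")
```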
Funding: Supported by the National Key Research and Development Program of China (2018YFB1003700), the Scientific and Technological Support Project (Society) of Jiangsu Province (BE2016776), the "333" Project of Jiangsu Province (BRA2017228, BRA2017401), and the Talent Project in Six Fields of Jiangsu Province (2015-JNHB-012).
Abstract: For imbalanced datasets, the focus of classification is identifying samples of the minority class, and the performance of current data mining algorithms on such datasets is not good enough. The synthetic minority over-sampling technique (SMOTE) is specifically designed for learning from imbalanced datasets: it generates synthetic minority-class examples by interpolating between nearby minority-class examples. However, SMOTE suffers from an over-generalization problem. Density-based spatial clustering of applications with noise (DBSCAN) is not rigorous when dealing with samples near the borderline, so we optimize the DBSCAN algorithm to make the clustering more reasonable. This paper integrates the optimized DBSCAN with SMOTE and proposes a density-based synthetic minority over-sampling technique (DSMOTE). First, the optimized DBSCAN divides the samples of the minority class into three groups (core samples, borderline samples, and noise samples), and the noise samples are removed so that more effective samples can be synthesized. To make full use of the information in core and borderline samples, different strategies are used to over-sample each group. Experiments show that DSMOTE achieves better results than SMOTE and Borderline-SMOTE in terms of precision, recall, and F-value.
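A minimal sketch of the DSMOTE pipeline using scikit-learn's standard DBSCAN is shown below. Note the assumptions: the paper's optimized DBSCAN is replaced by the stock implementation, and its separate over-sampling strategies for core and borderline samples are collapsed into one SMOTE-style interpolation over all kept samples.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def dsmote_like(X_min, eps=0.5, min_samples=5, n_new=50, seed=0):
    """Sketch of the DSMOTE idea: partition minority samples with DBSCAN,
    drop noise, and interpolate new samples between the kept points."""
    rng = np.random.default_rng(seed)
    db = DBSCAN(eps=eps, min_samples=min_samples).fit(X_min)
    is_core = np.zeros(len(X_min), dtype=bool)
    is_core[db.core_sample_indices_] = True
    is_noise = db.labels_ == -1
    is_border = ~is_core & ~is_noise              # clustered but not core
    kept = X_min[is_core | is_border]             # discard noise only
    # SMOTE-style interpolation between each sample and a random neighbor.
    nn = NearestNeighbors(n_neighbors=min(6, len(kept))).fit(kept)
    _, idx = nn.kneighbors(kept)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(kept))
        j = idx[i][rng.integers(1, idx.shape[1])]  # skip self at column 0
        lam = rng.random()
        synth.append(kept[i] + lam * (kept[j] - kept[i]))
    return np.vstack(synth)

# Toy minority class: 40 points in 2D, tight enough for eps=0.5.
X_min = np.random.default_rng(1).normal(scale=0.5, size=(40, 2))
print(dsmote_like(X_min).shape)  # (50, 2)
```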
Abstract: On November 14, 2016 (NZDT), a magnitude 7.8 earthquake struck the northeast coast of the South Island of New Zealand, and a tsunami swept onto the coastlines with wave heights of 2.5 m at Kaikoura. This earthquake is the largest event in the region since a magnitude 7.5 earthquake that occurred 100 km to the northeast in October 1848. The days immediately following a natural disaster are particularly challenging for authorities and aid organisations that need to make decisions about the deployment and distribution of resources. Rapid Damage Mapping (RDM) is a tool developed by Tonkin+Taylor International Limited (T+TI) whereby integrated disaster-mapping information is assembled within the first 24 to 72 hours of an event. The Committee on Data of the International Council for Science (CODATA) Task Group on Linked Open Data for Global Disaster Risk Research (LODGD) arranged access through the ChinaGEOSS portal to TripleSat and JL-1 satellite images immediately following the devastating Kaikoura earthquake, and an internet-based Project Orbit portal was set up for use by all response and recovery organisations in New Zealand. While the recent RDM response work was largely reactive in nature, the dataset compiled during this work provides a valuable resource and presents opportunities to apply a more proactive and refined approach to similar RDM work in the future. The recent RDM work provides valuable insight into key vulnerabilities that emerged after the earthquake and helped to identify more than 10,000 landslips in the area.