Abstract: Numerous works show that existing neighbor-averaging graph neural networks (GNNs) cannot efficiently capture structural features, and many works show that injecting structure, distance, position, or spatial features can significantly improve GNN performance. However, injecting high-level structure and distance information into GNNs is an intuitive but untouched idea. This work sheds light on this issue and proposes a scheme to enhance graph attention networks (GATs) by encoding distance and hop-wise structure statistics. First, hop-wise structure and distributional distance information are extracted from several hop-wise ego-nets of each target node. Second, the derived structure information, distance information, and intrinsic features are encoded into the same vector space and added together to obtain initial embedding vectors. Third, the embedding vectors are fed into GATs, such as GAT and the adaptive graph diffusion network (AGDN), to obtain soft labels. Fourth, the soft labels are fed into correct and smooth (C&S) for label propagation to obtain the final predictions. Experiments show that the distance and hop-wise structure encoding enhanced graph attention networks (DHSEGATs) achieve competitive results.
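The four-step pipeline above can be illustrated with a minimal sketch in Python using NetworkX and PyTorch. The helper names (hopwise_struct, hopwise_dist, DHSEncoder), the particular ego-net statistics, and the two-hop setting are illustrative assumptions rather than the paper's exact method; the downstream GAT/AGDN and C&S stages are omitted.

```python
# Sketch of the DHSEGAT-style input encoding; helper names and statistics
# are hypothetical stand-ins, not the paper's exact definitions.
import networkx as nx
import torch
import torch.nn as nn

def hopwise_struct(g: nx.Graph, node, num_hops: int = 2) -> torch.Tensor:
    """Structure statistics: node and edge counts of each k-hop ego-net."""
    out = []
    for k in range(1, num_hops + 1):
        ego = nx.ego_graph(g, node, radius=k)
        out += [ego.number_of_nodes(), ego.number_of_edges()]
    return torch.tensor(out, dtype=torch.float)

def hopwise_dist(g: nx.Graph, node, num_hops: int = 2) -> torch.Tensor:
    """Distance statistics: mean and max shortest-path distance within num_hops."""
    dists = nx.single_source_shortest_path_length(g, node, cutoff=num_hops)
    vals = torch.tensor(list(dists.values()), dtype=torch.float)
    return torch.stack([vals.mean(), vals.max()])

class DHSEncoder(nn.Module):
    """Project structure stats, distance stats, and intrinsic features into
    the same vector space and sum them into the initial embedding."""
    def __init__(self, feat_dim, struct_dim, dist_dim, hidden):
        super().__init__()
        self.f = nn.Linear(feat_dim, hidden)
        self.s = nn.Linear(struct_dim, hidden)
        self.d = nn.Linear(dist_dim, hidden)

    def forward(self, x, s, d):
        return self.f(x) + self.s(s) + self.d(d)

g = nx.karate_club_graph()
S = torch.stack([hopwise_struct(g, v) for v in g.nodes()])
D = torch.stack([hopwise_dist(g, v) for v in g.nodes()])
X = torch.eye(g.number_of_nodes())       # stand-in intrinsic features
enc = DHSEncoder(X.shape[1], S.shape[1], D.shape[1], hidden=64)
h0 = enc(X, S, D)                        # initial embeddings for GAT/AGDN
```

Summing the three projections, rather than concatenating them, keeps the input dimension of the downstream attention network unchanged regardless of how many statistics are extracted.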
Funding: Supported by the National Natural Science Foundation of China (61573285) and the Doctoral Foundation of China (2013ZC53037).
Abstract: Ordering-based search methods have advantages over graph-based search methods for structure learning of Bayesian networks in terms of efficiency. To further increase the accuracy of ordering-based search methods, we first propose enlarging the search space, which facilitates escaping from local optima. We present search operators with majorizations, which are easy to implement. Experiments show that the proposed algorithm obtains significantly more accurate results. To address the loss of efficiency caused by the larger search space, we then propose adding path priors as constraints in the swap process. We analyze the coefficient that may influence the performance of the proposed algorithm; the experiments show that the constraints greatly enhance efficiency while having little effect on accuracy. The final experiments show that, compared with other competitive methods, the proposed algorithm finds better solutions while maintaining high efficiency on both synthetic and real data sets.
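A minimal sketch of the kind of ordering-based search the abstract describes, under stated assumptions: score_fn stands in for a real decomposable score such as BIC or BDeu, the swap operator is a plain adjacent swap rather than the paper's majorization-based operators, and allowed_swap is a hypothetical hook marking where path-prior constraints would prune candidate swaps.

```python
# Sketch of ordering-based structure search with a swap operator.
# score_fn(child, parents) must be supplied by a real decomposable score.
import itertools
import random

def best_parents_score(child, candidates, score_fn, max_parents=2):
    """Best local score for `child` over parent subsets drawn from the
    nodes that precede it in the ordering."""
    best = score_fn(child, ())
    for k in range(1, max_parents + 1):
        for ps in itertools.combinations(candidates, k):
            best = max(best, score_fn(child, ps))
    return best

def ordering_score(order, score_fn):
    """Decomposable score of an ordering: sum of best local scores."""
    return sum(best_parents_score(v, order[:i], score_fn)
               for i, v in enumerate(order))

def swap_search(nodes, score_fn, iters=200, allowed_swap=lambda a, b: True):
    """Hill climbing over orderings via adjacent swaps; `allowed_swap` is
    where path-prior constraints would rule out candidate swaps."""
    order = list(nodes)
    random.shuffle(order)
    best = ordering_score(order, score_fn)
    for _ in range(iters):
        i = random.randrange(len(order) - 1)
        a, b = order[i], order[i + 1]
        if not allowed_swap(a, b):
            continue  # constraint: swap forbidden by path priors
        order[i], order[i + 1] = b, a
        s = ordering_score(order, score_fn)
        if s > best:
            best = s                       # keep the improving swap
        else:
            order[i], order[i + 1] = a, b  # revert
    return order, best
```

Because the score decomposes over the ordering, an adjacent swap changes the candidate parent sets of only the two swapped nodes, so a real implementation would recompute just those two local scores instead of re-scoring the whole ordering as this sketch does.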