To reduce the interference among small cells of Ultra-Dense Networks (UDN), an improved Clustering-Assisted Resource Allocation (CARA) scheme is proposed in this paper. The proposed scheme is divided into three steps. First, an Interference-Limited Clustering Algorithm (ILCA), based on an interference graph corresponding to the interference relationships between Femtocell Base Stations (FBSs), is proposed to group FBSs into disjoint clusters, in which a pre-threshold is set to constrain the sum of interference in each cluster, and a Cluster Head (CH) is selected for each cluster. Then, each CH performs a two-stage sub-channel allocation within its associated cluster, where the first stage assigns one sub-channel to each user of the cluster and the second stage assigns a second sub-channel to some users. Finally, a power allocation method is designed to maximize throughput for a given clustering and sub-channel configuration. Simulation results indicate that the proposed scheme distributes FBSs into the clusters more evenly, and significantly improves the system throughput compared with existing schemes in the same scenario.
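As a minimal sketch of the two-stage sub-channel allocation idea, the following greedy procedure assigns each user one sub-channel, then hands out leftovers as second sub-channels. The abstract does not specify the selection metric or ordering, so the `gain` matrix and the greedy choices below are assumptions for illustration only:

```python
def two_stage_allocation(gain, n_channels):
    # gain[u][c]: hypothetical channel gain of user u on sub-channel c.
    n_users = len(gain)
    assignment = {u: [] for u in range(n_users)}
    free = set(range(n_channels))
    # Stage 1: every user claims its best still-free sub-channel,
    # strongest users (by peak gain) choosing first.
    for u in sorted(range(n_users), key=lambda u: -max(gain[u])):
        best = max(free, key=lambda c: gain[u][c])
        assignment[u].append(best)
        free.remove(best)
    # Stage 2: leftover sub-channels go, one each, to the eligible user
    # (fewer than two channels so far) with the highest gain on them.
    for c in sorted(free, key=lambda c: -max(gain[u][c] for u in range(n_users))):
        eligible = [u for u in range(n_users) if len(assignment[u]) < 2]
        if not eligible:
            break
        u = max(eligible, key=lambda u: gain[u][c])
        assignment[u].append(c)
    return assignment
```

With two users and four sub-channels, stage 1 guarantees everyone a channel before stage 2 distributes the remaining two.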
In order to meet the exponentially increasing demand on mobile data traffic, self-backhaul ultra-dense networks (UDNs) combined with millimeter wave (mmWave) communications are expected to provide high spatial multiplexing gain and wide bandwidths for multi-gigabit peak data rates. In self-backhaul UDNs, two key problems for improving the overall throughput are how to make the radio access rates of small cells match their backhaul rates through user association, and how to dynamically allocate bandwidth between the access links and backhaul links to balance two-hop link resources. Based on this, a joint scheme of user association and resource allocation is proposed for self-backhaul ultra-dense networks. Because of the combinatorial and nonconvex features of the original optimization problem, it is divided into two subproblems. Firstly, to make the radio access rates of small base stations match their backhaul rates and maximize the sum access rate per Hz of all small cells, a proportional constraint is introduced, and an immune optimization algorithm (IOA) is adopted to optimize the association indicator variables and the boresight angles between users and base stations. Then, the optimal backhaul and access bandwidths are calculated by differentiating the general expression of the overall throughput. Simulation results indicate that the proposed scheme increases the overall throughput significantly compared to the traditional minimum-distance based association scheme.
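To illustrate why the access/backhaul bandwidth split admits a closed form by differentiation, here is a deliberately simplified toy model (not the paper's actual throughput expression): a fraction `w` of the band carries the access link at spectral efficiency `r_a`, the rest carries the backhaul at `r_b`, and the delivered rate is the bottleneck of the two hops:

```python
def two_hop_rate(w, r_a, r_b):
    # Toy model: the two-hop rate is limited by the slower hop.
    return min(w * r_a, (1 - w) * r_b)

def best_split(r_a, r_b):
    # min(w*r_a, (1-w)*r_b) is maximized where both hops carry
    # equal rate:  w*r_a = (1-w)*r_b  =>  w* = r_b / (r_a + r_b)
    return r_b / (r_a + r_b)
```

The balance condition (equal access and backhaul rates) is exactly the intuition behind matching access rates to backhaul rates in the abstract.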
In ultra-dense networks (UDN), the local precoding scheme for time-division duplex coordinated multi-point transmission (TDD-CoMP) can achieve good performance with no feedback by exploiting reciprocity between uplink and downlink. However, if the channel is time-varying, the channel difference causes codeword mismatch between transmitter and receiver, which leads to performance degradation. In this paper, a linear interpolation method is proposed for the TDD-CoMP system to estimate the uplink channel at the receiver, which reduces the channel difference caused by time delay and decreases the probability of codeword mismatch between the two sides. Moreover, to mitigate severe inter-cell interference and increase the coverage and throughput of cell-edge users in UDN, a two-codebook scheme is used to strengthen cooperation between base stations (BSs), which can outperform the global precoding scheme with less overhead. Simulations show that the proposed scheme can significantly improve the link performance compared to the global precoding scheme.
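The linear interpolation step itself is straightforward: given two complex channel estimates taken at pilot instants `t0` and `t1`, the channel at an intermediate time `t` is approximated element-wise. This sketch only shows the interpolation arithmetic, not the paper's full estimation pipeline:

```python
def interp_channel(h0, t0, h1, t1, t):
    # h0, h1: lists of complex per-antenna (or per-subcarrier) channel
    # estimates at times t0 and t1; returns the linear interpolation at t.
    a = (t - t0) / (t1 - t0)
    return [(1 - a) * x + a * y for x, y in zip(h0, h1)]
```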
In future 5G networks, a key scenario is the dense user distribution over some area, such as office, urban apartments, shopping mall, stadium, etc., where the requirement for user-experienced rate at cell-edge can be
Blockage and imperfect beam alignment are two principal difficulties of directional transmissions in high-frequency bands. In this paper, the coverage performance of downlink directional transmissions in ultra-dense networks is analyzed through stochastic geometry, with consideration of beam alignment error and link blockage. Numerical experiments demonstrate that a narrower beam leads to higher coverage probability with perfect beam alignment, but this is not the case with imperfect beam alignment. Therefore, the optimal beamwidth that maximizes the coverage probability is characterized, and a closed-form approximation of the optimal beamwidth under imperfect beam alignment is derived accordingly. Furthermore, the optimal beamwidth is a monotonically increasing function of the standard deviation of the beam alignment error, and a monotonically decreasing function of the beamwidth of the correspondent communication end, indicating that the beamwidths of the communication pair ought to be jointly designed.
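The beamwidth trade-off can be reproduced with a toy model (this is not the paper's coverage expression; the alignment term, the gain penalty `exp(-c*theta)` and all parameters are assumptions): with a zero-mean Gaussian pointing error of standard deviation `sigma`, a wider beam is more likely to stay aligned, but dilutes the beamforming gain, so a coverage proxy has an interior maximum:

```python
import math

def toy_coverage(theta, sigma, c=1.0):
    # P(|error| < theta/2) for a Gaussian pointing error of std sigma,
    # times a toy penalty exp(-c*theta) standing in for the diluted gain.
    aligned = math.erf(theta / (2 * math.sqrt(2) * sigma))
    return aligned * math.exp(-c * theta)

def best_beamwidth(sigma, grid=None):
    # Grid search for the beamwidth maximizing the toy coverage proxy.
    grid = grid or [i / 1000 for i in range(1, 3000)]
    return max(grid, key=lambda t: toy_coverage(t, sigma))
```

Even in this crude model, the optimal beamwidth grows with the alignment-error standard deviation, matching the monotonicity reported in the abstract.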
Cell discontinuous transmission (Cell DTx) is a key technology to mitigate inter-cell interference (ICI) in ultra-dense networks (UDNs). The aim of this work is to understand the impact of Cell DTx on the physical-layer sum rates of small base stations (SBSs) and on link-layer quality-of-service (QoS) performance in multiuser UDNs. In this work, we develop a cross-layer framework for capacity analysis in multiuser UDNs with Cell DTx. In particular, we first extend the traditional one-dimensional effective capacity model to a new multidimensional effective capacity model to derive the sum rate and the effective capacity. Moreover, we propose a new iterative bisection search algorithm that is capable of approximating QoS performance. The convergence of this new algorithm to a unique QoS exponent vector is then proved. Finally, we apply this framework to the round-robin and max-C/I scheduling policies. Simulation results show that our framework is accurate in approximating 1) the queue length distribution, 2) the delay distribution and 3) the sum rates under the above two scheduling policies, and further show that with Cell DTx, systems have approximately 30% higher sum rate and 35% smaller average delay than in full-buffer scenarios.
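To make the bisection-for-QoS-exponent idea concrete, here is a one-dimensional sketch (the paper solves for a vector of exponents; the sample-average estimator and the search bracket below are assumptions). The scalar effective capacity EC(θ) = -(1/θ) ln E[e^{-θR}] is decreasing in θ, so the exponent at which EC(θ) equals the arrival rate μ can be found by bisection:

```python
import math

def effective_capacity(theta, rates):
    # EC(theta) = -(1/theta) * ln E[exp(-theta * R)],
    # with the expectation estimated from service-rate samples.
    m = sum(math.exp(-theta * r) for r in rates) / len(rates)
    return -math.log(m) / theta

def qos_exponent(rates, mu, lo=1e-6, hi=50.0, iters=80):
    # EC falls from mean(rates) (theta -> 0) toward min(rates)
    # (theta -> inf); bisect for EC(theta) == mu in between.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if effective_capacity(mid, rates) > mu:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```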
In this paper, a distributed chunk-based optimization algorithm is proposed for resource allocation in broadband ultra-dense small cell networks. Based on the proposed algorithm, the power and subcarrier allocation problems are jointly optimized. In order to make the resource allocation suitable for large-scale networks, the optimization problem is first decomposed using an effective decomposition algorithm named the optimal condition decomposition (OCD) algorithm. Furthermore, aiming at reducing implementation complexity, the subcarriers are divided into chunks and are allocated chunk by chunk. The simulation results show that the proposed algorithm achieves superior performance over the uniform power allocation scheme and the Lagrange relaxation method, and that it strikes a balance between the complexity and performance of multi-carrier ultra-dense networks.
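The complexity saving from chunking can be sketched in a few lines: instead of deciding each subcarrier individually, whole chunks are granted to the user with the best average gain on them. The metric (mean gain) and the greedy rule are illustrative assumptions, not the OCD algorithm itself:

```python
def allocate_chunks(gain, chunk_size):
    # gain[u][c]: hypothetical gain of user u on subcarrier c.
    n_users, n_sc = len(gain), len(gain[0])
    owner = [None] * n_sc
    for start in range(0, n_sc, chunk_size):
        chunk = range(start, min(start + chunk_size, n_sc))
        # One decision per chunk instead of one per subcarrier.
        u = max(range(n_users),
                key=lambda u: sum(gain[u][c] for c in chunk) / len(chunk))
        for c in chunk:
            owner[c] = u
    return owner
```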
Ultra-dense networking is widely accepted as a promising enabling technology to realize power- and spectrum-efficient communications in future 5G communication systems. Although joint resource allocation schemes promise huge performance improvement at the cost of cooperation among base stations, the large numbers of user equipments and base stations make jointly optimizing the available resources very challenging and even prohibitive. How to decompose the resource allocation problem is a critical issue. In this paper, we exploit factor graphs to design a distributed resource allocation algorithm for ultra-dense networks, which consists of power allocation, subcarrier allocation and cell association. The proposed factor graph based distributed algorithm can decompose the joint optimization problem of resource allocation into a series of low-complexity subproblems with much lower dimensionality, and the original optimization problem can be efficiently solved by solving these subproblems iteratively. In addition, with the proposed algorithm the amount of information exchanged between the resulting subproblems is also reduced. The proposed distributed algorithm can be understood as solving a high-dimensional optimization problem in a soft manner, which is much preferred in practical scenarios. Finally, the performance of the proposed low-complexity distributed algorithm is evaluated by several numerical results.
With the deployment of ultra-dense low earth orbit (LEO) satellite constellations, the LEO satellite access network (LEO-SAN) is envisioned to achieve global Internet coverage. Meanwhile, civil aviation communications have increased dramatically, especially for providing airborne Internet services. However, because service demands and onboard LEO resources vary over time and space, satellite-aircraft access and service management in ultra-dense LEO satellite networks (UDLSN) poses huge challenges. In this paper, we propose a deep reinforcement learning-based approach for ultra-dense LEO satellite-aircraft access and service management. Firstly, we develop an airborne Internet architecture based on UDLSN and design a management mechanism that includes medium earth orbit satellites to guarantee lightweight management. Secondly, considering latency-sensitive and latency-tolerant services, we formulate the problem of satellite-aircraft access and service management for civil aviation to ensure service continuity. Finally, we propose a proximal policy optimization-based access and service management algorithm to solve the formulated problem. Simulation results demonstrate the convergence and effectiveness of the proposed algorithm, which satisfies service continuity when applied to the UDLSN.
The friendship paradox states that individuals are likely to have fewer friends than their friends do, on average. Despite its wide existence and appealing applications in real social networks, the mathematical understanding of the friendship paradox is very limited. Only a few works provide theoretical evidence of the single-step and multi-step friendship paradoxes, given that the neighbors of interest are one-hop and multi-hop away from the target node. However, they consider non-evolving networks, as opposed to the topology of real social networks that are constantly growing over time. We are thus motivated to present a first look into the friendship paradox in evolving networks, where newly added nodes preferentially attach themselves to those with higher degrees. Our analytical verification of both single-step and multi-step friendship paradoxes in evolving networks, along with comparison to the non-evolving counterparts, discloses that "the friendship paradox is even more paradoxical in evolving networks", primarily from three aspects: 1) we demonstrate a strengthened effect of the single-step friendship paradox in evolving networks, with a larger probability (more than 0.8) of a random node's neighbors having a higher average degree than the random node itself; 2) we unravel the higher effectiveness of the multi-step friendship paradox in seeking influential nodes in evolving networks, as the rate of reaching the max-degree node can be improved by a factor of at least Θ(t^(2/3)), with t being the network size; 3) we empirically verify our findings through both synthetic and real datasets, which show high agreement of results and consolidate the reasonability of the evolving model for real social networks.
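The single-step paradox is easy to check on a concrete graph. In the small hand-built example below (one hub plus low-degree nodes, chosen only for illustration), every non-hub node has fewer friends than its friends have on average:

```python
# Adjacency of a small undirected graph: node 1 is a hub.
adj = {
    1: {2, 3, 4, 5},
    2: {1}, 3: {1}, 4: {1, 5}, 5: {1, 4},
}

def mean_neighbor_degree(v):
    # Average degree over v's neighbors.
    return sum(len(adj[u]) for u in adj[v]) / len(adj[v])

# Nodes whose friends have a higher average degree than they do.
paradox = [v for v in adj if mean_neighbor_degree(v) > len(adj[v])]
```

Here 4 of the 5 nodes experience the paradox, consistent with the "more than 0.8" probability the abstract reports for evolving (preferential-attachment) networks.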
A Wireless Sensor Network (WSN) comprises a set of interconnected, compact, autonomous, and resource-constrained sensor nodes that are wirelessly linked to monitor and gather data from the physical environment. WSNs are commonly used in various applications such as environmental monitoring, surveillance, healthcare, agriculture, and industrial automation. Despite the benefits of WSNs, energy efficiency remains a challenging problem that needs to be addressed. Clustering and routing can be considered effective solutions to accomplish energy efficiency in WSNs. Recent studies have reported that metaheuristic algorithms can be applied to optimize cluster formation and routing decisions. This study introduces a new Northern Goshawk Optimization with boosted coati optimization algorithm for cluster-based routing (NGOBCO-CBR) method for WSN. The proposed NGOBCO-CBR method addresses the hot spot problem, uneven load balancing, and energy consumption in WSN. The NGOBCO-CBR technique comprises two major processes, namely NGO-based clustering and BCO-based routing. In the initial phase, the NGO-based clustering method is designed for cluster head (CH) selection and cluster construction using five input variables: residual energy (RE), node proximity, load balancing, network average energy, and distance to BS (DBS). Besides, the NGOBCO-CBR technique applies the BCO algorithm for the optimal selection of routes to the BS. The experimental results of the NGOBCO-CBR technique are studied under different scenarios, and the obtained results showcase the improved efficiency of the NGOBCO-CBR technique over recent approaches in terms of different measures.
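A common way to combine such inputs is a weighted fitness score per candidate CH. The sketch below is a hedged illustration of that pattern only: the weights, signs, and field names are made-up placeholders, not the NGO fitness function from the paper:

```python
# Placeholder weights for the five inputs named in the abstract.
WEIGHTS = {"residual_energy": 0.35, "proximity": 0.20, "avg_energy": 0.15,
           "load": 0.15, "dist_to_bs": 0.15}

def ch_score(node):
    # Higher residual energy, proximity and network average energy favor
    # a candidate; heavy load and distance to the BS count against it.
    return (WEIGHTS["residual_energy"] * node["residual_energy"]
            + WEIGHTS["proximity"] * node["proximity"]
            + WEIGHTS["avg_energy"] * node["avg_energy"]
            - WEIGHTS["load"] * node["load"]
            - WEIGHTS["dist_to_bs"] * node["dist_to_bs"])

def pick_cluster_head(nodes):
    return max(nodes, key=ch_score)
```

In the actual method these weights would be evolved by the metaheuristic rather than fixed by hand.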
This paper studies finite-time internal synchronization and external synchronization (hybrid synchronization) for duplex heterogeneous complex networks under time-varying intermittent control. Few studies address hybrid synchronization of heterogeneous duplex complex networks. Therefore, we study the finite-time hybrid synchronization of heterogeneous duplex networks, employing time-varying intermittent control to drive the duplex heterogeneous complex networks to achieve hybrid synchronization in finite time. To be specific, the switching frequency of the controllers can change with time by devising a Lyapunov function and a boundary function, and internal synchronization and external synchronization are achieved simultaneously in finite time. Finally, numerical examples are presented to illustrate the validity of the theoretical results.
The low Earth orbit (LEO) satellite networks have outstanding advantages such as a wide coverage area and not being limited by the geographic environment, so they can provide a broader range of communication services and have become an essential supplement to the terrestrial network. However, the dynamic changes and uneven distribution of satellite network traffic inevitably bring challenges to multipath routing. Even worse, the harsh space environment often leads to incomplete collection of the network state data needed for routing decision-making, which further complicates this challenge. To address this problem, this paper proposes a state-incomplete intelligent dynamic multipath routing algorithm (SIDMRA) to maximize network efficiency even with incomplete state data as input. Specifically, we model the multipath routing problem as a Markov decision process (MDP) and then combine the deep deterministic policy gradient (DDPG) and the K shortest paths (KSP) algorithm to solve for the optimal multipath routing policy. We use the temporal correlation of the satellite network state to fit the incomplete state data and then use a message passing neural network (MPNN) for data enhancement. Simulation results show that the proposed algorithm outperforms baseline algorithms regarding average end-to-end delay and packet loss rate, and performs stably under certain missing rates of state data.
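The KSP building block can be sketched compactly. Production KSP implementations use Yen's algorithm; the version below instead enumerates all simple paths by depth-first search and keeps the k cheapest, which is only reasonable for small per-snapshot topology graphs and is an illustrative assumption here:

```python
def k_shortest_paths(edges, src, dst, k):
    # edges: {u: {v: cost}} directed adjacency with positive costs.
    found = []

    def dfs(node, path, cost):
        if node == dst:
            found.append((cost, path))
            return
        for nxt, w in edges.get(node, {}).items():
            if nxt not in path:          # keep paths simple (loop-free)
                dfs(nxt, path + [nxt], cost + w)

    dfs(src, [src], 0)
    # Sort candidate paths by total cost and keep the k best.
    return [p for _, p in sorted(found)[:k]]
```

In the routing scheme, the DDPG agent would then split traffic over these k candidate paths.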
Research on the self-similarity of multilayer networks is scarce compared to the extensive research conducted on the dynamics of these networks. In this paper, we use entropy to determine the edge weights in each sub-network, apply the degree–degree distance to unify the weight values of connecting edges between different sub-networks, and thereby unify the edges with different meanings in the multilayer network numerically. At this point, the multilayer network is compressed into a single-layer network, also known as the aggregated network. Furthermore, the self-similarity of the multilayer network is represented by analyzing the self-similarity of the aggregated network. The study of self-similarity was conducted on two classical fractal networks and a real-world multilayer network. The results show that multilayer networks exhibit more pronounced self-similarity, and that the intensity of self-similarity in multilayer networks can vary with the connection mode of the sub-networks.
Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed, which considers using neighboring satellites in the LEO satellite network to collaboratively process tasks generated by ground user terminals, effectively improving resource utilization efficiency. Additionally, the optimization objective of minimizing the system's task computation offloading delay and energy consumption is established and decoupled into two sub-problems. In terms of computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal allocation of computational resources. To handle the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision by iteration. Simulation results show that the ICCO algorithm can effectively reduce the delay and energy consumption.
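To show the flavor of a Lagrange-multiplier closed form for the resource-allocation subproblem, consider a toy version (not the paper's exact model): minimize the total processing delay Σᵢ cᵢ/fᵢ over CPU shares fᵢ subject to Σᵢ fᵢ = F. Setting the Lagrangian's gradient to zero gives fᵢ ∝ √cᵢ:

```python
import math

def optimal_cycles(tasks, F):
    # tasks: cycle demands c_i; F: total CPU cycles per second.
    # Stationarity of L = sum(c_i/f_i) + lam*(sum(f_i) - F) gives
    # c_i / f_i**2 = lam for all i, hence f_i = F*sqrt(c_i)/sum_j sqrt(c_j).
    s = sum(math.sqrt(c) for c in tasks)
    return [F * math.sqrt(c) / s for c in tasks]
```

Heavier tasks get more than a proportional-to-c share would naively suggest per unit of delay saved, and the closed form beats an equal split.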
To support ubiquitous communication and enhance other 6G applications, the Space-Air-Ground Integrated Network (SAGIN) has become a research hotspot. Traditionally, satellite-ground fusion technologies integrate network entities from the space, aerial, and terrestrial domains. However, they face challenges such as spectrum scarcity and inefficient satellite handover. This paper explores a Channel-Aware Handover Management (CAHM) strategy in SAGIN for data allocation. Specifically, CAHM utilizes the data receiving capability of Low Earth Orbit (LEO) satellites, considering satellite-ground distance, free-space path loss, and channel gain. Furthermore, CAHM assesses LEO satellite data forwarding capability using the signal-to-noise ratio, link duration and buffer queue length. Then, CAHM applies historical data on LEO satellite transmission successes and failures to effectively reduce the overall interruption ratio. Simulation results show that CAHM outperforms baseline algorithms in terms of delivery ratio, latency, and interruption ratio.
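A forwarding-capability assessment from the three quantities the abstract lists might look like the sketch below. The normalizations (a logistic curve in SNR, a 60 s saturation for link duration) and the multiplicative combination are assumptions for illustration, not CAHM's actual formula:

```python
import math

def forwarding_score(snr_db, link_duration_s, queue_len, queue_cap):
    # Logistic mapping of SNR (dB) into (0, 1), centered at 10 dB.
    snr_term = 1 / (1 + math.exp(-(snr_db - 10) / 3))
    # Longer remaining link duration is better, saturating at 60 s.
    time_term = min(link_duration_s / 60.0, 1.0)
    # A fuller onboard buffer reduces forwarding capability.
    queue_term = 1 - queue_len / queue_cap
    return snr_term * time_term * queue_term
```

A handover manager would prefer the candidate satellite with the highest score.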
As the first stage of the quantum Internet, quantum key distribution (QKD) networks hold the promise of providing long-term security for diverse users. Most existing QKD networks have been constructed based on independent QKD protocols, and they commonly rely on the deployment of single-protocol trusted relay chains for long reach. Driven by the evolution of QKD protocols, large-scale QKD networking is expected to migrate from a single-protocol to a multi-protocol paradigm, during which some useful evolutionary elements for the later stages of the quantum Internet may be incorporated. In this work, we delve into a pivotal technique for large-scale QKD networking, namely multi-protocol relay chaining. A multi-protocol relay chain is established by connecting a set of trusted/untrusted relays relying on multiple QKD protocols between a pair of QKD nodes. The structures of diverse multi-protocol relay chains are described, based on which the associated model is formulated and policies are defined for the deployment of multi-protocol relay chains. Furthermore, we propose three multi-protocol relay chaining heuristics. Numerical simulations indicate that the designed heuristics can effectively reduce the number of trusted relays deployed and enhance the average security level versus the commonly used single-protocol trusted relay chaining methods on backbone network topologies.
As a key mode of transportation, urban metro networks have significantly enhanced urban traffic environments and travel efficiency, making the identification of critical stations within these networks increasingly essential. This study presents a novel integrated topological-functional (ITF) algorithm for identifying critical nodes, combining topological metrics such as K-shell decomposition, node information entropy, and neighbor overlapping interaction with the functional attributes of passenger flow operations, while also considering the coupling effects between metro and bus networks. Using the Chengdu metro network as a case study, the effectiveness of the algorithm under different conditions is validated. The results indicate significant differences in passenger flow patterns between working and non-working days, leading to varying sets of critical nodes across these scenarios. Moreover, the ITF algorithm demonstrates a marked improvement in the accuracy of critical node identification compared to existing methods. This conclusion is supported by analysis of the changes in the overall network structure and relative global operational efficiency following targeted attacks on the identified critical nodes. The findings provide valuable insight into urban transportation planning, offering theoretical and practical guidance for improving metro network safety and resilience.
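Of the topological ingredients named above, K-shell decomposition is the most self-contained to sketch: nodes of degree at most k are iteratively peeled off, and each node's shell index records the round in which it fell. This is the standard decomposition, shown here on a generic adjacency dict (how ITF weighs the shell index against the other metrics is not sketched):

```python
def k_shell(adj):
    # adj: {node: set(neighbors)} for an undirected graph.
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    shell, k = {}, 0
    while alive:
        k += 1
        while True:
            # Peel every remaining node of (current) degree <= k.
            peel = [v for v in alive if deg[v] <= k]
            if not peel:
                break
            for v in peel:
                shell[v] = k
                alive.remove(v)
                for u in adj[v]:
                    if u in alive:
                        deg[u] -= 1
    return shell
```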
Physics informed neural networks (PINNs) are a deep learning approach designed to solve partial differential equations (PDEs). Accurately learning the initial conditions is crucial when employing PINNs to solve PDEs. However, simply adjusting weights and imposing hard constraints may not always lead to better learning of the initial conditions; sometimes it even makes it difficult for the neural networks to converge. To enhance the accuracy of PINNs in learning the initial conditions, this paper proposes a novel strategy named causally enhanced initial conditions (CEICs). This strategy works by embedding a new loss in the loss function: the loss is constructed from the derivative of the initial condition and the derivative of the neural network at the initial condition. Furthermore, to respect causality in learning the derivative, a novel causality coefficient is introduced for the training when selecting multiple derivatives. Additionally, because CEICs can provide more accurate pseudo-labels in the first subdomain, they are compatible with the temporal-marching strategy. Experimental results demonstrate that CEICs outperform hard constraints and improve the overall accuracy of pre-training PINNs. For the 1D Korteweg–de Vries, reaction and convection equations, the CEIC method proposed in this paper reduces the relative error by at least 60% compared to previous methods.
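The shape of the extra CEIC loss term can be illustrated without a deep-learning framework: compare the spatial derivative of the network output at t = 0 with the derivative of the known initial condition. In a real PINN both derivatives come from automatic differentiation; the grid-based central-difference version below is only a stand-in to show the loss being added:

```python
def fd_derivative(values, dx):
    # Central finite difference at the interior grid points.
    return [(values[i + 1] - values[i - 1]) / (2 * dx)
            for i in range(1, len(values) - 1)]

def ceic_loss(u_pred_t0, u0, dx):
    # Mean squared mismatch between d/dx of the predicted initial slice
    # and d/dx of the true initial condition.
    dp, d0 = fd_derivative(u_pred_t0, dx), fd_derivative(u0, dx)
    return sum((a - b) ** 2 for a, b in zip(dp, d0)) / len(dp)
```

A prediction matching the initial condition's slope incurs zero extra loss; a constant slope error of 0.1 costs 0.01.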
The low-earth-orbit (LEO) satellite network has become a critical component of the satellite-terrestrial integrated network (STIN) due to its superior signal quality and minimal communication latency. However, the highly dynamic nature of LEO satellites leads to limited and rapidly varying contact time between them and Earth stations (ESs), making it difficult to download massive communication and remote sensing data in time within the limited contact window. To address this challenge in heterogeneous satellite networks with coexisting geostationary-earth-orbit (GEO) and LEO satellites, this paper proposes a dynamic collaborative inter-satellite data download strategy to optimize the long-term weighted energy consumption and data downloads within the constraints of on-board power, backlog stability and time-varying contact. Specifically, Lyapunov optimization theory is applied to transform the long-term stochastic optimization problem, subject to time-varying contact time and on-board power constraints, into multiple deterministic single-time-slot problems, based on which online distributed algorithms are developed to enable each satellite to independently obtain the transmit power allocation and data processing decisions in closed form. Finally, simulation results demonstrate the superiority of the proposed scheme over benchmarks, e.g., achieving asymptotic optimality of the weighted energy consumption and data downloads, while maintaining stability of the on-board backlog.
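The per-slot decision produced by the Lyapunov transformation can be caricatured as a drift-plus-penalty minimization: each slot, pick the transmit power that minimizes V·(energy cost) − Q·(data served), where Q is the current backlog and V weights energy against throughput. The log rate law and the grid search below are toy assumptions, not the paper's closed-form solution:

```python
import math

def rate(p, gain=1.0):
    # Stand-in rate law: log2(1 + gain * power).
    return math.log2(1 + gain * p)

def choose_power(Q, V, p_max=10.0, steps=1000):
    # Drift-plus-penalty: larger backlog Q pushes toward more power,
    # larger V (energy weight) pushes toward less.
    grid = [p_max * i / steps for i in range(steps + 1)]
    return min(grid, key=lambda p: V * p - Q * rate(p))
```

The qualitative behavior matches the scheme's intent: a congested satellite (large Q) transmits at full power, an idle one saves energy.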
Funding (CARA clustering study): performed in the project "Research on the Hierarchical Interference Elimination Technology for UDN Based on MIMO" supported by the Henan Scientific and Technological Research Project (172102210023), and in "Research on clustering and frequency band allocation in JT-CoMP" supported by the Department of Education of Henan Province (19A510013).
Funding (self-backhaul UDN study): supported by NSFC under Grant 61471303 and the EU FP7 QUICK project under Grant PIRSES-GA-2013-612652.
Abstract: In future 5G networks, a key scenario is dense user distribution over some area, such as offices, urban apartments, shopping malls, stadiums, etc., where the requirement for user-experienced rate at the cell edge can be
Funding: This work is sponsored in part by the National Key R&D Program of China (No. 2020YFB1806605), by the National Natural Science Foundation of China (No. 62022049, No. 61871254, No. 62111530197), by the Open Research Fund Program of the Beijing National Research Center for Information Science and Technology, and by Hitachi Ltd.
Abstract: Blockage and imperfect beam alignment are two principal difficulties for directional transmissions in high-frequency bands. In this paper, the coverage performance of downlink directional transmissions in ultra-dense networks is analyzed through stochastic geometry, taking beam alignment error and link blockage into consideration. Numerical experiments demonstrate that a narrower beam leads to higher coverage probability under perfect beam alignment, but this is not the case under imperfect beam alignment. Therefore, the optimal beamwidth that maximizes the coverage probability is characterized, and a closed-form approximation of the optimal beamwidth under imperfect beam alignment is derived accordingly. Furthermore, the optimal beamwidth is a monotonically increasing function of the standard deviation of the beam alignment error, and a monotonically decreasing function of the beamwidth of the corresponding communication end, indicating that the beamwidths of the communication pair ought to be jointly designed.
Abstract: Cell discontinuous transmission (Cell DTx) is a key technology for mitigating inter-cell interference (ICI) in ultra-dense networks (UDNs). The aim of this work is to understand the impact of Cell DTx on the physical-layer sum rates of small base stations (SBSs) and on link-layer quality-of-service (QoS) performance in multiuser UDNs. We develop a cross-layer framework for capacity analysis in multiuser UDNs with Cell DTx. In particular, we first extend the traditional one-dimensional effective capacity model to a new multidimensional effective capacity model to derive the sum rate and the effective capacity. Moreover, we propose a new iterative bisection search algorithm capable of approximating QoS performance, and prove that this algorithm converges to a unique QoS exponent vector. Finally, we apply the framework to the round-robin and max-C/I scheduling policies. Simulation results show that our framework accurately approximates 1) the queue length distribution, 2) the delay distribution, and 3) the sum rates under the above two scheduling policies, and further show that with Cell DTx, systems achieve approximately 30% higher sum rate and 35% smaller average delay than in full-buffer scenarios.
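A bisection search over a QoS exponent can be sketched for a single queue. The effective capacity EC(θ) = -(1/θ)·ln E[e^(-θR)] is decreasing in θ, so bisection finds the θ at which it meets a target arrival rate. The service-rate distribution below is an assumption, not the paper's multiuser model:

```python
import math
import random

def effective_capacity(theta, rate_samples):
    """EC(theta) = -(1/theta) * log E[exp(-theta * R)], estimated from
    per-slot service-rate samples."""
    m = sum(math.exp(-theta * r) for r in rate_samples) / len(rate_samples)
    return -math.log(m) / theta

def solve_qos_exponent(arrival_rate, rate_samples, lo=1e-6, hi=10.0, iters=60):
    """Bisection for theta* with EC(theta*) = arrival rate
    (EC is monotonically decreasing in theta)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if effective_capacity(mid, rate_samples) > arrival_rate:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
rates = [random.uniform(1.0, 3.0) for _ in range(10000)]  # illustrative per-slot rates
theta = solve_qos_exponent(arrival_rate=1.8, rate_samples=rates)
print(theta)  # a larger theta corresponds to a stricter delay guarantee
```

The paper's algorithm searches over a vector of such exponents, one per user; the scalar case above shows the monotonicity that makes bisection applicable.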
Funding: Supported in part by the Beijing Natural Science Foundation (4152047), the 863 Project No. 2014AA01A701, the 111 Project of China under Grant B14010, and the China Mobile Research Institute under Grant [2014] 451.
Abstract: In this paper, a distributed chunk-based optimization algorithm is proposed for resource allocation in broadband ultra-dense small cell networks. Based on the proposed algorithm, the power and subcarrier allocation problems are jointly optimized. To make the resource allocation suitable for large-scale networks, the optimization problem is first decomposed using an effective decomposition method, the optimal condition decomposition (OCD) algorithm. Furthermore, to reduce implementation complexity, the subcarriers are divided into chunks and are allocated chunk by chunk. The simulation results show that the proposed algorithm achieves better performance than the uniform power allocation scheme and the Lagrange relaxation method, and can therefore strike a balance between the complexity and performance of multi-carrier ultra-dense networks.
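Allocating subcarriers chunk by chunk, rather than individually, is the complexity-reduction idea here; a minimal greedy sketch follows (the gain model and chunk size are assumptions, and the paper's OCD-based joint power/subcarrier method is considerably more elaborate):

```python
import random

def chunk_allocate(gains, chunk_size):
    """Assign subcarriers chunk by chunk: each chunk goes to the user with
    the highest average channel gain over that chunk (greedy sketch)."""
    n_users, n_sc = len(gains), len(gains[0])
    assignment = {}
    for start in range(0, n_sc, chunk_size):
        chunk = range(start, min(start + chunk_size, n_sc))
        best = max(range(n_users),
                   key=lambda u: sum(gains[u][k] for k in chunk) / len(chunk))
        assignment[start // chunk_size] = best
    return assignment

random.seed(1)
gains = [[random.random() for _ in range(12)] for _ in range(3)]  # 3 users, 12 subcarriers
alloc = chunk_allocate(gains, chunk_size=4)
print(alloc)  # chunk index -> user index
```

With chunk size k, the number of allocation decisions drops by a factor of k, which is exactly the complexity/performance trade-off the abstract describes.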
Funding: Supported by the China Mobile Research Institute under Grant [2014] 451, the National Natural Science Foundation of China under Grant No. 61176027, the Beijing Natural Science Foundation (4152047), the 863 Project No. 2014AA01A701, and the 111 Project of China under Grant B14010.
Abstract: Ultra-dense networking is widely accepted as a promising enabling technology for realizing power- and spectrum-efficient communications in future 5G communication systems. Although joint resource allocation schemes promise huge performance improvements at the cost of cooperation among base stations, the large numbers of user equipments and base stations make jointly optimizing the available resources very challenging, even prohibitive. How to decompose the resource allocation problem is a critical issue. In this paper, we exploit factor graphs to design a distributed resource allocation algorithm for ultra-dense networks, consisting of power allocation, subcarrier allocation, and cell association. The proposed factor-graph-based distributed algorithm decomposes the joint resource allocation problem into a series of low-complexity subproblems with much lower dimensionality, so the original optimization problem can be solved efficiently by solving these subproblems iteratively. In addition, the algorithm reduces the amount of information exchanged between the resulting subproblems. The proposed distributed algorithm can be understood as solving a high-dimensional optimization problem in a soft manner, which is much preferred in practical scenarios. Finally, the performance of the proposed low-complexity distributed algorithm is evaluated through several numerical results.
Funding: Supported in part by the National Key R&D Program of China under Grant 2020YFB1806104, in part by the Innovation and Entrepreneurship of Jiangsu Province High-level Talent Program, in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), and in part by support from Huawei.
Abstract: With the deployment of ultra-dense low earth orbit (LEO) satellite constellations, the LEO satellite access network (LEO-SAN) is envisioned to achieve global Internet coverage. Meanwhile, civil aviation communications have increased dramatically, especially for providing airborne Internet services. However, because service demands and onboard LEO resources vary over time and space, satellite-aircraft access and service management in ultra-dense LEO satellite networks (UDLSN) pose huge challenges. In this paper, we propose a deep reinforcement learning-based approach for ultra-dense LEO satellite-aircraft access and service management. First, we develop an airborne Internet architecture based on UDLSN and design a management mechanism that includes medium earth orbit satellites to guarantee lightweight management. Second, considering latency-sensitive and latency-tolerant services, we formulate the problem of satellite-aircraft access and service management for civil aviation to ensure service continuity. Finally, we propose a proximal policy optimization-based access and service management algorithm to solve the formulated problem. Simulation results demonstrate the convergence and effectiveness of the proposed algorithm, which satisfies service continuity when applied to the UDLSN.
Funding: Supported by NSF China (Nos. 61960206002, 62020106005, 42050105, 62061146002) and the Shanghai Pilot Program for Basic Research – Shanghai Jiao Tong University.
Abstract: The friendship paradox states that individuals are likely to have fewer friends than their friends do, on average. Despite its wide presence and appealing applications in real social networks, the mathematical understanding of the friendship paradox is very limited. Only a few works provide theoretical evidence of the single-step and multi-step friendship paradoxes, where the neighbors of interest are one hop and multiple hops away from the target node, and they consider non-evolving networks, in contrast to the topology of real social networks, which grow constantly over time. We are thus motivated to present a first look into the friendship paradox in evolving networks, where newly added nodes preferentially attach themselves to those with higher degrees. Our analytical verification of both the single-step and multi-step friendship paradoxes in evolving networks, along with comparison to the non-evolving counterparts, discloses that "the friendship paradox is even more paradoxical in evolving networks," primarily in three aspects: 1) we demonstrate a strengthened single-step friendship paradox in evolving networks, with a larger probability (more than 0.8) of a random node's neighbors having a higher average degree than the random node itself; 2) we unravel the higher effectiveness of the multi-step friendship paradox in seeking influential nodes in evolving networks, as the rate of reaching the maximum-degree node can be improved by a factor of at least Θ(t^(2/3)), with t being the network size; 3) we empirically verify our findings through both synthetic and real datasets, which show strong agreement with the analysis and support the reasonableness of the evolving model for real social networks.
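The single-step paradox in an evolving, preferential-attachment network can be checked empirically with a short simulation; the growth model below is a simple Barabási–Albert-style sketch, not the paper's exact model:

```python
import random

def ba_graph(n, m, seed=42):
    """Barabási–Albert-style growth: each new node attaches to m distinct
    existing nodes chosen with probability proportional to degree."""
    random.seed(seed)
    edges, targets = [], list(range(m))  # start from m seed nodes
    repeated = []                        # node list weighted by degree
    for v in range(m, n):
        chosen = set()
        while len(chosen) < m:
            pool = repeated if repeated else targets
            chosen.add(random.choice(pool))
        for u in chosen:
            edges.append((v, u))
            repeated.extend([v, u])
    return edges

def paradox_fraction(edges):
    """Fraction of nodes whose neighbours' mean degree exceeds their own."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    deg = {v: len(ns) for v, ns in adj.items()}
    hit = sum(1 for v, ns in adj.items()
              if sum(deg[u] for u in ns) / len(ns) > deg[v])
    return hit / len(adj)

frac = paradox_fraction(ba_graph(n=2000, m=3))
print(frac)  # typically well above 0.5 under preferential attachment
```

On such heavy-tailed degree distributions the fraction is usually far above one half, consistent with the "more than 0.8" probability reported in the abstract.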
Abstract: A Wireless Sensor Network (WSN) comprises a set of interconnected, compact, autonomous, and resource-constrained sensor nodes that are wirelessly linked to monitor and gather data from the physical environment. WSNs are commonly used in applications such as environmental monitoring, surveillance, healthcare, agriculture, and industrial automation. Despite the benefits of WSNs, energy efficiency remains a challenging problem that needs to be addressed. Clustering and routing are effective means of achieving energy efficiency in WSNs, and recent studies have reported that metaheuristic algorithms can be applied to optimize cluster formation and routing decisions. This study introduces a new Northern Goshawk Optimization with boosted coati optimization algorithm for cluster-based routing (NGOBCO-CBR) in WSNs. The proposed NGOBCO-CBR method addresses the hot spot problem, uneven load balancing, and energy consumption in WSNs. The NGOBCO-CBR technique comprises two major processes: NGO-based clustering and BCO-based routing. In the initial phase, the NGO-based clustering method performs cluster head (CH) selection and cluster construction using five input variables: residual energy (RE), node proximity, load balancing, network average energy, and distance to the base station (DBS). In addition, the NGOBCO-CBR technique applies the BCO algorithm for the optimal selection of routes to the BS. The experimental results of the NGOBCO-CBR technique are studied under different scenarios, and the obtained results showcase the improved efficiency of the NGOBCO-CBR technique over recent approaches in terms of different measures.
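The five-variable CH selection can be sketched as a weighted fitness score; the weights, the [0, 1] normalization, and the field names below are illustrative assumptions, whereas the paper tunes the selection with the NGO metaheuristic rather than fixed weights:

```python
def ch_fitness(node, weights=(0.3, 0.2, 0.2, 0.15, 0.15)):
    """Weighted cluster-head fitness over the five inputs named in the
    abstract: residual energy, node proximity, load balance, network
    average energy, and distance to the base station. The weights are
    illustrative assumptions, not tuned values."""
    w1, w2, w3, w4, w5 = weights
    return (w1 * node["residual_energy"]
            + w2 * (1.0 - node["proximity"])    # closer neighbours -> better
            + w3 * (1.0 - node["load"])         # lighter load -> better
            + w4 * node["network_avg_energy"]
            + w5 * (1.0 - node["dist_to_bs"]))  # inputs normalised to [0, 1]

nodes = [
    {"residual_energy": 0.9, "proximity": 0.2, "load": 0.3,
     "network_avg_energy": 0.6, "dist_to_bs": 0.4},
    {"residual_energy": 0.5, "proximity": 0.1, "load": 0.2,
     "network_avg_energy": 0.6, "dist_to_bs": 0.7},
]
best = max(range(len(nodes)), key=lambda i: ch_fitness(nodes[i]))
print(best)  # node 0 wins here, mainly on residual energy
```

A metaheuristic such as NGO would search over candidate CH sets scored by a fitness of this general shape instead of picking the single best node greedily.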
Funding: Project supported by the Jilin Provincial Science and Technology Development Plan (Grant No. 20220101137JC).
Abstract: This paper studies finite-time internal synchronization and external synchronization (hybrid synchronization) of duplex heterogeneous complex networks under time-varying intermittent control. Few studies have addressed the hybrid synchronization of heterogeneous duplex complex networks. We therefore study finite-time hybrid synchronization of heterogeneous duplex networks, employing time-varying intermittent control to drive the networks to hybrid synchronization in finite time. Specifically, by devising a Lyapunov function and a boundary function, the switching frequency of the controllers can change with time, and internal synchronization and external synchronization are achieved simultaneously in finite time. Finally, numerical examples are presented to illustrate the validity of the theoretical results.
Abstract: Low Earth orbit (LEO) satellite networks have outstanding advantages, such as wide coverage and independence from the geographic environment, and can provide a broader range of communication services, making them an essential supplement to terrestrial networks. However, the dynamic changes and uneven distribution of satellite network traffic inevitably challenge multipath routing. Even worse, the harsh space environment often leads to incomplete collection of the network state data needed for routing decisions, which further complicates the challenge. To address this problem, this paper proposes a state-incomplete intelligent dynamic multipath routing algorithm (SIDMRA) to maximize network efficiency even with incomplete state data as input. Specifically, we model the multipath routing problem as a Markov decision process (MDP) and then combine the deep deterministic policy gradient (DDPG) and the K shortest paths (KSP) algorithm to solve for the optimal multipath routing policy. We use the temporal correlation of the satellite network state to fit the incomplete state data and then use a message passing neural network (MPNN) for data enhancement. Simulation results show that the proposed algorithm outperforms baseline algorithms in terms of average end-to-end delay and packet loss rate, and performs stably under certain missing rates of state data.
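The K shortest paths (KSP) component can be sketched with a best-first enumeration of loopless paths; the toy topology and weights are assumptions, and a production KSP implementation would typically use Yen's algorithm instead of this exponential-worst-case search:

```python
import heapq

def k_shortest_paths(graph, src, dst, k):
    """K loopless shortest paths by best-first search over partial paths.
    Exponential in the worst case, so only a sketch for small topologies."""
    heap = [(0.0, [src])]
    found = []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == dst:
            found.append((cost, path))
            continue
        for nxt, w in graph.get(node, {}).items():
            if nxt not in path:  # keep paths loop-free
                heapq.heappush(heap, (cost + w, path + [nxt]))
    return found

# Toy inter-satellite topology (illustrative weights = link delays)
g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
paths = k_shortest_paths(g, "A", "D", k=3)
for cost, path in paths:
    print(cost, path)
```

In the abstract's setting, a learned policy would then distribute traffic over the k candidate paths rather than always using the single shortest one.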
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 61763009 and 72172025).
Abstract: Research on the self-similarity of multilayer networks is scarce compared with the extensive research conducted on their dynamics. In this paper, we use entropy to determine the edge weights within each sub-network and apply the degree-degree distance to unify the weight values of edges connecting different sub-networks, thereby numerically unifying edges with different meanings in the multilayer network. The multilayer network is thus compressed into a single-layer network, known as the aggregated network, and the self-similarity of the multilayer network is characterized by analyzing the self-similarity of the aggregated network. The study of self-similarity was conducted on two classical fractal networks and a real-world multilayer network. The results show that multilayer networks exhibit more pronounced self-similarity, and that the intensity of self-similarity in multilayer networks can vary with the connection mode of the sub-networks.
Funding: Supported in part by the Sub-Project of the National Key Research and Development Plan in 2020 (No. 2020YFC1511704), Beijing Information Science and Technology University (No. 2020KYNH212, No. 2021CGZH302), the Beijing Science and Technology Project (Grant No. Z211100004421009), and in part by the National Natural Science Foundation of China (Grant No. 62301058).
Abstract: Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed that uses neighboring satellites in the LEO network to collaboratively process tasks generated by ground user terminals, effectively improving resource utilization efficiency. Additionally, the optimization objective of minimizing the system's task computation offloading delay and energy consumption is established and decoupled into two sub-problems. For computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal allocation of computational resources. For the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision iteratively. Simulation results show that the ICCO algorithm can effectively reduce delay and energy consumption.
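The closed-form computational resource allocation obtained via the Lagrange multiplier method can be illustrated on the classic delay-minimization form: minimizing the total delay Σ c_i/f_i subject to Σ f_i = F gives f_i ∝ √c_i by the KKT conditions. A sketch under that assumed formulation (not necessarily the paper's exact objective):

```python
import math

def allocate_cycles(task_cycles, total_capacity):
    """Closed-form split of CPU capacity F that minimises total
    computation delay sum(c_i / f_i) subject to sum(f_i) = F:
    the KKT conditions give f_i proportional to sqrt(c_i)."""
    roots = [math.sqrt(c) for c in task_cycles]
    s = sum(roots)
    return [total_capacity * r / s for r in roots]

tasks = [1e9, 4e9, 9e9]  # cycles per task (illustrative)
f = allocate_cycles(tasks, total_capacity=6e9)
print(f)  # proportional to 1:2:3
```

Heavier tasks receive more than an equal share but less than a proportional share of the capacity, which is what the square-root rule encodes.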
Funding: Supported by the National Key Research and Development Program of China (2022YFE0139300), the Hubei Province Key Research and Development Program (2024BAB051), the Guangdong Basic and Applied Basic Research Foundation (2022B1515120067), and the Wuhan Key Research and Development Program (2024050702030136).
Abstract: To support ubiquitous communication and enhance other 6G applications, the Space-Air-Ground Integrated Network (SAGIN) has become a research hotspot. Traditionally, satellite-ground fusion technologies integrate network entities from the space, aerial, and terrestrial domains. However, they face challenges such as spectrum scarcity and inefficient satellite handover. This paper explores a Channel-Aware Handover Management (CAHM) strategy in SAGIN for data allocation. Specifically, CAHM utilizes the data receiving capability of Low Earth Orbit (LEO) satellites, considering satellite-ground distance, free-space path loss, and channel gain. Furthermore, CAHM assesses LEO satellite data forwarding capability using the signal-to-noise ratio, link duration, and buffer queue length. CAHM then applies historical data on LEO satellite transmission successes and failures to effectively reduce the overall interruption ratio. Simulation results show that CAHM outperforms baseline algorithms in terms of delivery ratio, latency, and interruption ratio.
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 62201276, 62350001, U22B2026, and 62471248), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0300701), the Key R&D Program (Industry Foresight and Key Core Technologies) of Jiangsu Province (Grant No. BE2022071), and the Natural Science Research of Jiangsu Higher Education Institutions of China (Grant No. 22KJB510007).
Abstract: As the first stage of the quantum Internet, quantum key distribution (QKD) networks hold the promise of providing long-term security for diverse users. Most existing QKD networks have been constructed based on independent QKD protocols, and they commonly rely on the deployment of single-protocol trusted relay chains for long reach. Driven by the evolution of QKD protocols, large-scale QKD networking is expected to migrate from a single-protocol to a multi-protocol paradigm, during which useful evolutionary elements for the later stages of the quantum Internet may be incorporated. In this work, we delve into a pivotal technique for large-scale QKD networking: multi-protocol relay chaining. A multi-protocol relay chain is established by connecting a set of trusted/untrusted relays, relying on multiple QKD protocols, between a pair of QKD nodes. The structures of diverse multi-protocol relay chains are described, based on which the associated model is formulated and policies are defined for the deployment of multi-protocol relay chains. Furthermore, we propose three multi-protocol relay chaining heuristics. Numerical simulations indicate that the designed heuristics can effectively reduce the number of trusted relays deployed and enhance the average security level compared with the commonly used single-protocol trusted relay chaining methods on backbone network topologies.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 71971150), the Project of the Research Center for System Sciences and Enterprise Development (Grant No. Xq16B05), and the Fundamental Research Funds for the Central Universities of China (Grant No. SXYPY202313).
Abstract: As a key mode of transportation, urban metro networks have significantly enhanced urban traffic environments and travel efficiency, making the identification of critical stations within these networks increasingly essential. This study presents a novel integrated topological-functional (ITF) algorithm for identifying critical nodes, combining topological metrics such as K-shell decomposition, node information entropy, and neighbor-overlapping interaction with the functional attributes of passenger flow operations, while also considering the coupling effects between metro and bus networks. Using the Chengdu metro network as a case study, the effectiveness of the algorithm under different conditions is validated. The results indicate significant differences in passenger flow patterns between working and non-working days, leading to varying sets of critical nodes across these scenarios. Moreover, the ITF algorithm demonstrates a marked improvement in the accuracy of critical node identification compared with existing methods. This conclusion is supported by the analysis of changes in the overall network structure and relative global operational efficiency following targeted attacks on the identified critical nodes. The findings provide valuable insight into urban transportation planning, offering theoretical and practical guidance for improving metro network safety and resilience.
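K-shell decomposition, one of the topological metrics the ITF algorithm combines, assigns each station the core level at which iterative minimum-degree peeling removes it. A self-contained sketch on a toy graph (the adjacency data is illustrative):

```python
def k_shell(adj):
    """Standard k-core decomposition: repeatedly peel nodes of minimum
    degree; a node's shell index is the level k at which it is removed."""
    deg = {v: len(ns) for v, ns in adj.items()}
    remaining = set(adj)
    shell, k = {}, 0
    while remaining:
        k = max(k, min(deg[v] for v in remaining))
        queue = [v for v in remaining if deg[v] <= k]
        while queue:
            v = queue.pop()
            if v not in remaining:
                continue
            shell[v] = k
            remaining.remove(v)
            for u in adj[v]:
                if u in remaining:
                    deg[u] -= 1
                    if deg[u] <= k:
                        queue.append(u)
    return shell

# Toy network: a triangle (0, 1, 2) with a pendant station 3
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
print(k_shell(adj))  # {3: 1, 0: 2, 1: 2, 2: 2} up to ordering
```

The ITF approach would then combine such shell indices with functional scores (e.g. passenger flow) rather than ranking stations by topology alone.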
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 1217211 and 12372244).
Abstract: Physics-informed neural networks (PINNs) are a deep learning approach designed to solve partial differential equations (PDEs). Accurately learning the initial conditions is crucial when employing PINNs to solve PDEs. However, simply adjusting weights and imposing hard constraints do not always lead to better learning of the initial conditions; sometimes they even make it difficult for the neural network to converge. To enhance the accuracy of PINNs in learning the initial conditions, this paper proposes a novel strategy named causally enhanced initial conditions (CEICs). This strategy works by embedding a new term in the loss function, constructed from the derivative of the initial condition and the derivative of the neural network at the initial condition. Furthermore, to respect causality when learning the derivative, a novel causality coefficient is introduced into the training when multiple derivatives are selected. Additionally, because CEICs provide more accurate pseudo-labels in the first subdomain, they are compatible with the temporal-marching strategy. Experimental results demonstrate that CEICs outperform hard constraints and improve the overall accuracy of pre-training PINNs. For the 1D Korteweg-de Vries, reaction, and convection equations, the CEIC method proposed in this paper reduces the relative error by at least 60% compared with previous methods.
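The core CEIC idea of matching the derivative of the initial condition as well as its value can be sketched as a loss term. Finite differences below stand in for a PINN's automatic derivatives, and the arrays, equal weighting, and function name are illustrative assumptions:

```python
import numpy as np

def ceic_style_ic_loss(u_pred, u_true, x):
    """Initial-condition loss augmented with a derivative term, in the
    spirit of CEICs: penalise mismatch in both u(x, 0) and du/dx(x, 0).
    np.gradient stands in for the network's automatic derivatives."""
    mse_value = np.mean((u_pred - u_true) ** 2)
    du_pred = np.gradient(u_pred, x)
    du_true = np.gradient(u_true, x)
    mse_deriv = np.mean((du_pred - du_true) ** 2)
    return mse_value + mse_deriv

x = np.linspace(0, 2 * np.pi, 200)
u0 = np.sin(x)                                    # true initial condition
good = ceic_style_ic_loss(np.sin(x), u0, x)       # exact fit -> zero loss
bad = ceic_style_ic_loss(np.sin(x + 0.3), u0, x)  # penalised in value AND slope
print(good, bad)
```

A prediction with the right values but the wrong slope would pass a plain MSE check yet be penalised here, which is the motivation for the derivative term; the paper's causality coefficient additionally orders how multiple derivatives enter training.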
Funding: Supported by the National Natural Science Foundation of China under Grant 62371098, the National Key Laboratory of Wireless Communications Foundation under Grant IFN20230203, and the National Key Research and Development Program of China under Grant 2021YFB2900404.
Abstract: The low-earth-orbit (LEO) satellite network has become a critical component of the satellite-terrestrial integrated network (STIN) due to its superior signal quality and minimal communication latency. However, the highly dynamic nature of LEO satellites leads to limited and rapidly varying contact time between them and Earth stations (ESs), making it difficult to download massive communication and remote sensing data within the limited time window. To address this challenge in heterogeneous satellite networks with coexisting geostationary-earth-orbit (GEO) and LEO satellites, this paper proposes a dynamic collaborative inter-satellite data download strategy to optimize the long-term weighted energy consumption and data downloads within the constraints of on-board power, backlog stability, and time-varying contact. Specifically, Lyapunov optimization theory is applied to transform the long-term stochastic optimization problem, subject to time-varying contact time and on-board power constraints, into multiple deterministic single-time-slot problems, based on which online distributed algorithms are developed to enable each satellite to independently obtain the transmit power allocation and data processing decisions in closed form. Finally, simulation results demonstrate the superiority of the proposed scheme over benchmarks, e.g., achieving asymptotic optimality of the weighted energy consumption and data downloads while maintaining the stability of the on-board backlog.
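The per-slot decision produced by a Lyapunov drift-plus-penalty transformation typically has a water-filling-style closed form. A generic single-link sketch, maximizing V*log2(1 + p*g) - Q*p over 0 <= p <= p_max with normalized noise power, which is an assumed stand-in for the paper's exact formulation:

```python
import math

def drift_plus_penalty_power(V, queue, gain, p_max):
    """Single-slot Lyapunov drift-plus-penalty decision: maximise
    V*log2(1 + p*g) - Q*p over 0 <= p <= p_max. The first-order
    condition gives a water-filling-style closed form; Q is the
    current backlog weighting the power cost (generic sketch)."""
    p = V / (queue * math.log(2)) - 1.0 / gain
    return min(max(p, 0.0), p_max)

# A strong channel and small backlog push power up; a large backlog pushes it down.
print(drift_plus_penalty_power(V=10.0, queue=2.0, gain=1.0, p_max=5.0))   # hits p_max
print(drift_plus_penalty_power(V=10.0, queue=50.0, gain=1.0, p_max=5.0))  # 0.0
```

Because each slot's decision depends only on the current queue and channel state, every satellite can compute it locally, which is how the abstract's online distributed algorithms avoid long-term coordination.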