Funding: Supported in part by the National Key Research and Development Program of China (Grant No. 2020YFB1805005), in part by the National Natural Science Foundation of China (Grant No. 62031019), and in part by the European Commission through the H2020-MSCA-ITN META WIRELESS Research Project under Grant 956256.
Abstract: Channel prediction is critical to address the channel aging issue in mobile scenarios. Existing channel prediction techniques are mainly designed for discrete channel prediction, which can only predict the future channel in a fixed time slot per frame, while the other intra-frame channels are usually recovered by interpolation. However, these approaches suffer from a serious interpolation loss, especially for mobile millimeter-wave communications. To solve this challenging problem, we propose a tensor neural ordinary differential equation (TN-ODE) based continuous-time channel prediction scheme to realize direct prediction of intra-frame channels. Specifically, inspired by the recently developed continuous mapping model named neural ODE in the field of machine learning, we first utilize the neural ODE model to predict future continuous-time channels. To improve the channel prediction accuracy and reduce the computational complexity, we then propose the TN-ODE scheme to learn the structural characteristics of the high-dimensional channel through a low-dimensional learnable transform. Simulation results show that the proposed scheme achieves higher intra-frame channel prediction accuracy than existing schemes.
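To illustrate the continuous-time idea behind this abstract, the following is a minimal sketch of neural-ODE channel prediction: a learned derivative of a latent channel state is integrated with a simple fixed-step Euler solver so the channel can be read out at arbitrary intra-frame instants. All module names, dimensions, and the decoder are illustrative assumptions; this is not the paper's TN-ODE (which additionally uses a tensor low-dimensional learnable transform).

```python
# Minimal sketch of continuous-time channel prediction with a neural ODE.
# All module/variable names are illustrative, not the paper's TN-ODE code.
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Learned derivative d h(t)/dt of the latent channel state."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, h):
        return self.net(h)

def odeint_euler(func, h0, t_grid):
    """Fixed-step Euler integration; returns latent states at each grid time."""
    hs, h = [h0], h0
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = h + (t1 - t0) * func(h)
        hs.append(h)
    return torch.stack(hs)

# Example: predict the channel at arbitrary intra-frame instants.
dim = 32                                   # latent dimension (assumed)
func = ODEFunc(dim)
h0 = torch.randn(1, dim)                   # latent state encoded from past CSI
t_grid = torch.linspace(0.0, 1.0, 11)      # continuous prediction instants in a frame
latents = odeint_euler(func, h0, t_grid)   # (11, 1, dim), one state per instant
decoder = nn.Linear(dim, 2 * 64)           # map latent to real/imag of 64 channel entries
predicted_csi = decoder(latents)
```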
Funding: Supported in part by the National Science Fund for Distinguished Young Scholars under Grant 61925102, in part by the National Natural Science Foundation of China (Grants 62201087, 92167202, 62101069, and 62201086), and in part by the Beijing University of Posts and Telecommunications-China Mobile Research Institute Joint Innovation Center.
Abstract: In this paper, a time-varying channel prediction method based on a conditional generative adversarial network (CPcGAN) is proposed for time division duplexing/frequency division duplexing (TDD/FDD) systems. CPcGAN utilizes a discriminator to calculate the divergence between the predicted downlink channel state information (CSI) and the real sample distributions under a conditional constraint, namely the previous uplink CSI. The generator of CPcGAN learns the mapping between the conditional constraint and the predicted downlink CSI and reduces the divergence between the predicted CSI and the real CSI. CPcGAN's ability to fit the data distribution allows it to capture the time-varying and multipath characteristics of the channel well. Considering the propagation characteristics of real channels, we further develop a channel prediction error indicator to determine whether the generator has reached its best state. Simulations show that CPcGAN obtains higher prediction accuracy and a lower system bit error rate than existing methods at the same user speeds.
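As a rough illustration of the conditional-GAN structure described above, the sketch below conditions a generator on previous uplink CSI to produce downlink CSI, while a discriminator scores (uplink, downlink) pairs. The flattened CSI dimensions, layer sizes, and training losses are assumptions, not the paper's CPcGAN implementation.

```python
# Sketch of a conditional GAN for downlink CSI prediction from uplink CSI.
# Dimensions and layer sizes are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

UL_DIM, DL_DIM, NOISE_DIM = 128, 128, 32   # flattened real/imag CSI sizes (assumed)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(UL_DIM + NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, DL_DIM))

    def forward(self, ul_csi, z):
        # Condition on previous uplink CSI by concatenating it with the noise.
        return self.net(torch.cat([ul_csi, z], dim=-1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(UL_DIM + DL_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))

    def forward(self, ul_csi, dl_csi):
        # Score how plausible the downlink CSI is, given the uplink condition.
        return self.net(torch.cat([ul_csi, dl_csi], dim=-1))

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
ul = torch.randn(16, UL_DIM)               # previous uplink CSI (condition)
real_dl = torch.randn(16, DL_DIM)          # measured downlink CSI
fake_dl = G(ul, torch.randn(16, NOISE_DIM))
d_loss = bce(D(ul, real_dl), torch.ones(16, 1)) + bce(D(ul, fake_dl.detach()), torch.zeros(16, 1))
g_loss = bce(D(ul, fake_dl), torch.ones(16, 1))
```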
Funding: Supported by the National Science Fund for Distinguished Young Scholars (No. 61925102), the National Natural Science Foundation of China (Nos. 62101069 and 92167202), and the BUPT-CMCC Joint Innovation Center.
Abstract: Recently, the question of whether channel prediction can be achieved in diverse communication scenarios by directly utilizing environment information has gained considerable attention, since the environment affects the propagation characteristics of the wireless channel. This paper presents an environment information-based channel prediction (EICP) method that connects the environment with the channel with the assistance of graph neural networks (GNN). Firstly, the effective scatterers (ESs) producing paths and the primary scatterers (PSs) generating single propagation paths are detected by building scatterer-centered communication environment graphs (SCCEGs), which simultaneously preserve the structure information and highlight the pending scatterer. A GNN-based classification model is implemented to distinguish ESs and PSs from other scatterers. Secondly, large-scale parameters (LSP) and small-scale parameters (SSP) are predicted by employing GNNs with a multi-target architecture and the graphs of the detected ESs and PSs. Simulation results show that the average normalized mean squared error (NMSE) of the LSP and SSP predictions is 0.12 and 0.008, respectively, outperforming linear data-learning methods.
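The per-scatterer classification step can be pictured with a very small message-passing model: node features are propagated over a dense adjacency matrix of the scatterer graph, and a linear head labels each node as ES, PS, or other. The features, adjacency, layer sizes, and class set are assumptions; this is not the paper's SCCEG construction or GNN architecture.

```python
# Sketch of GNN-based scatterer classification on a scatterer-centered graph.
# Node features, adjacency, and class labels (ES / PS / other) are assumed.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One mean-aggregation message-passing layer using a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Normalize adjacency (with self-loops) so each node averages its neighbors.
        a = adj + torch.eye(adj.size(0))
        a = a / a.sum(dim=1, keepdim=True)
        return torch.relu(self.lin(a @ x))

class ScattererClassifier(nn.Module):
    def __init__(self, in_dim=8, hidden=32, num_classes=3):
        super().__init__()
        self.gc1 = GraphConv(in_dim, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)   # ES / PS / other

    def forward(self, x, adj):
        return self.head(self.gc2(self.gc1(x, adj), adj))

# Toy graph: 10 scatterer nodes with 8 geometric/propagation features each.
x = torch.randn(10, 8)
adj = (torch.rand(10, 10) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()        # symmetric adjacency
logits = ScattererClassifier()(x, adj)     # (10, 3) per-node class scores
```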
Funding: Supported by the National Natural Science Foundation of China (Nos. 61032002, 61101090, and 60902026) and the Chinese Important National Science & Technology Specific Projects (No. 2011ZX03001-007-01).
Abstract: Traditional antenna calibration methods for time division duplex (TDD) systems assume that the free-space channel remains the same during calibration, which is unreasonable in high-speed rail and other time-varying channel scenarios and causes calibration error due to time variability. This paper proposes an antenna calibration method for time-varying channels. In the proposed method, the transceiver first sequentially sends a pilot signal to obtain equivalent downlink and uplink channel responses. Then, by predicting the downlink (uplink) channel response fed back from the receiver using a channel prediction algorithm, the transmitter obtains the channel response corresponding to the channel response on the uplink (downlink). Finally, the transmitter calculates the transmission calibration factor from the predicted value. Compared with traditional antenna calibration methods, this method improves the accuracy of the calibration factor. Simulation results show that time-varying channels can degrade antenna calibration performance and that the proposed method compensates well for this loss, significantly improving antenna calibration performance for time-varying channels.
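A toy version of this idea is sketched below: both equivalent link responses are extrapolated to the same calibration instant with a simple linear predictor, and the transmit calibration factor is formed as their per-antenna ratio. The predictor and the ratio-style factor definition are assumptions made for illustration, not the paper's algorithm.

```python
# Sketch of computing a per-antenna calibration factor with predicted channels.
# The linear predictor and the ratio-style factor definition are assumptions.
import numpy as np

def linear_predict(history, horizon=1):
    """Extrapolate each antenna's complex response one step ahead
    from its last two observations (simple linear prediction)."""
    return history[-1] + horizon * (history[-1] - history[-2])

rng = np.random.default_rng(0)
n_ant, n_obs = 4, 8
# Equivalent uplink/downlink responses observed at earlier pilot instants.
h_ul_hist = rng.standard_normal((n_obs, n_ant)) + 1j * rng.standard_normal((n_obs, n_ant))
h_dl_hist = rng.standard_normal((n_obs, n_ant)) + 1j * rng.standard_normal((n_obs, n_ant))

# Predict both links at the same (later) calibration instant, then take the ratio
# so the time variation between the two pilot transmissions is compensated.
h_ul_pred = linear_predict(h_ul_hist)
h_dl_pred = linear_predict(h_dl_hist)
calib_factor = h_dl_pred / h_ul_pred        # per-antenna transmit calibration factor
print(calib_factor)
```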
Funding: Supported by the ZTE Industry-University-Institute Cooperation Funds under Grant No. 2021ZTE01-03.
Abstract: The accuracy of the acquired channel state information (CSI) used for beamforming design is essential to the achievable performance of multiple-input multiple-output (MIMO) systems. However, in high-speed mobility scenarios with time-division duplex (TDD) mode, the CSI acquired through channel reciprocity is inevitably outdated, leading to an outdated beamforming design and consequent performance degradation. In this paper, a robust beamforming design under channel prediction errors is proposed for a time-varying MIMO system, based on channel prediction, to further combat this degradation. Specifically, the statistical characteristics of historical channel prediction errors are exploited and modeled. Moreover, to deal with the random error terms, deterministic equivalents are adopted to further explore the potential beamforming gain through this statistical information and ultimately derive a robust design that maximizes the weighted sum-rate. Simulation results show that, compared with the traditional beamforming design, the proposed design maintains its performance advantage throughout the downlink transmission time even when the channel varies quickly.
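As a rough stand-in for how prediction-error statistics can enter a precoder, the sketch below uses an MMSE-style regularized inverse whose diagonal loading grows with the prediction-error variance. This is a common robust heuristic used only for illustration; it is not the deterministic-equivalent, weighted sum-rate design derived in the paper, and all shapes and constants are assumptions.

```python
# Sketch of a prediction-error-aware precoder: an MMSE-style regularization whose
# strength grows with the channel prediction error variance. This is a common
# robust heuristic, not the paper's deterministic-equivalent design.
import numpy as np

def robust_precoder(h_pred, err_var, noise_var=0.1):
    """h_pred: (K users, M antennas) predicted downlink channel.
    err_var: variance of the channel prediction error per entry."""
    k, m = h_pred.shape
    gram = h_pred.conj().T @ h_pred
    # Error statistics enter as an extra diagonal loading term.
    reg = (noise_var + k * err_var) * np.eye(m)
    w = np.linalg.solve(gram + reg, h_pred.conj().T)   # (M, K) precoding vectors
    return w / np.linalg.norm(w, axis=0, keepdims=True)

rng = np.random.default_rng(1)
h_true = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
h_pred = h_true + 0.2 * (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8)))
W = robust_precoder(h_pred, err_var=0.08)
print(np.abs(h_true @ W))   # effective user gains under the mismatched design
```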
Funding: Supported by the National Key R&D Program of China under Grant 2021YFB1407001, the National Natural Science Foundation of China (NSFC) under Grants 62001269 and 61960206006, the State Key Laboratory of Rail Traffic Control and Safety (Grant RCS2022K009), Beijing Jiaotong University, the Future Plan Program for Young Scholars of Shandong University, and the EU H2020 RISE TESTBED2 project under Grant 872172.
Abstract: The large amount of mobile data from growing numbers of high-speed train (HST) users is pushing intelligent HST communications into the era of big data, and artificial intelligence (AI) based HST channel modeling is becoming a trend. This paper provides an AI-based channel characteristic prediction and scenario classification model for millimeter wave (mmWave) HST communications. Firstly, a ray tracing method verified by measurement data is applied to reconstruct four representative HST scenarios. By setting the positions of the transmitter (Tx), receiver (Rx), and other parameters, wireless channel big data for multiple scenarios is acquired. Then, based on the obtained channel database, a radial basis function neural network (RBF-NN) and a back propagation neural network (BP-NN) are trained for channel characteristic prediction and scenario classification. Finally, the channel characteristic prediction and scenario classification capabilities of the networks are evaluated by calculating the root mean square error (RMSE). The results show that the RBF-NN generally achieves better performance than the BP-NN and is more applicable to prediction in HST scenarios.
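To make the RBF-NN regression and RMSE evaluation concrete, the following is a minimal sketch: Gaussian RBF features placed at data-drawn centers with a linear least-squares readout, evaluated by RMSE. The input features, centers, kernel width, and the toy target are all assumptions; the paper's networks are trained on ray-tracing channel data instead.

```python
# Sketch of an RBF network fit and RMSE evaluation for channel characteristic
# regression. Centers, kernel width, and the toy target are assumptions.
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF features: one column per center."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(2)
x_train = rng.uniform(0, 1, (200, 3))            # e.g. Tx/Rx positions, frequency (assumed inputs)
y_train = np.sin(4 * x_train).sum(1) + 0.05 * rng.standard_normal(200)  # toy channel characteristic

centers = x_train[rng.choice(200, 20, replace=False)]    # 20 RBF centers drawn from the data
width = 0.3
phi = rbf_design(x_train, centers, width)
w, *_ = np.linalg.lstsq(phi, y_train, rcond=None)        # linear readout weights

x_test = rng.uniform(0, 1, (50, 3))
y_test = np.sin(4 * x_test).sum(1)
y_hat = rbf_design(x_test, centers, width) @ w
rmse = np.sqrt(np.mean((y_hat - y_test) ** 2))
print(f"RMSE = {rmse:.3f}")
```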
Funding: Supported by the National Natural Science Foundation of China (No. 61631013), the National Key Basic Research Program of China (973 Program) (No. 2013CB329002), and the National Major Project (No. 2018ZX03001006003).
Abstract: In this paper, the statistical properties of the parameters of each path in wireless channel models are analyzed to show that there is a static part of the channel state information (CSI) that can be extracted from large amounts of CSI data. Based on this analysis, the concept of the Tomographic Channel Model (TCM) is presented. With clustering algorithms, a static CSI database can be built offline. The static CSI database can provide prior information to help pilot design reduce overhead and improve accuracy in channel estimation. A new CSI prediction method and a new channel estimation method between different frequency bands are introduced based on the static CSI database. Using measurement data, the performance of the new channel prediction method is compared with that of an Auto Regression (AR) predictor. The results indicate that the prediction range of the new method is better than that of the AR method and that the new method can predict with fewer pilot symbols. Also using measurement data, the new cross-band channel estimation method can estimate the CSI of one frequency band from known CSI of another frequency band without any feedback.
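For reference, the AR baseline mentioned in the comparison can be sketched as follows: fit complex AR(p) coefficients to a CSI time series by least squares and extrapolate forward. The model order and the toy series are assumptions; the paper's predictor built on the static CSI database is not shown here.

```python
# Sketch of the AR(p) baseline predictor referenced above: fit complex AR
# coefficients by least squares and extrapolate the CSI series. The order
# and the toy series are assumptions.
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p) fit for a 1-D complex time series."""
    X = np.column_stack([series[p - k - 1:len(series) - k - 1] for k in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def ar_predict(series, coeffs, steps):
    s = list(series)
    p = len(coeffs)
    for _ in range(steps):
        s.append(np.dot(coeffs, s[-1:-p - 1:-1]))    # newest sample first, matching lag order
    return np.array(s[-steps:])

rng = np.random.default_rng(3)
t = np.arange(200)
# Toy narrowband CSI: a slowly rotating phasor plus noise.
csi = np.exp(1j * 0.12 * t) + 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
coeffs = fit_ar(csi, p=4)
future = ar_predict(csi, coeffs, steps=10)           # predicted CSI for the next 10 slots
print(np.round(future, 3))
```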
Abstract: Wireless communication systems that incorporate digital twin (DT) alongside artificial intelligence (AI) are expected to transform 6G networks by providing advanced features for predictive modeling and decision making. The key component is the creation of DT channels, which form the basis for upcoming applications. However, existing work on predictive channel generation considers only time-dimension, distribution-oriented, or multi-step sliding-window prediction schemes, which are neither accurate nor efficient enough for real-time DT communication systems. Therefore, we propose the wireless channel generative adversarial network (WCGAN) to tackle the issue of generating authentic long-batch channels for DT applications. The generator, based on convolutional neural networks (CNN), extracts features from both the time and frequency domains to better capture their correlation. The loss function is designed to ensure that the generated channels consistently match the physical channels over an extended period while sharing the same probability distributions. Meanwhile, the accumulated error from the sliding window is alleviated. Simulations demonstrate that accurate and efficient DT channels can be generated by the proposed WCGAN in various scenarios.
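The generator/loss idea can be pictured with the sketch below: a CNN generator emits a long batch of channels as a two-channel (real/imaginary) time-frequency image, and a distribution-matching term compares batch moments against measured channels. The shapes, layers, and loss form are assumptions; the adversarial term from the WCGAN discriminator is omitted for brevity.

```python
# Sketch of a CNN generator producing a long batch of time-frequency channel
# samples, with a moment-matching loss in the spirit of the description above.
# Shapes, layers, and loss weights are assumptions.
import torch
import torch.nn as nn

T, F = 64, 32                               # time slots x subcarriers (assumed)

class ChannelGenerator(nn.Module):
    def __init__(self, noise_dim=64):
        super().__init__()
        self.fc = nn.Linear(noise_dim, 16 * T * F)
        self.conv = nn.Sequential(
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))  # 2 channels: real and imaginary parts

    def forward(self, z):
        h = self.fc(z).view(-1, 16, T, F)
        return self.conv(h)                  # (batch, 2, T, F) generated channel

gen = ChannelGenerator()
z = torch.randn(8, 64)
fake = gen(z)
real = torch.randn(8, 2, T, F)               # stand-in for measured channels

# The adversarial term would come from a discriminator; here only the
# distribution-matching term (first/second moments over the batch) is sketched.
moment_loss = (fake.mean(0) - real.mean(0)).pow(2).mean() + \
              (fake.var(0) - real.var(0)).pow(2).mean()
```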
Funding: Supported by the National Key Research and Development Program of China (No. 2018YFE0110000), the National Natural Science Foundation of China (Nos. 11274259 and 11574258), and the Science and Technology Commission Foundation of Shanghai (21DZ1205500).
Abstract: This paper investigates a channel prediction algorithm for the time-varying channels in underwater acoustic (UWA) communication systems using a long short-term memory (LSTM) model with an attention mechanism. AttLstmPreNet is a deep learning model that combines an attention mechanism with LSTM-type models to capture temporal information at different scales from historical UWA channels. The attention mechanism is used to capture sparsity in the time-delay scale and coherence in the geo-time scale under the LSTM framework. The soft attention mechanism is introduced before the LSTM to help the model focus on the features of the input sequences and improve its learning capacity. The performance of the proposed model is validated using different simulated time-varying UWA channels. Compared with adaptive channel predictors and a plain LSTM model, the proposed model achieves better channel prediction accuracy.
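The "soft attention before the LSTM" structure can be sketched in a few lines: attention weights re-scale the historical channel sequence before it is fed to an LSTM whose final hidden state is mapped to the next-step channel. Dimensions and layer choices are illustrative assumptions, not the AttLstmPreNet code.

```python
# Sketch of soft attention over the input CSI sequence followed by an LSTM
# predictor, in the spirit of the model described above. Dimensions and layer
# choices are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionLSTMPredictor(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)          # soft attention scores per time step
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, feat_dim)       # next-step channel estimate

    def forward(self, x):
        # x: (batch, seq_len, feat_dim) historical channel taps / features.
        attn = torch.softmax(self.score(x), dim=1)   # (batch, seq_len, 1)
        weighted = attn * x                          # re-weight the sequence before the LSTM
        _, (h_n, _) = self.lstm(weighted)
        return self.out(h_n[-1])                     # predicted channel for the next step

model = AttentionLSTMPredictor()
history = torch.randn(4, 20, 64)                     # 20 past snapshots of a UWA channel
pred = model(history)                                # (4, 64) one-step-ahead prediction
```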