Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U20A20227, 62076208, and 62076207), the Chongqing Talent Plan "Contract System" Project (Grant No. CQYC20210302257), the National Key Laboratory of Smart Vehicle Safety Technology Open Fund Project (Grant No. IVSTSKL-202309), the Chongqing Technology Innovation and Application Development Special Major Project (Grant No. CSTB2023TIAD-STX0020), the College of Artificial Intelligence, Southwest University, and the State Key Laboratory of Intelligent Vehicle Safety Technology.
Abstract: Neuromorphic computing, inspired by the human brain, uses memristor devices for complex tasks. Recent studies show that self-organizing random nanowire networks can implement neuromorphic information processing and enable data analysis. This paper presents a model of such nanowire networks with an improved conductance variation profile. We propose using these networks for temporal information processing via a reservoir computing scheme, together with an efficient data encoding method based on voltage pulses. The nanowire network layer generates rich dynamic behaviors in response to pulse voltages, enabling time series prediction. Our experiments use a double stochastic nanowire network architecture to process multiple input signals, outperforming traditional reservoir computing with fewer nodes, richer dynamics, and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real time-series datasets, making neuromorphic nanowire networks a promising physical implementation of reservoir computing.
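The reservoir computing scheme described above trains only a linear readout on top of fixed, randomly coupled dynamics. The sketch below illustrates this division of labor with a toy software reservoir standing in for the nanowire network; the reservoir size, weight ranges, sine-wave task, and LMS readout training are all illustrative assumptions, not the paper's setup.

```python
import math
import random

random.seed(0)

N = 20  # hypothetical reservoir size, not taken from the paper
# Fixed random recurrent and input weights stand in for the physical
# nanowire network; only the readout below is trained.
W = [[random.uniform(-0.5, 0.5) / N for _ in range(N)] for _ in range(N)]
w_in = [random.uniform(-1.0, 1.0) for _ in range(N)]

def step(state, u):
    """One reservoir update driven by an input pulse u."""
    return [math.tanh(sum(W[i][j] * state[j] for j in range(N)) + w_in[i] * u)
            for i in range(N)]

# Toy task: one-step-ahead prediction of a sine wave fed in as pulses.
series = [math.sin(0.3 * t) for t in range(200)]

states, targets = [], []
x = [0.0] * N
for t in range(len(series) - 1):
    x = step(x, series[t])
    states.append(x)
    targets.append(series[t + 1])

# Train only the linear readout with LMS updates; everything upstream
# stays fixed, which is the defining property of reservoir computing.
w_out = [0.0] * N
lr = 0.02
for _ in range(50):
    for s, y in zip(states, targets):
        err = y - sum(wi * si for wi, si in zip(w_out, s))
        for i in range(N):
            w_out[i] += lr * err * s[i]

pred = sum(wi * si for wi, si in zip(w_out, states[-1]))
```

Because the recurrent weights are never trained, a physical system with rich internal dynamics, such as a nanowire network, can replace the software reservoir while the cheap linear readout is fitted offline.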
Abstract: Deep neural networks (DNNs) are widely used in image recognition, image classification, and other fields. However, as model sizes increase, DNN hardware accelerators face the challenge of higher area overhead and energy consumption. In recent years, stochastic computing (SC) has been considered a way to realize deep neural networks while reducing hardware consumption. A probabilistic compensation algorithm is proposed to solve the accuracy problem of stochastic computation, and a fully parallel neural network accelerator based on a deterministic method is designed. Software simulation results show that the accuracy of the probabilistic compensation algorithm on the CIFAR-10 dataset is 95.32%, which is 14.98% higher than that of the traditional SC algorithm. The accuracy of the deterministic algorithm on the CIFAR-10 dataset is 95.06%, which is 14.72% higher than that of the traditional SC algorithm. Very Large Scale Integration (VLSI) hardware tests show that the normalized energy efficiency of the fully parallel neural network accelerator based on the deterministic method is improved by 31% compared with a circuit based on binary computing.
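In unipolar stochastic computing, a value p in [0, 1] is encoded as the probability that each bit of a random stream is 1, so multiplication reduces to a single AND gate per bit pair, which is the source of SC's hardware savings and also of its accuracy problem at short stream lengths. A minimal software sketch (the stream length and operand values are arbitrary):

```python
import random

def to_bitstream(p, n, rng):
    """Encode probability p as an n-bit unipolar stochastic bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(bs_a, bs_b):
    """Unipolar SC multiplication: a bitwise AND of two independent streams."""
    return [a & b for a, b in zip(bs_a, bs_b)]

def value(bs):
    """Decode a bitstream back to a probability: the fraction of 1s."""
    return sum(bs) / len(bs)

rng = random.Random(42)
n = 100_000
a, b = 0.6, 0.5
product = value(sc_multiply(to_bitstream(a, n, rng), to_bitstream(b, n, rng)))
print(round(product, 2))  # close to 0.6 * 0.5 = 0.3
```

Shortening n makes the random fluctuation around 0.3 grow, which is the accuracy loss that compensation schemes and deterministic bitstream orderings aim to remove.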
Funding: Supported by the National Natural Science Foundation of China (61971066, 61941114), the Beijing Natural Science Foundation (No. L182038), and the National Youth Top-notch Talent Support Program.
Abstract: For a mobile edge computing network consisting of multiple base stations and resource-constrained user devices, network cost in terms of energy and delay is incurred when tasks are offloaded from users to the edge server. Under the limitations imposed by transmission capacity, computing resources, and connection capacity, a per-slot online learning algorithm is first proposed to minimize the time-averaged network cost. In particular, by leveraging the theories of stochastic gradient descent and minimum-cost maximum-flow, user association is jointly optimized with resource scheduling in each time slot. Theoretical analysis proves that the proposed approach achieves asymptotic optimality without any prior knowledge of the network environment. Moreover, to alleviate the high network overhead incurred by user handover and task migration, a two-timescale optimization approach is proposed to avoid frequent changes in user association. With user association executed on the large timescale and resource scheduling decided in each time slot, asymptotic optimality is preserved. Simulation results verify the effectiveness of the proposed online learning algorithms.
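Stripped of user association and flow optimization, the per-slot idea can be sketched as projected online gradient descent with a diminishing step size: each slot reveals a cost, the decision moves along the negative gradient, and the iterate is projected back onto the feasible set. The quadratic energy/delay cost over an offloading fraction x is a hypothetical stand-in for the paper's network cost model:

```python
import random

# Hypothetical per-slot cost: transmit energy grows with the offloaded
# fraction x, local computing delay shrinks with it (assumed model).
def cost(x, ew, dw):
    return ew * x ** 2 + dw * (1.0 - x) ** 2

def grad(x, ew, dw):
    return 2.0 * ew * x - 2.0 * dw * (1.0 - x)

rng = random.Random(1)
x, T, total = 0.5, 2000, 0.0
for t in range(1, T + 1):
    # random weights model the unknown, time-varying network state
    ew, dw = rng.uniform(0.5, 1.5), rng.uniform(0.5, 1.5)
    total += cost(x, ew, dw)
    x -= grad(x, ew, dw) / t ** 0.5   # diminishing step size
    x = min(1.0, max(0.0, x))         # project onto the feasible set [0, 1]
avg_cost = total / T
```

The 1/sqrt(t) step size is what yields the no-regret, asymptotically optimal behavior without knowing the weight distribution in advance; the full algorithm replaces this scalar update with a joint association-and-scheduling step.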
Funding: Supported by the National Natural Science Foundation of China (No. 62101601).
Abstract: Peer-to-peer computation offloading is a promising approach that enables resource-limited Internet of Things (IoT) devices to offload their computation-intensive tasks to idle peer devices in proximity. Unlike dedicated servers, the spare computation resources offered by peer devices are random and intermittent, which affects offloading performance. The mutual interference caused by multiple simultaneous offloading requestors sharing the same wireless channel further complicates the offloading decisions. In this work, we investigate the opportunistic peer-to-peer task offloading problem by jointly considering stochastic task arrivals, dynamic inter-user interference, and the opportunistic availability of peer devices. Each requestor decides both its local computation frequency and its offloading transmission power to minimize its own expected long-term task completion cost, which accounts for energy consumption, task delay, and task loss due to buffer overflow. The dynamic decision process among multiple requestors is formulated as a stochastic game. By constructing post-decision states, a decentralized online offloading algorithm is proposed in which each requestor, as an independent learning agent, learns to approach the optimal strategies from its local observations. Simulation results under different system parameter configurations demonstrate that the proposed online algorithm outperforms several existing algorithms, especially in scenarios with a large task arrival probability or a small helper availability probability.
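A decentralized learner of the kind described can be sketched as tabular Q-learning over a small queue-length state, where offloading succeeds only when a helper happens to be available. All dynamics, costs, and probabilities below are toy assumptions, not the paper's model:

```python
import random

rng = random.Random(7)
actions = ["local", "offload"]
# State: task queue length 0..4 (a hypothetical, heavily reduced state).
Q = {(s, a): 0.0 for s in range(5) for a in actions}

def env_step(s, a):
    """Toy dynamics: a task arrives w.p. 0.6; offloading serves a task only
    when a helper is available (w.p. 0.5). Costs mix energy and delay."""
    arrival = 1 if rng.random() < 0.6 else 0
    if a == "offload" and rng.random() < 0.5:
        served, cost = 1, 1.0          # transmit energy, helper found
    elif a == "local":
        served, cost = 1, 2.0          # higher local computing cost
    else:
        served, cost = 0, 0.5          # failed offload: the task waits
    s2 = min(4, max(0, s + arrival - served))
    cost += 0.5 * s2                   # delay cost grows with the queue
    return s2, cost

alpha, gamma, eps = 0.1, 0.9, 0.1
s = 0
for _ in range(20000):
    # epsilon-greedy over cost-to-go (we minimize, hence min over Q)
    a = rng.choice(actions) if rng.random() < eps else min(actions, key=lambda b: Q[(s, b)])
    s2, c = env_step(s, a)
    Q[(s, a)] += alpha * (c + gamma * min(Q[(s2, b)] for b in actions) - Q[(s, a)])
    s = s2

policy = {st: min(actions, key=lambda b: Q[(st, b)]) for st in range(5)}
print(policy)
```

Each agent needs only its own observations to run this loop, which is what makes the scheme decentralized; the post-decision-state construction in the paper further exploits the known part of the transition dynamics to speed up learning.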
Funding: Supported by the National Key Research and Development Program of China (2021YFB2900504, 2020YFB1807900, and 2020YFB1807903) and by the National Science Foundation of China under Grants 62271062 and 62071063.
Abstract: As a viable component of the 6G wireless communication architecture, satellite-terrestrial networks support efficient file delivery by leveraging the innate broadcast ability of satellites and the powerful file transmission approaches of multi-tier terrestrial networks. In this paper, we introduce edge computing technology into the satellite-terrestrial network and propose a partition-based cache and delivery strategy to make full use of the integrated resources and reduce the backhaul load. Focusing on the interference from various nodes at different geographical distances, we derive the successful file transmission probability of the typical user by utilizing tools from stochastic geometry. Considering the constraints of node cache space and file set parameters, we propose a near-optimal partition-based cache and delivery strategy by optimizing the asymptotic successful transmission probability of the typical user. The resulting complex nonlinear programming problem is solved by jointly utilizing the standard particle swarm optimization (PSO) method and a greedy multiple knapsack choice problem (MKCP) optimization method. Numerical results show that, compared with the terrestrial-only cache strategy, the Ground Popular Strategy, the Satellite Popular Strategy, and the independent and identically distributed popularity strategy, the performance of the proposed scheme improves by 30.5%, 9.3%, 12.5%, and 13.7%, respectively.
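The greedy flavor of the knapsack-style cache placement can be illustrated by filling one node's cache in decreasing order of popularity per unit size; the file popularities, sizes, and cache capacity below are made-up numbers, not values from the paper:

```python
# name: (request popularity, size in cache units) -- hypothetical catalog
files = {"f1": (0.40, 3), "f2": (0.25, 2), "f3": (0.20, 2), "f4": (0.15, 4)}
cache_space = 5

chosen, used, hit = [], 0, 0.0
# Greedy knapsack heuristic: highest popularity density first.
for name, (pop, size) in sorted(files.items(),
                                key=lambda kv: kv[1][0] / kv[1][1],
                                reverse=True):
    if used + size <= cache_space:
        chosen.append(name)
        used += size
        hit += pop  # cached requests skip the backhaul
print(chosen, round(hit, 2))  # → ['f1', 'f2'] 0.65
```

The full MKCP setting repeats this kind of choice across many nodes with coupled capacities and partitioned files, which is why the paper pairs the greedy step with PSO over the continuous parameters.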
Abstract: In parametric cost estimating, objections to using statistical Cost Estimating Relationships (CERs) and parametric models include low statistical significance due to limited data points, biases in the underlying data, and lack of robustness. Soft Computing (SC) technologies are used for building intelligent cost models. The SC models are systematically evaluated based on their training and prediction of historical cost data for airborne avionics systems. Results indicating the strengths and weaknesses of each model are presented. In general, the intelligent cost models have higher prediction precision, better data adaptability, and stronger self-learning capability than the regression CERs.
Abstract: When Internet of Things Devices (IoTDs) face randomly arriving, highly complex computation tasks, their limited computing resources and capability prevent real-time, efficient processing. To address this problem, a two-tier UAV-assisted mobile edge computing (MEC) model is designed. In this model, considering the limitations of IoTDs in handling random computation tasks, multiple lower-tier UAVs equipped with MEC servers and a single upper-tier UAV are introduced for collaborative processing. To minimize system energy consumption, a joint resource optimization and multi-UAV deployment scheme is proposed: given the randomness of task arrivals, the Lyapunov optimization method transforms the energy minimization problem into a deterministic one; the differential evolution (DE) algorithm performs repeated mutation, crossover, and selection to obtain an optimized UAV deployment; and the deep deterministic policy gradient (DDPG) algorithm jointly optimizes bandwidth allocation, computing resource allocation, transmission power allocation, and task offloading. Experimental results show that, compared with the baseline algorithms, the proposed algorithm reduces system energy consumption by 35%, fully verifying its feasibility and effectiveness.
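The DE step mentioned above (mutation, crossover, selection) fits in a few lines. The UAV-deployment objective is replaced here by a toy one, placing a single point to minimize the summed squared distance to three device locations, so the true optimum is the centroid; population size and control parameters F and CR are conventional defaults, not the paper's settings:

```python
import random

def differential_evolution(f, bounds, rng, pop_size=20, gens=100, F=0.8, CR=0.9):
    """Minimal DE/rand/1/bin loop: mutate, crossover, select."""
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: combine three distinct other members
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
            # binomial crossover with the current member
            trial = [mutant[d] if rng.random() < CR else pop[i][d] for d in range(dim)]
            trial = [min(hi, max(lo, v)) for v, (lo, hi) in zip(trial, bounds)]
            # greedy selection
            if f(trial) < f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

# Toy stand-in for the UAV-deployment objective: one UAV position (x, y)
# minimizing summed squared distance to three device locations.
devices = [(0, 0), (4, 0), (2, 3)]
energy = lambda p: sum((p[0] - x) ** 2 + (p[1] - y) ** 2 for x, y in devices)

rng = random.Random(3)
best = differential_evolution(energy, [(0, 5), (0, 5)], rng)
print([round(v, 2) for v in best])  # near the centroid (2, 1)
```

DE needs only objective evaluations, no gradients, which is why it suits deployment problems whose cost depends on the optimization of the layers below it.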
Abstract: Edges in a digital image are sets of pixels with pronounced brightness changes, and edge detection is the primary method for identifying them. Second-order edge detection algorithms have strong edge localization ability, but their hardware implementations consume substantial resources and are susceptible to internal circuit noise. This paper proposes stochastic circuit structures for two common second-order edge detection algorithms, the Laplacian and the Laplacian of Gaussian (LoG), and optimizes the circuits by controlling the correlation of the input bitstreams, further improving operating efficiency. Experimental results show that, compared with conventional weighted binary implementations, the proposed circuits consume less power and circuit area while offering higher fault tolerance.
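As a conventional-arithmetic reference for what the stochastic Laplacian circuit computes, the sketch below convolves a tiny grayscale patch with the standard 3x3 Laplacian kernel and thresholds the response; the patch values and threshold are illustrative:

```python
# Standard 3x3 Laplacian kernel (second-order difference operator).
LAPLACE = [[0,  1, 0],
           [1, -4, 1],
           [0,  1, 0]]

image = [  # 5x5 grayscale patch with a vertical edge between columns 1 and 2
    [0, 0, 10, 10, 10],
    [0, 0, 10, 10, 10],
    [0, 0, 10, 10, 10],
    [0, 0, 10, 10, 10],
    [0, 0, 10, 10, 10],
]

def convolve(img, k):
    """3x3 convolution, leaving the one-pixel border at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(k[di + 1][dj + 1] * img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return out

resp = convolve(image, LAPLACE)
# The Laplacian crosses zero at an edge; flag large-magnitude responses.
edges = [[1 if abs(v) >= 10 else 0 for v in row] for row in resp]
print(edges[2])  # → [0, 1, 1, 0, 0]: both sides of the vertical edge fire
```

A stochastic implementation replaces each multiply-accumulate here with logic gates on bitstreams, trading exact arithmetic for the smaller, more fault-tolerant circuits measured in the paper.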