Journal articles: 4 results found
1. Correlation between posterior oropharyngeal saliva test results and chest CT imaging features in patients with pulmonary tuberculosis
Authors: 武卓然, 陈彦斌, 刘荣梅. 《中国医学前沿杂志(电子版)》, 北大核心, 2025, Issue 3, pp. 9-14 (6 pages)
Objective: To analyze the relationship between chest computed tomography (CT) signs in patients with pulmonary tuberculosis and the performance of posterior oropharyngeal saliva (POS) specimens in detecting Mycobacterium tuberculosis (MTB). Methods: A retrospective study was conducted on 126 hospitalized patients newly diagnosed with pulmonary tuberculosis between June 2022 and December 2023 whose sputum specimens were negative on both acid-fast bacillus (AFB) smear microscopy and X-Classic testing. Patients were divided into a POS-positive group and a POS-negative group according to whether MTB was detected in the POS specimen, and their clinical data and chest CT imaging findings were collected and compared. Results: A total of 126 patients meeting the inclusion and exclusion criteria were enrolled; after propensity score matching, 88 patients were included in the final analysis. For suspected pulmonary tuberculosis patients with negative sputum results, testing POS specimens improved the etiological positivity rate. In terms of clinical symptoms, patients in the POS-negative group reported expectoration less often than those in the POS-positive group (40.9% vs. 65.9%, P=0.019) and were also less likely to report fatigue (13.6% vs. 34.1%, P=0.024). Regarding chest CT signs, the proportion of POS-negative patients showing cavitation (15.9%) was significantly lower than that of POS-positive patients (47.7%) (P=0.001). Nodules (≥3 mm) were common to both POS-negative (72.7%) and POS-positive (93.2%) patients but were relatively less frequent in the POS-negative group (P=0.011). POS-negative patients showed consolidation (18.2% vs. 43.2%, P=0.011) and lymphadenopathy (11.4% vs. 36.4%, P=0.006) on CT less often than POS-positive patients, and they had less bronchial involvement, with endobronchial lesions (11.4% vs. 31.8%, P=0.020) and bronchiolectasis (4.5% vs. 22.7%, P=0.013) both less common. Conclusion: Compared with POS-positive patients, POS-negative patients less often presented with expectoration and fatigue, showed lower proportions of cavitation, nodules (≥3 mm), consolidation, lymphadenopathy, endobronchial lesions, and bronchiolectasis on chest CT, and had less typical CT manifestations of active pulmonary tuberculosis.
Keywords: pulmonary tuberculosis; posterior oropharyngeal saliva; chest CT
2. Research progress on metformin as adjuvant therapy for diabetes mellitus complicated with multidrug-resistant tuberculosis
Authors: 武卓然, 刘荣梅. 《中国医学前沿杂志(电子版)》, CSCD, 北大核心, 2024, Issue 7, pp. 62-68 (7 pages)
Tuberculosis, caused by Mycobacterium tuberculosis, is a major infectious disease worldwide. Diabetes mellitus (DM) increases the risk of developing tuberculosis, worsens tuberculosis treatment outcomes, and makes M. tuberculosis more prone to drug resistance. The treatment situation for multidrug-resistant tuberculosis (MDR-TB) and extensively drug-resistant tuberculosis (XDR-TB) remains severe worldwide, and new therapies that are effective, safe, and able to shorten treatment duration are urgently needed. Studies have shown that the immune response of tuberculosis patients can be therapeutically modulated to promote the eradication of M. tuberculosis while limiting excessive pathological responses, an approach known as host-directed therapy (HDT). Metformin, a drug approved for the treatment of type 2 diabetes, has recently been shown in numerous studies to benefit tuberculosis patients and may complement novel therapies against drug-resistant M. tuberculosis. This article reviews the research progress on metformin as a potential HDT for tuberculosis, with a focus on MDR-TB treatment, in the context of the co-epidemic of diabetes and tuberculosis.
Keywords: metformin; tuberculosis; type 2 diabetes mellitus; drug therapy
3. A UAV collaborative defense scheme driven by DDPG algorithm (cited by: 3)
Authors: ZHANG Yaozhong, WU Zhuoran, XIONG Zhenkai, CHEN Long. Journal of Systems Engineering and Electronics, SCIE, EI, CSCD, 2023, Issue 5, pp. 1211-1224 (14 pages)
The deep deterministic policy gradient (DDPG) algorithm is an off-policy method that combines the two mainstream reinforcement learning approaches based on value iteration and policy iteration. Using the DDPG algorithm, agents can explore and summarize the environment to make autonomous decisions in a continuous state space and action space. In this paper, a cooperative defense scheme using DDPG for swarms of unmanned aerial vehicles (UAVs) is developed and validated, showing promising practical value in defensive effectiveness. We address the sparse-reward problem that reinforcement learning faces in long-horizon tasks by constructing a reward function for the UAV swarm and by optimizing the training of the artificial neural network underlying the DDPG algorithm to reduce oscillation during learning. The experimental results show that the DDPG algorithm can guide the UAV swarm to perform the defense task efficiently, meeting the swarm's requirements for decentralization and autonomy and promoting the intelligent development of UAV swarms and of their decision-making process.
Keywords: deep deterministic policy gradient (DDPG) algorithm; unmanned aerial vehicle (UAV) swarm; task decision making; deep reinforcement learning; sparse reward problem
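The abstract attributes much of the method's success to a hand-built swarm reward that densifies the otherwise sparse defense objective, but it does not give the reward terms themselves. The Python sketch below is therefore only illustrative: the function name, weights, and geometric terms (closing distance to the intruder, a spread bonus to keep defenders dispersed, and terminal intercept/breach events) are assumptions, not the authors' formulation.

```python
# Hypothetical sketch of a dense (shaped) reward for one step of a cooperative
# UAV defense episode; all weights and terms are illustrative assumptions.
import numpy as np

def shaped_reward(defender_positions, intruder_position, target_position,
                  intercept_radius=5.0, w_approach=0.1, w_spread=0.01,
                  terminal_bonus=100.0):
    """Return (reward, episode_done) for one time step.

    defender_positions : (N, 2) array of defending-UAV positions
    intruder_position  : (2,) array, position of the attacking aircraft
    target_position    : (2,) array, the asset the swarm protects
    """
    # Reward closing on the intruder at every step, so the agent is not left
    # waiting for the single sparse terminal outcome.
    d_intercept = np.linalg.norm(defender_positions - intruder_position, axis=1)
    approach_term = -w_approach * d_intercept.min()

    # Small bonus for staying spread out, so defenders cover several approach lanes.
    centroid = defender_positions.mean(axis=0)
    spread_term = w_spread * np.linalg.norm(defender_positions - centroid, axis=1).mean()

    # Sparse terminal components: interception succeeded, or the intruder broke through.
    intercepted = d_intercept.min() < intercept_radius
    breached = np.linalg.norm(intruder_position - target_position) < intercept_radius
    terminal = terminal_bonus if intercepted else (-terminal_bonus if breached else 0.0)

    return approach_term + spread_term + terminal, bool(intercepted or breached)

if __name__ == "__main__":
    defenders = np.array([[0.0, 0.0], [3.0, 4.0]])
    reward, done = shaped_reward(defenders,
                                 intruder_position=np.array([10.0, 10.0]),
                                 target_position=np.array([0.0, -20.0]))
    print(f"step reward = {reward:.3f}, episode done = {done}")
```

In a DDPG training loop, a shaped signal of this kind would stand in for the raw win/lose outcome that otherwise arrives only at the end of an episode.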
4. Deep reinforcement learning for UAV swarm rendezvous behavior (cited by: 2)
Authors: ZHANG Yaozhong, LI Yike, WU Zhuoran, XU Jialin. Journal of Systems Engineering and Electronics, SCIE, EI, CSCD, 2023, Issue 2, pp. 360-373 (14 pages)
Unmanned aerial vehicle (UAV) swarm technology has been a research hotspot in recent years, and with the continuous improvement of UAV autonomy it will become one of the main trends in UAV development. This paper studies the behavior decision-making process of a UAV swarm rendezvous task based on the double deep Q network (DDQN) algorithm. We design a guided reward function to effectively solve the convergence problems caused by sparse returns in deep reinforcement learning (DRL) for long-duration tasks. We also propose the concept of a temporary storage area, which optimizes the memory replay unit of the traditional DDQN algorithm, improves convergence speed, and accelerates training. Unlike a traditional task environment, this paper establishes a continuous state-space task environment model to improve the verification of the UAV task environment. Based on the DDQN algorithm, the collaborative tasks of the UAV swarm are trained in different task scenarios. The experimental results validate that the DDQN algorithm is efficient at training the UAV swarm to complete the given collaborative tasks while meeting the swarm's requirements for centralization and autonomy, improving the intelligence of collaborative task execution. The simulation results show that, after training, the proposed UAV swarm carries out the rendezvous task well, with a mission success rate of 90%.
Keywords: double deep Q network (DDQN) algorithm; unmanned aerial vehicle (UAV) swarm; task decision; deep reinforcement learning (DRL); sparse returns
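The abstract's two devices, a guided reward and a "temporary storage area" placed in front of the replay memory, are not specified in detail, so the Python sketch below is only one plausible reading under stated assumptions: transitions are staged in a per-episode buffer and, when the episode ends, step rewards are replaced by decayed returns-to-go before being committed to the main replay memory. The class name, decay factor, and commit rule are all hypothetical.

```python
# Hypothetical sketch of a "temporary storage area" in front of a DDQN replay
# memory: episode transitions are staged in a temporary buffer and committed to
# the main memory only at episode end, after later rewards have been propagated
# backwards so that long, sparse-return episodes yield a denser learning signal.
import random
from collections import deque

class EpisodeBufferedReplay:
    def __init__(self, capacity=100_000, backprop_decay=0.9):
        self.memory = deque(maxlen=capacity)  # main replay memory sampled by DDQN
        self.temp = []                        # temporary storage area for the current episode
        self.backprop_decay = backprop_decay  # how strongly later rewards bleed backwards

    def store(self, state, action, reward, next_state, done):
        """Stage one transition; commit the whole episode when it terminates."""
        self.temp.append([state, action, float(reward), next_state, done])
        if done:
            self._commit()

    def _commit(self):
        # Replace each step's reward with a decayed return-to-go, then move the
        # episode into the main memory. This is one simple way to densify a
        # sparse terminal reward; the paper's exact rule may differ.
        running = 0.0
        for transition in reversed(self.temp):
            running = transition[2] + self.backprop_decay * running
            transition[2] = running
        self.memory.extend(tuple(t) for t in self.temp)
        self.temp.clear()

    def sample(self, batch_size):
        """Uniformly sample a training batch from the committed memory."""
        return random.sample(list(self.memory), min(batch_size, len(self.memory)))
```

A DDQN agent would call store() after every environment step and sample() whenever it assembles a training batch; the rest of the DDQN update would be unchanged.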