High-Entropy Alloys (HEAs) exhibit significant potential across multiple domains due to their unique properties. However, conventional research methodologies face limitations in composition design, property prediction, and process optimization, characterized by low efficiency and high costs. The integration of Artificial Intelligence (AI) technologies has provided innovative solutions for HEA research. This review presents a detailed overview of recent advancements in AI applications for structural modeling and mechanical property prediction of HEAs. Furthermore, it discusses the advantages of big data analytics in facilitating alloy composition design and screening, quality control, and defect prediction, as well as the construction and sharing of specialized material databases. The paper also addresses the existing challenges in current AI-driven HEA research, including issues related to data quality, model interpretability, and cross-domain knowledge integration. Additionally, it proposes prospects for the synergistic development of AI-enhanced computational materials science and experimental validation systems.
Objective To observe the value of self-supervised deep learning artificial intelligence (AI) noise reduction technology based on the nearest adjacent layer applied to ultra-low dose CT (ULDCT) for urinary calculi. Methods Eighty-eight patients with urinary calculi were prospectively enrolled. Low dose CT (LDCT) and ULDCT scanning were performed, and the effective dose (ED) of each scanning protocol was calculated. The patients were then randomly divided into a training set (n=75) and a test set (n=13), and a self-supervised deep learning AI noise reduction system based on the nearest adjacent layer, constructed with ULDCT images in the training set, was used to reduce noise in ULDCT images in the test set. In the test set, the quality of ULDCT images before and after AI noise reduction was compared with that of LDCT images using Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) scores, image noise (SD ROI) and signal-to-noise ratio (SNR). Results The tube current, the volume CT dose index and the dose length product of the abdominal ULDCT scanning protocol were all lower than those of the LDCT scanning protocol (all P<0.05), with a decrease in ED of approximately 82.66%. For the 13 patients with urinary calculi in the test set, BRISQUE scores showed that the quality of ULDCT images before AI noise reduction reached 54.42% of the level of LDCT images, rising to 95.76% after AI noise reduction. Both ULDCT images after AI noise reduction and LDCT images had lower SD ROI and higher SNR than ULDCT images before AI noise reduction (all adjusted P<0.05), whereas no significant difference was found between the former two (both adjusted P>0.05). Conclusion Self-supervised deep learning AI noise reduction technology based on the nearest adjacent layer could effectively reduce noise and improve the image quality of urinary calculi ULDCT images, which is conducive to the clinical application of ULDCT.
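For orientation, the noise and SNR metrics cited in this abstract are straightforward to compute once an ROI is defined. The short Python sketch below shows one plausible implementation, assuming CT slices are available as NumPy arrays; BRISQUE, by contrast, is usually obtained from an existing no-reference image quality implementation and is not reproduced here. The array sizes, attenuation values and ROI coordinates are illustrative only.

```python
import numpy as np

def roi_noise_and_snr(image: np.ndarray, roi: tuple) -> tuple:
    """Return (SD_ROI, SNR) for a rectangular ROI of a CT slice.

    SD_ROI is the standard deviation of attenuation values inside the ROI,
    and SNR is the ROI mean divided by SD_ROI, matching the common
    definitions used when comparing denoised and reference images.
    """
    values = image[roi].astype(np.float64)
    sd_roi = float(values.std(ddof=1))
    snr = float(values.mean() / sd_roi) if sd_roi > 0 else float("inf")
    return sd_roi, snr

# Illustrative use on synthetic slices: the noisier image should show a
# higher SD_ROI and a lower SNR than its less noisy counterpart.
rng = np.random.default_rng(0)
clean = np.full((256, 256), 40.0)                  # homogeneous soft tissue (HU)
noisy = clean + rng.normal(0.0, 25.0, clean.shape)
denoised = clean + rng.normal(0.0, 8.0, clean.shape)
roi = (slice(100, 156), slice(100, 156))
print(roi_noise_and_snr(noisy, roi))
print(roi_noise_and_snr(denoised, roi))
```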
Artificial intelligence technology is introduced into the simulation of the muzzle flow field to improve simulation efficiency in this paper. A data-physical fusion driven framework is proposed. First, known flow field data are used to initialize the model parameters, so that the parameters to be trained start close to their optimal values. Then, physical prior knowledge is introduced into the training process so that the predictions not only fit the known flow field information but also satisfy the physical conservation laws. Two examples demonstrate that the model under the fusion driven framework can solve strongly nonlinear flow field problems and has stronger generalization and extensibility. The proposed model is used to solve a muzzle flow field and to delineate the safety clearance behind the barrel side. It is found that the shape of the safety clearance is roughly the same under different launch speeds, and that the pressure disturbance in the area within 9.2 m behind the muzzle section exceeds the safety threshold, making it a dangerous area. Comparison with CFD results shows that the computational efficiency of the proposed model is greatly improved at the same accuracy. The proposed model can quickly and accurately simulate the muzzle flow field under various launch conditions. (Supported by the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20210347) and the National Natural Science Foundation of China (Grant No. U2141246).)
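The abstract does not specify the network architecture or governing equations, but the two-stage idea it describes (pretrain on known flow data, then add a physics-residual term so predictions respect conservation laws) can be sketched generically. The PyTorch fragment below is a minimal illustration under those assumptions; the MLP sizes, the 1D conservation law used as the residual, and the weight w_phys are placeholders rather than the paper's actual model.

```python
import torch
import torch.nn as nn

# A small MLP surrogate u(x, t); layer sizes are placeholders.
net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def physics_residual(xt: torch.Tensor) -> torch.Tensor:
    """Residual of a 1D conservation law u_t + u * u_x = 0 (a stand-in for
    the real governing equations), evaluated with automatic differentiation."""
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    return u_t + u * u_x

def fusion_loss(xt_data, u_data, xt_collocation, w_phys=1.0):
    """Data term (fit known flow field samples) + physics term (conservation residual)."""
    data_loss = torch.mean((net(xt_data) - u_data) ** 2)
    phys_loss = torch.mean(physics_residual(xt_collocation) ** 2)
    return data_loss + w_phys * phys_loss

# Stage 1 (initialization from known flow data) would minimize only the data
# term; stage 2 adds the physics term. Random placeholders stand in for CFD
# snapshots and collocation points here.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xt_d, u_d, xt_c = torch.rand(128, 2), torch.rand(128, 1), torch.rand(512, 2)
loss = fusion_loss(xt_d, u_d, xt_c)
loss.backward()
opt.step()
```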
Objective To observe the value of artificial intelligence (AI) models based on non-contrast chest CT for measuring bone mineral density (BMD). Methods Totally 380 subjects who underwent both non-contrast chest CT and quantitative CT (QCT) BMD examination were retrospectively enrolled and divided into a training set (n=304) and a test set (n=76) at a ratio of 8∶2. The mean BMD of the L1—L3 vertebrae was measured based on QCT. Spongy bone of the T5—T10 vertebrae was segmented as the ROI, radiomics (Rad) features were extracted, and machine learning (ML), Rad and deep learning (DL) models were constructed for classification of osteoporosis (OP) and for evaluating BMD, respectively. Receiver operating characteristic curves were drawn, and areas under the curves (AUC) were calculated to evaluate the efficacy of each model for classification of OP. Bland-Altman analysis and Pearson correlation analysis were performed to explore the consistency and correlation of each model with QCT for measuring BMD. Results Among the ML and Rad models, ML Bagging-OP and Rad Bagging-OP had the best performances for classification of OP. In the test set, the AUC of ML Bagging-OP, Rad Bagging-OP and DL OP for classification of OP was 0.943, 0.944 and 0.947, respectively, with no significant difference (all P>0.05). BMD obtained with all the above models had good consistency with that measured with QCT (most of the differences fell within the Bland-Altman limits of agreement, mean difference ±1.96 s), and the measurements were highly positively correlated (r=0.910—0.974, all P<0.001). Conclusion AI models based on non-contrast chest CT had high efficacy for classification of OP, and good consistency of BMD measurements was found between AI models and QCT.
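As a reference for the agreement analysis described above, the following Python sketch computes the Bland-Altman bias and 95% limits of agreement (mean difference ±1.96 SD of the differences) together with the Pearson correlation; the BMD values in the example call are made up for illustration and are not taken from the study.

```python
import numpy as np
from scipy import stats

def bland_altman_and_pearson(bmd_ai: np.ndarray, bmd_qct: np.ndarray):
    """Agreement and correlation between model BMD and QCT BMD.

    Returns the mean difference (bias), the 95% limits of agreement
    (bias ± 1.96 * SD of the differences), and Pearson r with its p-value.
    """
    diff = bmd_ai - bmd_qct
    bias = diff.mean()
    sd = diff.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    r, p = stats.pearsonr(bmd_ai, bmd_qct)
    return bias, limits, r, p

# Illustrative call with made-up BMD values (mg/cm^3); real inputs would be
# per-subject model predictions paired with the matching QCT readings.
ai = np.array([152.1, 98.4, 121.7, 87.9, 140.3])
qct = np.array([149.8, 101.2, 118.9, 90.5, 137.6])
print(bland_altman_and_pearson(ai, qct))
```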
Artificial intelligence (AI) technology has been increasingly used in the medical field along with its rapid development. Echocardiography is one of the best imaging methods for the clinical diagnosis of heart diseases, and combining it with AI could further improve its diagnostic efficiency. Although the application of AI in echocardiography remains at a relatively early stage, a variety of automated quantitative and analytical techniques are rapidly emerging and have begun to enter clinical practice. The status of clinical applications of AI in echocardiography is reviewed in this article.
The use of artificial intelligence (AI) has increased since the middle of the 20th century, as evidenced by its applications to a wide range of engineering and science problems. Air traffic management (ATM) is becoming increasingly automated and autonomous, making it an attractive area for AI applications. This paper presents a systematic review of studies that employ AI techniques to improve ATM capability. A brief account of the history, structure, and advantages of these methods is provided, followed by a description of their applications to several representative ATM tasks, such as air traffic services (ATS), airspace management (AM), air traffic flow management (ATFM), and flight operations (FO). The major contribution of the current review is a professional survey of AI applications to ATM, along with a description of their specific advantages: (i) these methods provide alternative approaches to conventional physical modeling techniques, (ii) they do not require knowledge of the relevant internal system parameters, (iii) they are computationally more efficient, and (iv) they offer compact solutions to multivariable problems. In addition, this review offers a fresh outlook on future research. One direction is providing a clear rationale for model type and structure selection for a given ATM mission. Another is understanding what makes a specific architecture or algorithm effective for a given ATM mission. These are among the most important issues that will continue to attract the attention of the AI research community and ATM work teams in the future. (Supported by the National Natural Science Foundation of China (62073330), the Natural Science Foundation of Hunan Province (2020JJ4339), and the Scientific Research Fund of Hunan Province Education Department (20B272).)
The paper presents the coupling of artificial intelligence (AI) and object-oriented methodology applied to the construction of a model-based decision support system (MBDSS). The MBDSS is designed to support the strategic decision making that leads to an optimal path from China's central-planning situation towards a market economy. To meet users' various requirements, a series of innovations in software development have been carried out, such as system formalization with OBFRAMEs in an object-oriented paradigm for problem-solving automation, techniques for intelligent cooperation among modules, a hybrid reasoning system, and the use of connectionist frameworks. Integration technology is highly emphasized and discussed in this article, and an outlook on future software engineering is given in the conclusion section.
The history of educational technology in the last 50 years contains few instances of dramatic improvements in learning based on the adoption of a particular technology. An example involving artificial intelligence occurred in the 1990s with the development of intelligent tutoring systems (ITSs). What happened with ITSs was that their success was limited to well-defined and relatively simple declarative and procedural learning tasks (e.g., learning how to write a recursive function in LISP; doing multi-column addition), and the improvements that were observed tended to be more limited than promised (e.g., one standard deviation of improvement at best rather than the promised two standard deviations). Still, there was some progress in terms of how to conceptualize learning. A seldom documented limitation was the notion of viewing learning only from content and cognitive perspectives (i.e., in terms of memory limitations, prior knowledge, bug libraries, learning hierarchies and sequences, etc.). Little attention was paid to education conceived more broadly than developing specific cognitive skills on highly constrained problems. New technologies offer the potential to create dynamic and multi-dimensional models of a particular learner, and to track large data sets of learning activities, resources, interventions, and outcomes over a great many learners. Using those data to personalize learning for a particular learner developing knowledge, competence and understanding in a specific domain of inquiry is finally a real possibility. While the potential to make significant progress clearly exists, the reality is less promising. There are many as yet unmet challenges, some of which will be mentioned in this paper. A persistent worry is that educational technologists and computer scientists will again promise too much, too soon, at too little cost, and with too little effort and attention to the realities in schools and universities.
In order to optimize the sintering process, a real-time operation guide system with artificial intelligence was developed, mainly comprising an online data acquisition subsystem, a sinter chemical composition controller, a sintering process state controller, and an abnormal conditions diagnosis subsystem. A knowledge base for sintering process control was constructed, and the inference engine of the system was established. Sinter chemical compositions were controlled by the strategies of self-adaptive prediction, internal optimization and centering on basicity, and the sintering state was stabilized centering on permeability. To meet the needs of process changes and keep the system transparent, the system has learning ability and an explanation function. The software was developed in the Visual C++ programming language. Application of the system shows that the hitting accuracy of sinter composition and burning-through point prediction is more than 85%, and that the first-grade rate of sinter chemical composition, the stability rate of the burning-through point and the stability rate of the sintering process are increased by 3%, 9% and 4%, respectively.
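To make the control strategy concrete, the sketch below illustrates, in Python, the general shape of a self-adaptive prediction step combined with a basicity-centered operation rule; the correction factor, target basicity and tolerance are hypothetical placeholders, not values from the published system (which was implemented in Visual C++).

```python
# Illustrative sketch of a self-adaptive prediction + rule-based guide for
# sinter basicity; all numbers below are hypothetical placeholders.

def predict_basicity(prev_pred: float, prev_actual: float, alpha: float = 0.5) -> float:
    """Self-adaptive one-step prediction: correct the last prediction by a
    fraction of its observed error (simple exponential error correction)."""
    return prev_pred + alpha * (prev_actual - prev_pred)

def basicity_guide(predicted: float, target: float = 1.90, tol: float = 0.05) -> str:
    """Rule-style operation guidance centered on basicity (CaO/SiO2)."""
    if predicted > target + tol:
        return "Reduce flux (limestone/dolomite) proportion in the raw mix."
    if predicted < target - tol:
        return "Increase flux proportion in the raw mix."
    return "Hold current raw mix proportions."

pred = predict_basicity(prev_pred=1.92, prev_actual=1.97)
print(round(pred, 3), basicity_guide(pred))
```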
I firmly believe that systems engineering is the requirement-driven force behind the progress of software engineering, artificial intelligence and electronic technologies, and that the development of software engineering, artificial intelligence and electronic technologies is the technical support for the progress of systems engineering. INTEGRATION can be considered as "bridging" the existing technologies and the People together into a coordinated SYSTEM.