Magnesium (Mg) is a promising alternative to lithium (Li) as an anode material in solid-state batteries due to its abundance and high theoretical volumetric capacity. However, sluggish Mg-ion conduction in the lattice of solid-state electrolytes (SSEs) is one of the key challenges hampering the development of Mg-ion solid-state batteries. Though various Mg-ion SSEs have been reported in recent years, key insights are difficult to derive from any single literature report. Moreover, the structure-performance relationships of Mg-ion SSEs need to be further unraveled to provide more precise design guidelines for SSEs. In this viewpoint article, we analyze the structural characteristics of Mg-based SSEs with high ionic conductivity reported over the last four decades based upon data mining, and we provide big-data-derived insights into the challenges and opportunities in developing next-generation Mg-ion SSEs.
According to the chaotic and non-linear character of power load data, the time series matrix is established with the theory of phase-space reconstruction, and Lyapunov exponents of the chaotic time series are computed to determine the time delay and the embedding dimension. Because of the differing features of the data, a data mining algorithm is applied to classify the data into groups. Redundant information is eliminated by data mining technology, and historical loads whose features are highly similar to those of the forecasting day are retrieved by the system. As a result, the training data can be reduced and the computing speed improved when constructing the support vector machine (SVM) model. SVM is then used to predict the power load with the parameters obtained in preprocessing. To prove the effectiveness of the new model, the data mining SVM algorithm is compared with a single SVM and a back-propagation (BP) network. The new DSVM algorithm improves the forecast accuracy by 0.75% and 1.10% compared with the single SVM at embedding dimensions of 11 and 14, respectively, and by 1.73% compared with the BP network. This indicates that DSVM achieves a marked improvement in short-term power load forecasting.
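The phase-space reconstruction step described above can be sketched in a few lines of NumPy; the toy load series, the delay of 2, and the embedding dimension of 11 are illustrative assumptions, and the SVM regression itself (omitted here) would typically come from a library such as scikit-learn:

```python
import numpy as np

def time_delay_embed(series, dim, tau):
    """Phase-space reconstruction: map a scalar series into dim-dimensional
    delay vectors (s_t, s_{t+tau}, ..., s_{t+(dim-1)*tau})."""
    n = len(series) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.array([[series[t + k * tau] for k in range(dim)] for t in range(n)])

# toy "load" series; dim and tau would come from the Lyapunov-exponent analysis
load = np.sin(np.linspace(0.0, 20.0, 200))
dim, tau = 11, 2
X = time_delay_embed(load, dim, tau)
y = load[(dim - 1) * tau + 1:]   # one-step-ahead targets for each delay vector
X = X[:-1]                       # the last vector has no target
print(X.shape, y.shape)          # -> (179, 11) (179,)
```

Each row of `X` paired with its entry of `y` forms one training sample for the downstream regressor.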
In order to construct a data mining framework for generic project risk research, the basic definitions of the generic project risk element were given, and a new model of the generic project risk element was then presented based on these definitions. Using the model, data mining was applied to acquire the risk transmission matrix from analysis of historical databases, solving the problem of quantitative calculation among generic project risk elements. The method handles risk-element transmission problems with a limited number of states well; to obtain these limited states, fuzzy theory was used to discretize the data in the historical databases. In an example, the controlling risk degree was chosen as P(Rs≥2)≤0.1, meaning that the probability of a risk state of 2 or higher in the project is at most 0.1, and the risk element R3 was chosen to control the project. The results show that three risk-element transmission matrices can be acquired from the four risk elements, and the frequency histogram and cumulative frequency histogram of each risk element are also given.
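The transmission calculation can be illustrated with a small NumPy example; the three-state distribution and the transmission matrix below are hypothetical, not mined from any real database:

```python
import numpy as np

# Hypothetical discrete risk distribution over states {0, 1, 2} for a parent
# risk element, and a mined transmission matrix T, where T[i, j] is the
# probability that the downstream element is in state j given state i upstream.
parent = np.array([0.7, 0.2, 0.1])
T = np.array([[0.92, 0.07, 0.01],
              [0.30, 0.55, 0.15],
              [0.10, 0.40, 0.50]])

child = parent @ T               # transmitted risk distribution
p_ge_2 = child[2:].sum()         # P(Rs >= 2) for the downstream element
print(round(p_ge_2, 3), p_ge_2 <= 0.1)   # -> 0.087 True
```

Here the transmitted probability of reaching state 2 stays under the 0.1 control threshold of the example.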
Objective speech quality is difficult to measure without the input reference speech. Mapping methods using data mining are investigated and designed to improve output-based speech quality assessment. The degraded speech is first separated into three classes (unvoiced, voiced and silence); then the consistency between the degraded speech signal and a pre-trained reference model for each class is calculated and mapped to an objective speech quality score using data mining. A fuzzy Gaussian mixture model (GMM) trained on perceptual linear predictive (PLP) features is used to generate the artificial reference model. The mean opinion score (MOS) mapping methods, including multivariate non-linear regression (MNLR), fuzzy neural network (FNN) and support vector regression (SVR), are designed and compared with the standard ITU-T P.563 method. Experimental results show that the assessment methods with data mining perform better than ITU-T P.563. Moreover, FNN and SVR are more effective than MNLR, and FNN performs best, with a 14.50% increase in the correlation coefficient and a 32.76% decrease in the root-mean-square MOS error.
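As a toy illustration of the MOS-mapping idea (not the paper's MNLR, FNN or SVR models), a quadratic fit can map a scalar consistency score to MOS; the scores and labels below are invented:

```python
import numpy as np

# hypothetical per-utterance consistency scores (higher = closer to the
# reference model) and their subjective MOS labels
score = np.array([0.9, 0.8, 0.75, 0.6, 0.5, 0.4, 0.3, 0.2])
mos   = np.array([4.5, 4.2, 4.0, 3.4, 3.0, 2.5, 2.1, 1.6])

coef = np.polyfit(score, mos, deg=2)     # simple non-linear (quadratic) mapping
pred = np.polyval(coef, score)
rmse = np.sqrt(np.mean((pred - mos) ** 2))
print(rmse)                              # small residual on this toy fit
```

A real system would learn this mapping per speech class on a large labeled corpus rather than on eight invented points.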
It is difficult to detect anomalies whose matching relationship among some data attributes differs markedly from the rest of a dataset. Aiming at this problem, an approach based on wavelet analysis for detecting and amending anomalous samples was proposed. Taking full advantage of the multi-resolution and local-analysis properties of wavelets, this approach detects and amends anomalous samples effectively. To realize rapid numerical computation of the wavelet transform for a discrete sequence, a modified algorithm based on the Newton-Cotes formula was also proposed. The experimental results show that the approach is feasible and practical, with good results.
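As a rough sketch of the idea (not the authors' algorithm), one level of a discrete Haar wavelet transform already localizes an isolated anomalous sample; the toy signal below is an assumption:

```python
import numpy as np

def haar_detail(x):
    """One level of the discrete Haar wavelet transform; the detail
    coefficients localize abrupt deviations in a sequence."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = np.append(x, x[-1])          # pad to even length
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

sig = np.linspace(0.0, 1.0, 64)          # smooth toy signal
sig[40] += 5.0                           # inject one anomalous sample
d = haar_detail(sig)
pair = int(np.argmax(np.abs(d)))         # detail coefficient with max energy
print(pair)                              # -> 20, i.e. samples 40 and 41
sig[40] = (sig[39] + sig[41]) / 2        # amend the flagged sample
```

The amendment step replaces the flagged sample by the average of its neighbors, mirroring the detect-then-amend workflow of the abstract.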
The detection of outliers and change points in time series has become a research focus in time series data mining, since it can be used for fraud detection, rare event discovery, event/trend change detection, etc. In most previous works, outlier detection and change point detection have not been related explicitly, and change point detection has not considered the influence of outliers. In this work, a unified detection framework was presented to deal with both. The framework is based on ALARCON-AQUINO and BARRIA's change point detection method and adopts a two-stage detection scheme to separate outliers from change points. Its advantages are twofold: first, the unified structure for change detection and outlier detection reduces the computational complexity and simplifies the detection procedure; second, performing outlier detection before change point detection avoids the influence of outliers on the change point detection and thus improves its accuracy. Simulation experiments of the proposed method on both model data and actual application data achieved 100% detection accuracy. Comparisons between the traditional detection method and the proposed method further demonstrate that the unified detection structure is more accurate when the time series are contaminated by outliers.
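A minimal sketch of the two-stage strategy, with a rolling-median outlier stage and a simple mean-split change point detector standing in for the wavelet-based method of ALARCON-AQUINO and BARRIA; the data and thresholds are illustrative:

```python
import numpy as np

def two_stage_detect(x, out_thresh=6.0, win=5):
    """Stage 1: flag outliers against a rolling median (robust MAD scale) and
    amend them; Stage 2: locate the change point on the cleaned series by
    maximizing the between-segment mean difference."""
    x = np.asarray(x, dtype=float)
    med = np.array([np.median(x[max(0, i - win):i + win + 1]) for i in range(len(x))])
    resid = x - med
    sigma = np.median(np.abs(resid)) / 0.6745 + 1e-12
    outliers = np.where(np.abs(resid) > out_thresh * sigma)[0]
    clean = x.copy()
    clean[outliers] = med[outliers]                  # amend rather than delete
    scores = [abs(clean[:k].mean() - clean[k:].mean()) for k in range(2, len(x) - 2)]
    cp = int(np.argmax(scores)) + 2
    return outliers, cp

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(3.0, 0.1, 50)])
x[10] = 8.0                                          # isolated outlier
outliers, cp = two_stage_detect(x)
print(outliers, cp)
```

Because the outlier at index 10 is amended before the split search, it cannot masquerade as a level shift, which is exactly the benefit the abstract claims for detecting outliers first.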
Low-rank matrix recovery is an important problem extensively studied in the machine learning, data mining and computer vision communities. A novel method is proposed for low-rank matrix recovery, targeting higher recovery accuracy and a stronger theoretical guarantee. Specifically, the proposed method is based on a nonconvex optimization model, which recovers the low-rank matrix from the noisy observation. To solve the model, an effective algorithm is derived by minimizing over the variables alternately. It is proved theoretically that this algorithm has a stronger guarantee than existing work. In natural image denoising experiments, the proposed method achieves lower recovery error than the two compared methods. The proposed low-rank matrix recovery method is also applied to two real-world problems, namely removing noise from verification codes and removing watermarks from images, in which the images recovered by the proposed method are less noisy than those of the two compared methods.
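The paper's model is nonconvex; as a baseline sketch of low-rank recovery from noise, the classical convex singular value thresholding estimator can be written in a few lines of NumPy (the rank-3 toy matrix and lam = 1.0 are assumptions):

```python
import numpy as np

def svt(Y, lam):
    """Singular value thresholding: the closed-form minimizer of
    0.5*||X - Y||_F^2 + lam*||X||_* over X."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(0)
L = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 40))   # rank-3 ground truth
Y = L + 0.1 * rng.normal(size=(40, 40))                   # noisy observation
X = svt(Y, lam=1.0)
err_before = np.linalg.norm(Y - L) / np.linalg.norm(L)
err_after = np.linalg.norm(X - L) / np.linalg.norm(L)
print(err_before, err_after)                              # thresholding reduces error
```

Shrinking the spectrum suppresses the small noise-dominated singular values while mostly preserving the large signal ones; nonconvex surrogates such as the paper's aim to reduce the bias this shrinkage introduces on the signal components.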
Rapid developments in telecommunication, sensor data, financial applications, data stream analysis, and so on have increased the rate of data arrival, for which data mining is a vital process. The data analysis process consists of different tasks, among which data stream classification faces more challenges than other commonly used techniques. Even though classification is a continuous process, it requires a design that can adapt the classification model to concept change or boundary change between the classes. Hence, we design a novel fuzzy classifier, THRFuzzy, to classify newly incoming data streams. Rough set theory together with a tangential holoentropy function helps in designing the dynamic classification model. The approach uses kernel fuzzy c-means (FCM) clustering to generate the rules and the tangential holoentropy function to update the membership function. The performance of the proposed THRFuzzy method is verified on three datasets, namely the skin segmentation, localization, and breast cancer datasets, evaluating accuracy and time and comparing against the HRFuzzy and adaptive k-NN classifiers. The experimental results show that the THRFuzzy classifier achieves better classification results, with higher accuracy in less time than the existing classifiers.
A novel binary particle swarm optimization for frequent itemset mining from high-dimensional datasets (BPSO-HD) was proposed, incorporating two improvements. First, dimensionality reduction of the initial particles was designed to ensure reasonable initial fitness; then, dynamic dimensionality cutting of the dataset was introduced to reduce the search space. On four high-dimensional datasets, BPSO-HD was compared with Apriori to test its reliability, and with ordinary BPSO and quantum swarm evolutionary (QSE) algorithms to demonstrate its advantages. The experiments show that the results given by BPSO-HD are reliable and better than those generated by BPSO and QSE.
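A hedged sketch of two core ingredients of binary PSO for itemset mining, the sigmoid-based position update and a support evaluation used as fitness; the toy transaction matrix and PSO constants are illustrative:

```python
import numpy as np

def bpso_step(pos, vel, pbest, gbest, rng, w=0.7, c1=1.5, c2=1.5):
    """One binary-PSO update: velocities stay real-valued, and each bit is
    resampled through a sigmoid transfer of its velocity."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))
    pos = (rng.random(pos.shape) < prob).astype(int)
    return pos, vel

def support(itemset_mask, transactions):
    """Fitness: fraction of transactions containing every item in the mask."""
    return (transactions >= itemset_mask).all(axis=1).mean()

# toy binary transaction matrix: rows = transactions, columns = items
T = np.array([[1, 1, 0, 1],
              [1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 0, 1]])
print(support(np.array([1, 1, 0, 0]), T))      # itemset {0, 1} -> 0.75

rng = np.random.default_rng(0)
swarm = rng.integers(0, 2, size=(6, 4))        # 6 particles, 4 items
vel = np.zeros((6, 4))
swarm, vel = bpso_step(swarm, vel, swarm.copy(), swarm[0], rng)
```

BPSO-HD's improvements (reduced initial dimensionality, dynamic dataset cutting) would sit on top of this basic loop and are not reproduced here.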
A fast generation method of fuzzy rules for flux optimization decision-making was proposed in order to extract linguistic knowledge from numerical data in the matte converting process. Fuzzy if-then rules with real-number consequents were extracted from numerical data, and a linguistic representation method for deriving linguistic rules from such fuzzy if-then rules was developed. The linguistic representation consists of two linguistic variables with a degree of certainty, and the storage structure of the rule base was described. The simulation results show that the method involves neither a time-consuming iterative learning procedure nor complicated rule generation mechanisms, and can approximate complex systems. The method was applied to determine the flux amount of a copper converting furnace in the matte converting process. The real-world result shows that the mass fraction of Cu in slag is reduced by 0.5%.
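A zero-order fuzzy inference sketch with real-number consequents, in the spirit of the rules described above; the membership functions, input value, and consequent flux amounts are all hypothetical:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def infer(x):
    """Two hypothetical rules on one input (e.g. a normalized matte grade),
    each with a real-number consequent (flux amount): fire both rules and
    defuzzify by the weighted average of the consequents."""
    w = np.array([tri(x, 0.0, 0.2, 0.5), tri(x, 0.3, 0.6, 1.0)])  # firing strengths
    y = np.array([10.0, 25.0])                                    # rule consequents
    return float((w * y).sum() / (w.sum() + 1e-12))

print(infer(0.4))      # both rules fire equally here -> midpoint 17.5
```

Extracting such rules directly from data, rather than tuning them iteratively, is the speed advantage the abstract claims.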
Modular technology can effectively support the rapid design of products, and it is one of the key technologies for realizing mass customization design. With the application of product lifecycle management (PLM) systems in enterprises, product lifecycle data have been effectively managed. However, these data have not been fully utilized in module division, especially for complex machinery products. To solve this problem, a product module mining method for the PLM database is proposed to improve the effect of module division. First, product data are extracted from the PLM database by a data extraction algorithm. Then, data normalization and structural logic inspection are used to preprocess the extracted defective data. The preprocessed product data are analyzed and expressed in a matrix for module mining. Finally, the fuzzy c-means (FCM) clustering algorithm is used to generate product modules, which are stored in a product module library after module marking and post-processing. The feasibility and effectiveness of the proposed method are verified by a case study of a high-pressure valve.
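The FCM clustering step can be sketched in plain NumPy; the two-dimensional "component feature" toy data below are an assumption, as the paper's actual feature matrix comes from the PLM database:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100):
    """Plain fuzzy c-means: alternate the membership and centroid updates."""
    idx = np.linspace(0, len(X) - 1, c).astype(int)
    centers = X[idx].astype(float)                     # seed with sample points
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)              # membership update
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None] # weighted centroid update
    return U, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),          # toy "module 1" features
               rng.normal(5.0, 0.1, (10, 2))])         # toy "module 2" features
U, centers = fcm(X, c=2)
modules = U.argmax(axis=1)                             # hard module assignment
print(modules)
```

The soft memberships in `U` are what distinguish FCM from k-means: a component that fits two modules almost equally keeps near-equal memberships, which the module-marking step can then inspect.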
Funding: Supported by the Ensemble Grant for Early Career Researchers 2022-2023 and the 2023 Ensemble Continuation Grant of Tohoku University, the Hirose Foundation, and the AIMR Fusion Research Grant; supported by JSPS KAKENHI Nos. JP23K13599, JP23K13703, JP22H01803, JP18H05513, and JP23K13542. F.Y. and Q.W. acknowledge the China Scholarship Council (CSC) for supporting their studies in Japan.
Funding: Project (70671039) supported by the National Natural Science Foundation of China
Funding: Project (70572090) supported by the National Natural Science Foundation of China
Funding: Projects (61001188, 1161140319) supported by the National Natural Science Foundation of China; Project (2012ZX03001034) supported by the National Science and Technology Major Project; Project (YETP1202) supported by the Beijing Higher Education Young Elite Teacher Project, China
Funding: Project (50374079) supported by the National Natural Science Foundation of China
Funding: Project (2011AA040603) supported by the National High Technology Research & Development Program of China; Project (201202226) supported by the Natural Science Foundation of Liaoning Province, China
Funding: Projects (61173122, 61262032) supported by the National Natural Science Foundation of China; Projects (11JJ3067, 12JJ2038) supported by the Natural Science Foundation of Hunan Province, China
Funding: Supported by proposal No. OSD/BCUD/392/197, Board of Colleges and University Development, Savitribai Phule Pune University, Pune
Funding: Project (50374079) supported by the National Natural Science Foundation of China; Project (2002CB312200) supported by the State Key Fundamental Research and Development Program of China
Funding: Project (51275362) supported by the National Natural Science Foundation of China; Project (2013M542055) supported by the China Postdoctoral Science Foundation