Purpose: The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases. Variations in the name spelling of individual scholars further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors sharing the same surname and first initial. We illustrate our approach using three case studies.

Design/methodology/approach: The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net: a Web of Science (WOS) search for a given author's last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for 'John Doe' would assume the form 'Doe, J'). Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives).
From this corpus we proceed to iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point: if 'Doe, J' and 'Doe, John' share the same author identifier, this is sufficient for us to conclude that they are one and the same individual. We find email addresses similarly adequate: if two author names that share the same surname and first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data are not always available, however. When this occurs, other fields are used to address the author uncertainty problem.

Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if 'Doe, John' and 'Doe, J' have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; a single affiliation may employ two or more faculty members sharing the same surname and first initial. Similarly, it is conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination.

Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification. To achieve name consolidation independent of author identifier matches, we have developed a procedure that is used with bibliometric software called VantagePoint (see www.thevantagepoint.com).
While the application of our technique does not exclusively depend on VantagePoint, it is the software we found most efficient in this study. The script we developed implements our name disambiguation procedure in a way that significantly reduces manual effort on the user's part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that manual application of our procedure takes a significant amount of time and effort, especially when working with larger datasets.

Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author's ORCID iD or email address attached to it). The script proceeds to identify and combine all author names sharing the primary author's surname and first initial that share commonalities in the WOS field on which the user was prompted to consolidate author names. This typically results in a significant reduction in the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to).

Research limitations: Match field coverage can be an issue. When field coverage is paltry, dataset reduction is not as significant, which results in more manual inspection on the user's part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example).
Moreover, the technique we advance is (sometimes, but not always) likely to have a difficult time dealing with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.

Practical implications: The procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist.

Originality/value: The procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting approaches with more recent ones, harnessing the benefits of both.

Findings: Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each of our case studies. We find that match field effectiveness is in large part a function of field coverage. The original dataset sizes, the timeframes analyzed, and the subject areas in which the authors publish also differ across the case studies. Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, especially in the more specific match fields, as well as to a more modest and manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable. The procedure advanced herein is practical, replicable, and relatively user-friendly. It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which is not always available, structured, or easy to work with.
The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user's part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.
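The iterative, field-by-field consolidation described above can be sketched in a few lines of Python. This is a minimal illustration of the idea only, not the authors' VantagePoint script; the record structure and field names (`author`, `orcid`, `email`, `affiliation`) are hypothetical stand-ins for WOS fields.

```python
def consolidate(records, seed_name, match_fields):
    """Iteratively merge author-name records that share a value, in any
    match field, with the growing set of confirmed (true-positive) records.

    records      -- list of dicts, e.g. {"author": "Doe, J", "orcid": ...,
                    "email": ..., "affiliation": ...}
    seed_name    -- the 'primary author' name known to be a true positive
    match_fields -- fields to consolidate on, in order of reliability,
                    e.g. ["orcid", "email", "affiliation"]
    """
    confirmed = [r for r in records if r["author"] == seed_name]
    pool = [r for r in records if r["author"] != seed_name]

    for field in match_fields:
        # Values of this field already attached to confirmed records.
        known = {r.get(field) for r in confirmed if r.get(field)}
        still_unmatched = []
        for r in pool:
            if r.get(field) and r.get(field) in known:
                confirmed.append(r)        # commonality found: same person
                known.add(r.get(field))    # newly learned value helps later rows
            else:
                still_unmatched.append(r)
        pool = still_unmatched             # remainder awaits the next field

    # `pool` is the reduced dataset left for manual inspection.
    return confirmed, pool
```

Each pass through a match field shrinks the pool of ambiguous records, mirroring the multiple rounds of reduction the procedure performs before manual verification.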
We consider an image semantic communication system in a time-varying fading Gaussian MIMO channel with a finite number of channel states. A deep learning-aided broadcast approach scheme is proposed to support adaptive semantic transmission under different channel states. We combine the classic broadcast approach with the image transformer to implement this adaptive joint source and channel coding (JSCC) scheme. Specifically, we utilize a neural network (NN) to jointly optimize the hierarchical image compression and superposition code mapping within this scheme. The learned transformers and codebooks allow recovery of the image with adaptive quality and a low error rate at the receiver side in each channel state. The simulation results show that our proposed scheme can dynamically adapt the coding to the current channel state and outperform some existing intelligent schemes with fixed coding blocks.
In this article, a novel scalarization technique, called the improved objective-constraint approach, is introduced to find efficient solutions of a given multiobjective programming problem. The presented scalarized problem extends the objective-constraint problem. It is demonstrated how adding variables to the scalarized problem can lead to conditions for (weakly, properly) Pareto optimal solutions. Applying the obtained necessary and sufficient conditions, two algorithms for generating Pareto front approximations of bi-objective and three-objective programming problems are designed. These algorithms are easy to implement and can achieve an even approximation of (weakly, properly) Pareto optimal solutions. They can also be generalized to optimization problems with more than three criterion functions. The effectiveness and capability of the algorithms are demonstrated on test problems.
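For context, the classical objective-constraint (epsilon-constraint) scalarization that the improved approach builds on can be written as follows. This is the textbook baseline formulation, not the authors' extended problem:

```latex
\begin{aligned}
\min_{x \in X} \quad & f_1(x) \\
\text{s.t.} \quad & f_j(x) \le \varepsilon_j, \qquad j = 2, \dots, m,
\end{aligned}
```

where $f_1, \dots, f_m$ are the criterion functions and the bounds $\varepsilon_j$ are varied systematically to trace out an approximation of the Pareto front.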
OBJECTIVE To assess the feasibility and safety of a minimalistic approach to left atrial appendage occlusion (LAAO) guided by cardiac computed tomography angiography (CCTA).

METHODS Ninety consecutive patients who underwent LAAO, with or without CCTA guidance, were matched (1:2). Each step of the LAAO procedure in the computed tomography (CT) guidance group (CT group) was directed by preprocedural CT planning. In the control group, LAAO was performed using the standard method. All patients were followed up for 12 months, and device surveillance was conducted using CCTA.

RESULTS A total of 90 patients were included in the analysis, with 30 patients in the CT group and 60 matched patients in the control group. All patients were successfully implanted with Watchman devices. The mean ages for the CT group and the control group were 70.0 ± 9.4 years and 68.4 ± 11.9 years (P = 0.52), respectively. The procedure duration (45.6 ± 10.7 min vs. 58.8 ± 13.0 min, P < 0.001) and hospital stay (7.5 ± 2.4 days vs. 9.6 ± 2.8 days, P = 0.001) in the CT group were significantly shorter compared to the control group. However, the total radiation dose was higher in the CT group compared to the control group (904.9 ± 348.0 mGy vs. 711.9 ± 211.2 mGy, P = 0.002). There were no significant differences in periprocedural pericardial effusion (3.3% vs. 6.3%, P = 0.8) between the two groups. The rate of postprocedural adverse events (13.3% vs. 18.3%, P = 0.55) was comparable between both groups at 12 months of follow-up.

CONCLUSIONS CCTA is capable of detailed LAAO procedure planning. Minimalistic LAAO with preprocedural CCTA planning was feasible and safe, with shortened procedure time and an acceptable increase in radiation and contrast consumption. For patients with contraindications to general anesthesia and/or transesophageal echocardiography, this promising method may be an alternative to conventional LAAO.
Automatic voltage regulators (AVR) are designed to manipulate a synchronous generator's voltage level automatically. Proportional integral derivative (PID) controllers are typically used in AVR systems to regulate voltage. Although advanced PID tuning methods have been proposed, the actual voltage response differs from the theoretical predictions due to modeling errors and system uncertainties. This requires continuous fine-tuning of the PID parameters. However, manual adjustment of these parameters can compromise the stability and robustness of the AVR system. This study focuses on the online self-tuning of PID controllers, called indirect design approach-2 (IDA-2), in AVR systems while preserving robustness. In particular, we indirectly tune the PID controller by shifting the frequency response. The new PID parameters depend on the frequency-shifting constant and the previously optimized PID parameters. Adjusting the frequency-shifting constant modifies all the PID parameters simultaneously, thereby improving the control performance and robustness. We evaluate the robustness of the proposed online PID tuning method by comparing the gain margins (GMs) and phase margins (PMs) with previously optimized PID parameters under parameter uncertainties. The proposed method is further evaluated in terms of disturbance rejection, measurement noise, and frequency response analysis during parameter uncertainty calculations against existing methods. Simulations show that the proposed method significantly improves the robustness of the controller in the AVR system. In summary, online self-tuning enables automated PID parameter adjustment in an AVR system while maintaining stability and robustness.
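The frequency-shifting idea can be illustrated with a short sketch. Assuming the shift amounts to substituting s with s/λ in the PID transfer function C(s) = Kp + Ki/s + Kd·s (an illustrative reading of the abstract, not the paper's exact IDA-2 rule), a single constant λ retunes the integral and derivative gains together:

```python
def shift_pid(kp, ki, kd, lam):
    """Retune C(s) = kp + ki/s + kd*s by shifting its frequency response.

    Substituting s -> s/lam gives C(s/lam) = kp + lam*ki/s + (kd/lam)*s,
    so one shifting constant `lam` rescales the integral and derivative
    actions simultaneously.  (Illustrative sketch, not the IDA-2 formula.)
    """
    return kp, lam * ki, kd / lam

def pid_response(kp, ki, kd, w):
    """Frequency response C(jw) of the PID controller at frequency w."""
    return complex(kp, kd * w - ki / w)

# Sanity check: the shifted controller evaluated at w must equal the
# original controller evaluated at w/lam (the response is shifted along
# the frequency axis, preserving its shape, and hence its margins).
kp, ki, kd, lam, w = 1.0, 0.5, 0.1, 2.0, 3.0
kp2, ki2, kd2 = shift_pid(kp, ki, kd, lam)
assert abs(pid_response(kp2, ki2, kd2, w) - pid_response(kp, ki, kd, w / lam)) < 1e-12
```

Because the shift only slides the loop's frequency response along the frequency axis, the gain and phase margins of the previously optimized design are largely carried over, which is consistent with the robustness-preservation claim above.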
For the first time, the isogeometric analysis (IGA) approach is used to model and analyze free and forced vibrations of a doubly-curved magneto-electro-elastic (MEE) composite shallow shell resting on a visco-Pasternak foundation in a hygro-temperature environment. The doubly-curved MEE shallow shell types investigated include spherical, cylindrical, saddle, and elliptical shallow shells subjected to blast load. The Maxwell equations and electromagnetic boundary conditions are used to determine the variation of the electric and magnetic potentials. The MEE shallow shell's equations of motion are derived from Hamilton's principle and refined higher-order shear theory. Then, the IGA method is used to derive the natural frequencies and dynamic responses of the shell under various boundary conditions. The accuracy of the model and method is verified through reliable numerical comparisons. Aside from this, the impact of the input parameters on the free and forced vibration of the doubly-curved MEE shallow shell is examined in detail. These results may be useful in the design and manufacture of military structures such as warships, fighter aircraft, drones, and missiles.
There is increasing attention on oxidative derivatives of triglycerides, a group of potential thermal-processing-induced food toxicants formed during the thermal processing of food lipids. This review aims to summarize current knowledge about their formation mechanisms, detection approaches, and toxicological impacts. Oxidative derivatives of triglycerides are generated through the oxidation, cyclization, polymerization, and hydrolysis of triglycerides under high temperature and abundant oxygen. Analytical techniques for these components, including GC, HPSEC, MS, and ¹H-NMR, are discussed. In addition, their toxic effects on human health, including effects on the liver, intestines, cardiovascular system, immune system, and metabolism, are elucidated. The information in this review could be used to improve the understanding of oxidative derivatives of triglycerides and ultimately improve academic and industrial strategies for eliminating these compounds in thermally processed food systems.
Desertification creeps across the landscape like a silent thief, stealing life and leaving behind a barren wasteland. This year's World Environment Day theme, 'Land Restoration, desertification and drought resilience', is a stark reminder of the pressing need to heal the wounds we've inflicted on our planet.
Rising antimicrobial resistance (AMR) is a global health crisis for countries of all economic levels, alongside the broader challenge of access to antibiotics. As a result, development goals for child survival, healthy ageing, poverty reduction, and food security are at risk. Preserving antimicrobial effectiveness, a global public good, requires political will, targets, accountability frameworks, and funding. The upcoming second high-level meeting on AMR at the UN General Assembly (UNGA) in September 2024 is evidence of political interest in addressing the problem of AMR, but action on targets, accountability, and funding, absent from the 2016 UNGA resolution, is needed.
Funding: Support from the US National Science Foundation under Award 1645237.
Funding: Supported in part by the National Key R&D Project of China under Grant 2020YFA0712300, by the National Natural Science Foundation of China under Grants NSFC-62231022 and 12031011, and in part by the NSF of China under Grant 62125108.
Funding: Supported by the Logistics Support Ministry of China (No. 22BJZ41) and the Capital's Funds for Health Improvement and Research (No. CFH2024-2-5071).
Funding: The authors thank the Malaysian Ministry of Higher Education (MOHE) for support through the Fundamental Research Grant Scheme (FRGS/1/2021/ICT02/UMP/03/3) (UMPSA Reference: RDU 210117).
Abstract: For the first time, the isogeometric analysis (IGA) approach is used to model and analyze free and forced vibrations of a doubly-curved magneto-electro-elastic (MEE) composite shallow shell resting on a visco-Pasternak foundation in a hygro-thermal environment. Four types of doubly-curved MEE shallow shell, namely spherical, cylindrical, saddle, and elliptical shallow shells, subjected to blast load are investigated. The Maxwell equation and electromagnetic boundary conditions are used to determine the variation of the electric and magnetic potentials. The MEE shallow shell's equations of motion are derived from Hamilton's principle and a refined higher-order shear deformation theory. The IGA method is then used to obtain the natural frequencies and dynamic responses of the shell under various boundary conditions. The accuracy of the model and method is verified through reliable numerical comparisons. In addition, the impact of the input parameters on the free and forced vibration of the doubly-curved MEE shallow shell is examined in detail. These results may be useful in the design and manufacture of military structures such as warships, fighter aircraft, drones, and missiles.
Funding: funded by the National Natural Science Foundation of China (Grant No. 32272426).
Abstract: There is increasing attention on oxidative derivatives of triglycerides, a group of potential thermal-processing-induced food toxicants formed during the thermal processing of food lipids. This review aims to summarize current knowledge about their formation mechanisms, detection approaches, and toxicological impacts. Oxidative derivatives of triglycerides are generated through the oxidation, cyclization, polymerization, and hydrolysis of triglycerides under high temperature and abundant oxygen. Analytical techniques, including GC, HPSEC, MS, and ¹H-NMR, are discussed for analyzing these components. In addition, their toxic effects on human health, including effects on the liver, intestines, cardiovascular system, immune system, and metabolism, are elucidated. The information in this review could improve the understanding of oxidative derivatives of triglycerides and ultimately inform academic and industrial strategies for eliminating these compounds in thermally processed food systems.
Abstract: Desertification creeps across the landscape like a silent thief, stealing life and leaving behind a barren wasteland. This year's World Environment Day theme, 'Land Restoration, Desertification and Drought Resilience', is a stark reminder of the pressing need to heal the wounds we've inflicted on our planet.
Abstract: Rising antimicrobial resistance (AMR) is a global health crisis for countries of all economic levels, alongside the broader challenge of access to antibiotics. As a result, development goals for child survival, healthy ageing, poverty reduction, and food security are at risk. Preserving antimicrobial effectiveness, a global public good, requires political will, targets, accountability frameworks, and funding. The upcoming second high-level meeting on AMR at the UN General Assembly (UNGA) in September 2024 is evidence of political interest in addressing the problem of AMR, but action on targets, accountability, and funding, absent from the 2016 UNGA resolution, is needed.