Funding: Supported by the Youth Fund of the National Natural Science Foundation of China (No. 52304311), the National Natural Science Foundation of China (No. 52274282), and the Postdoctoral Fellowship Program of CPSF (No. GZC20233016).
Abstract: The fluidity of coal-water slurry (CWS) is crucial for industrial applications such as long-distance transportation, gasification, and combustion. However, rapid and accurate methods for assessing CWS fluidity are currently lacking. This paper proposes a method for analyzing fluidity from videos of the CWS dripping process. By integrating the temporal and spatial features of each video frame, a multi-cascade classifier for CWS fluidity is established. The classifier distinguishes four levels (A, B, C, and D) according to fluidity quality. A preliminary classification into A and D is achieved through feature engineering and the XGBoost algorithm. Convolutional neural networks (CNN) and long short-term memory (LSTM) networks are then used to separate the easily confused B and C categories. Finally, detailed comparative experiments demonstrate the step-by-step design of the proposed method and the superiority of the final solution. The method determines CWS fluidity with an accuracy above 90%, serving as a technical reference for future industrial applications.
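The cascade described above can be sketched as a two-stage decision flow. This is a minimal illustration only: the feature names, thresholds, and stand-in rules below are hypothetical and do not come from the paper, which uses trained XGBoost and CNN+LSTM models.

```python
# Sketch of a multi-cascade classifier: stage 1 (stand-in for XGBoost on
# engineered features) handles the easy extremes A and D; ambiguous samples
# fall through to stage 2 (stand-in for the CNN+LSTM) to resolve B vs C.

def stage1(features):
    """Stand-in for the XGBoost stage: returns 'A', 'D', or None (ambiguous)."""
    # hypothetical rule: a very fluid slurry drips quickly, a stiff one slowly
    if features["drip_interval_s"] < 0.5:
        return "A"
    if features["drip_interval_s"] > 5.0:
        return "D"
    return None  # defer to the second stage

def stage2(frame_scores):
    """Stand-in for the CNN+LSTM stage: separates B from C on a per-frame score."""
    mean_score = sum(frame_scores) / len(frame_scores)
    return "B" if mean_score > 0.5 else "C"

def classify(features, frame_scores):
    """Cascade: try the cheap stage first, escalate only when it abstains."""
    label = stage1(features)
    return label if label is not None else stage2(frame_scores)
```

The design point of a cascade is that the expensive sequence model only runs on samples the cheap feature-based model cannot decide.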
Funding: National High Level Hospital Clinical Research Fund (2022-PUMCH-C-011).
Abstract: Objective To determine the impact of scenario-based lectures and personalized video feedback on anesthesia residents' communication skills during preoperative visits. Methods A total of 24 anesthesia residents were randomly divided into a video group and a control group. Residents in both groups took part in a simulated interview and received a scenario-based lecture on how to communicate with patients during preoperative visits. Afterwards, residents in the video group received personalized video feedback recorded during the simulated interview. One week later, all residents undertook another simulated interview. Communication skills were assessed with the Consultation and Relational Empathy (CARE) measure by two examiners and one standardized patient (SP), all of whom were blinded to group allocation. Results CARE scores were comparable between the two groups before training and improved significantly after training in both groups (all P<0.05). The video group showed a significantly greater increase in CARE score after training than the control group, especially as assessed by the SP (t=6.980, P<0.001). Examiner-assessed and SP-assessed scores were significantly correlated (both P=0.001). Conclusion Scenario-based lectures with simulated interviews are an effective method for training anesthesia residents' communication skills, and personalized video feedback can further enhance residents' display of empathy during preoperative interviews.
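The reported t statistic compares the score increase between two independent groups. As one possible illustration (the abstract does not specify the exact test variant), Welch's unequal-variance t statistic can be computed as follows; the sample data in the test are invented for demonstration and are not the study's data.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples a and b
    (no equal-variance assumption)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    # unbiased sample variances
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    # difference of means scaled by the combined standard error
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
```

A positive t indicates the first group's mean increase exceeds the second's; the p-value would then come from the t distribution with Welch-Satterthwaite degrees of freedom.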
Funding: Supported by ZTE Industry-University-Institute Cooperation Funds.
Abstract: To improve the performance of video compression for machine vision analysis tasks, a video coding for machines (VCM) standard working group was established to promote standardization. This paper presents recent advances in VCM standardization and gives comprehensive introductions to the use cases, requirements, evaluation frameworks, and corresponding metrics of the VCM standard. Existing methods are then surveyed, covering current proposals by category and the research progress of the latest VCM meeting. Finally, conclusions are drawn.
Funding: Supported by the Key R&D Program of China under Grant No. 2022YFC3301800, the Sichuan Local Technological Development Program under Grant No. 24YRGZN0010, and ZTE Industry-University-Institute Cooperation Funds under Grant No. HC-CN-03-2019-12.
Abstract: To enhance video quality after encoding and decoding in video compression, this paper proposes a video quality enhancement framework based on local and non-local priors. Low-level features are first extracted through a single convolution layer and then processed by several conv-tran blocks (CTB) to extract high-level features, which are ultimately transformed into a residual image. The final reconstructed video frame is obtained by element-wise addition of the residual image and the original lossy video frame. Experiments show that the proposed Conv-Tran Network (CTN) model effectively recovers the quality loss caused by Versatile Video Coding (VVC) and further improves VVC's performance.
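The final reconstruction step described above is a simple element-wise addition with a clamp to the valid pixel range. A minimal sketch, with the CTB network that predicts the residual omitted and frames represented as plain nested lists of 8-bit pixel values:

```python
def reconstruct(lossy_frame, residual):
    """Add the predicted residual to the lossy frame element-wise and
    clip the result to the valid 8-bit range [0, 255]."""
    return [
        [min(255, max(0, p + r)) for p, r in zip(row_p, row_r)]
        for row_p, row_r in zip(lossy_frame, residual)
    ]
```

Predicting a residual rather than the restored frame itself is a common design choice: the network only has to model the coding error, which is typically small and zero-centered.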
Abstract: This paper proposes an adaptive hybrid forward error correction (AH-FEC) coding scheme for coping with dynamic packet loss in video and audio transmission. Specifically, the proposed scheme consists of a hybrid Reed-Solomon and low-density parity-check (RS-LDPC) coding system combined with a Kalman filter-based adaptive algorithm. The hybrid RS-LDPC coding accommodates a wide range of code lengths, employing RS coding for short codes and LDPC coding for medium-to-long codes; the boundary between the two is set by coding performance so that each code operates in its optimal region. Additionally, a Kalman filter-based adaptive algorithm handles dynamic changes in the packet loss rate: the filter estimates the packet loss rate from observation data and system models, and a redundancy decision module is driven by receiver feedback. As a result, lost packets can be fully recovered by the receiver from the redundant packets. Experimental results show that the proposed method significantly enhances decoding performance at the same redundancy and channel packet loss rate.
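The loss-rate tracking step can be sketched as a one-dimensional Kalman filter with a random-walk state model. This is an illustrative sketch, not the paper's implementation: the noise variances `q` and `r` below are assumed values, and the redundancy decision driven by the estimate is omitted.

```python
class LossRateKalman:
    """1-D Kalman filter tracking a slowly varying packet loss rate.
    q: assumed process noise variance (how fast the true rate drifts);
    r: assumed observation noise variance (how noisy each measurement is)."""

    def __init__(self, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, observed_rate):
        # predict: random-walk model, so estimate stays put, variance grows
        self.p += self.q
        # correct: blend prediction and observation by the Kalman gain
        k = self.p / (self.p + self.r)          # gain in (0, 1)
        self.x += k * (observed_rate - self.x)
        self.p *= (1 - k)
        return self.x
```

Each call feeds in the loss rate observed over the latest feedback interval and returns a smoothed estimate that the sender could use to size its redundancy.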
Abstract: On 15 February 2024, OpenAI released its first video generation model, "Sora", another disruptive work from the company after ChatGPT. The model reportedly generates HD videos up to one minute long from text supplied by the user. For the time being, its impact on the textile industry may be indirect, but it may also have some interesting and practical effects as the model is developed and refined. Here are some of the effects Sora may have on the textile industry.
Abstract: To transfer color data from a device-dependent color space (that of a video camera) into a device-independent color space, a multilayer feedforward network with the error backpropagation (BP) learning rule was treated as a nonlinear transformer realizing the mapping from the RGB color space to the CIELAB color space. Different network structures yielded varying mapping accuracies. BP neural networks can provide satisfactory mapping accuracy for color space transformation in video cameras.
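The forward pass of such a single-hidden-layer network is shown below as a minimal sketch. The weight layout (each row holding the incoming weights plus a bias) is an assumption for illustration; in the study the weights would come from BP training on measured RGB/CIELAB pairs, which is omitted here.

```python
import math

def sigmoid(z):
    """Logistic activation used in the hidden layer."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(rgb, w_hidden, w_out):
    """One forward pass mapping a normalized RGB triple to an output vector
    (e.g. an L*, a*, b* estimate). Each row of w_hidden is
    [w_r, w_g, w_b, bias]; each row of w_out is [w_h1, ..., w_hn, bias]."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, rgb)) + b)
              for *ws, b in w_hidden]
    return [sum(w * h for w, h in zip(ws, hidden)) + b
            for *ws, b in w_out]
```

The hidden-layer nonlinearity is what lets the network approximate the nonlinear RGB-to-CIELAB relationship that a plain linear matrix transform cannot capture.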
Abstract: This paper presents an autofocus system for USB video cameras. Video images captured by the USB video camera are processed on a computer with an FFT or a differentiation operation to obtain spectral-magnitude or differential-magnitude data. From these data, the computer determines whether the camera lens is out of focus and drives a motor to move the lens to the in-focus position. Measures for improving autofocus accuracy are also discussed. Experimental results show that the system achieves reliable autofocus for USB video cameras and makes cameras with a USB interface simpler and more convenient to use.
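The differential-magnitude focus measure mentioned above can be sketched as a gradient-energy score: a defocused image is blurred, so its pixel-to-pixel differences shrink and the score drops. This is a simplified illustration of the idea (the paper's FFT-based variant scores high-frequency spectral magnitude instead); images here are plain nested lists of pixel values.

```python
def sharpness(image):
    """Gradient-energy focus measure: sum of squared horizontal pixel
    differences. The autofocus loop would step the lens motor toward the
    position that maximizes this score."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image
               for i in range(len(row) - 1))
```

A sharp edge pattern scores far higher than the same pattern after blurring, which is exactly the contrast the focus search exploits.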