Abstract: Streaming media technology is one of the mainstream technologies in current network applications, and the encoder is an important component of a streaming media system. Taking Microsoft's streaming media encoder, Windows Media Encoder, as an example, this paper analyzes the English SDK documentation provided for secondary development of the encoder, summarizes the basic steps for secondary development of a streaming media encoder with this SDK, and illustrates the practical programming approach with an example.
Funding: Funded in part by the Heilongjiang Province Financial Fund for Researchers Returning from Abroad.
Abstract: We developed a novel absolute multi-pole encoder structure to improve the resolution of the multi-pole encoder, realize absolute output, and reduce the manufacturing cost of the encoder. The structure includes two AlNiCo rings, defined as the index track and the sub-division track, respectively. The index track is magnetized according to an improved Gray code, with linear Hall sensors placed evenly around the track; their outputs indicate the region the rotor belongs to. The sub-division track is magnetized N-S-N-S (north-south-north-south), and the number of N-S pole pairs is determined by the index track. Three linear Hall sensors with an air gap of 2 mm are used to translate the magnetic field into voltage signals, and the relative offset within a single N-S pair is obtained through table look-up. The magnetic encoder is calibrated using a higher-resolution incremental optical encoder: the pulse output from the optical encoder and the Hall signals from the magnetic encoder are sampled simultaneously and transmitted to a computer, where the relation between them is calculated and then stored in the flash memory of the MCU (microcontroller unit) for look-up. In the working state, the absolute angle is derived by looking up the Hall signals. The structure is simple, the manufacturing cost is very low, and it is suitable for mass production.
Funding: Supported by the Department of Education of Hunan Province, China (No. 21A0541) and the U.S. Department of Energy (No. DE-FG03-93ER40773). H.Z. acknowledges financial support from the Key Laboratory of Quark and Lepton Physics at Central China Normal University (No. QLPL2024P01).
Abstract: This study proposes a novel particle encoding mechanism that seamlessly incorporates the quantum properties of particles, with a specific emphasis on constituent quarks. The primary objective of this mechanism is to facilitate the digital registration and identification of a wide range of particle information. Its design ensures easy integration with the event generators and digital simulations commonly used in high-energy experiments. Moreover, the framework can easily be expanded to encode complex multi-quark states comprising up to nine valence quarks and accommodating an angular momentum of up to 99/2. This versatility and scalability make it a valuable tool.
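A toy version of such a digit-based code can be sketched as follows. The digit layout here is a simplified illustration in the spirit of the abstract, not the paper's actual scheme.

```python
# Simplified, illustrative particle code: the leading digits list the
# valence quarks (flavours numbered 1..6 for d, u, s, c, b, t) and the
# last two digits carry 2J (so J up to 99/2).  NOT the paper's layout.

def encode_state(quarks: list[int], two_j: int) -> int:
    """Pack up to nine valence quarks and 2J into a single integer."""
    assert 1 <= len(quarks) <= 9 and all(1 <= q <= 6 for q in quarks)
    assert 0 <= two_j <= 99
    code = 0
    for q in sorted(quarks):          # canonical order for identification
        code = code * 10 + q
    return code * 100 + two_j

def decode_state(code: int) -> tuple[list[int], int]:
    """Invert encode_state."""
    two_j, rest = code % 100, code // 100
    return [int(d) for d in str(rest)], two_j
```

For instance, a proton-like uud state with 2J = 1 encodes to 12201, and decoding recovers the quark list and angular momentum.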
Funding: Financial support from the National Key R&D Program of China (2021YFA1401103) and the National Natural Science Foundation of China (61925502 and 51772145).
Abstract: Images and videos provide a wealth of information for people in production and daily life. Although most digital information is transmitted via optical fiber, the image acquisition and transmission processes still rely heavily on electronic circuits. The development of all-optical transmission networks and optical computing frameworks points the way toward the next generation of data transmission and information processing. Here, we propose a high-speed, low-cost, multiplexed, parallel, one-piece all-fiber architecture for image acquisition, encoding, and transmission, called the Multicore Fiber Acquisition and Transmission Image System (MFAT). Based on the different spatial and modal channels of the multicore fiber, fiber-coupled self-encoding, and digital aperture decoding technology, scenes can be observed directly from up to 1 km away. The expanded capacity makes parallel coded transmission of multimodal high-quality data possible. MFAT requires no additional signal transmitting or receiving equipment, and all-fiber processing saves the time traditionally spent on signal conversion and image pre-processing (compression, encoding, and modulation). Additionally, it provides an effective solution for 2D information acquisition and transmission tasks in extreme environments such as high temperature and electromagnetic interference.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12105090 and 12175057).
Abstract: Leveraging the extraordinary phenomena of quantum superposition and quantum correlation, quantum computing offers unprecedented potential for addressing challenges beyond the reach of classical computers. This paper tackles two pivotal challenges in quantum computing. The first is the development of an effective encoding protocol for translating classical data into quantum states, a critical step for any quantum computation; different encoding strategies can significantly influence quantum computer performance. The second is the need to counteract the inevitable noise that can hinder quantum acceleration. Our primary contribution is a novel variational data encoding method, grounded in quantum regression algorithm models. By adapting the learning concept from machine learning, we render data encoding a learnable process, which allowed us to study the role of quantum correlation in data encoding. Through numerical simulations of various regression tasks, we demonstrate the efficacy of variational data encoding, particularly after learning from instructional data. Moreover, we examine the role of quantum correlation in enhancing task performance, especially in noisy environments. Our findings underscore the critical role of quantum correlation not only in bolstering performance but also in mitigating noise interference, thus advancing the frontier of quantum computing.
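As background for the encoding step, a minimal fixed (non-variational) angle encoding can be sketched in plain NumPy: each classical feature sets the rotation angle of one qubit. This is a generic illustration of translating classical data into a quantum state, not the learned variational encoding proposed above.

```python
import numpy as np

# Minimal angle-encoding sketch: each feature x_i prepares one qubit as
# RY(x_i)|0>, and the full register is the tensor product of the qubits.
# Generic illustration only -- no entanglement, no learned parameters.

def ry_state(theta: float) -> np.ndarray:
    """Single-qubit state RY(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def angle_encode(features) -> np.ndarray:
    """Product state over one qubit per classical feature."""
    state = np.array([1.0])
    for x in features:
        state = np.kron(state, ry_state(x))
    return state
```

Encoding `[0.0, pi]` yields the basis state |01>, and any feature vector maps to a properly normalized state.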
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62061039, the Postgraduate Innovation Project of Ningxia University (No. JIP20210076), and a key project of the Ningxia Natural Science Foundation (No. 2020AAC02006).
Abstract: With the development of 5G, future wireless communication networks are becoming increasingly intelligent. Facing new communication demands such as super-heterogeneous networks, multiple communication scenarios, large numbers of antenna elements, and large bandwidths, new theories and technologies of intelligent communication have been widely studied, among which Deep Learning (DL) is a powerful artificial intelligence (AI) technique that can be trained to continuously learn and update its optimal parameters. This paper reviews the latest research progress of DL in intelligent communication, focusing on five scenarios: Cognitive Radio (CR), Edge Computing (EC), Channel Measurement (CM), End-to-end Encoder/Decoder (EED), and Visible Light Communication (VLC). The prospects and challenges of further research and development are also discussed.
Abstract: The visual features of continuous pseudocolor encoding are discussed and an optimizing design algorithm for a continuous pseudocolor scale is derived. The algorithm restricts the varying range and direction of lightness, hue, and saturation according to correlation and naturalness, automatically calculates the chromaticity coordinates of nodes in a uniform color space to obtain the longest scale path, and then interpolates points between nodes at equal color differences to obtain a continuous pseudocolor scale with visual uniformity. When applied to pseudocolor encoding of thermal image displays, the results showed that the correlation and naturalness of the original images and the cognitive characteristics of target patterns were well preserved; the dynamic range of visual perception and the amount of visual information increased markedly; the contrast sensitivity of target identification improved; and blind trial-and-error in scale design was avoided.
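The equal-color-difference interpolation step can be sketched as follows: given scale nodes in a uniform color space (CIELAB is assumed here), points are placed along the node polyline so that neighboring points have equal Delta-E. The node values below are illustrative, not taken from the paper.

```python
import numpy as np

# Hedged sketch of "interpolating points between nodes in equal color
# differences": resample a polyline of Lab nodes at equal arc-length
# (Delta-E) steps.  Assumes Euclidean Delta-E in a uniform space.

def equal_de_scale(nodes: np.ndarray, n_points: int) -> np.ndarray:
    """Resample Lab nodes so consecutive scale entries are equal Delta-E apart."""
    seg = np.linalg.norm(np.diff(nodes, axis=0), axis=1)   # per-segment Delta-E
    s = np.concatenate([[0.0], np.cumsum(seg)])            # arc length at nodes
    t = np.linspace(0.0, s[-1], n_points)                  # equal Delta-E stops
    return np.stack([np.interp(t, s, nodes[:, k]) for k in range(3)], axis=1)
```

For an L-shaped path of three nodes, the resampled points land at equal spacing along both legs, so the perceptual step between adjacent scale colors is constant.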
Funding: Supported by the Joint Funds of the National Natural Science Foundation of China (No. U1833110).
Abstract: Safety is the foundation of sustainable development in civil aviation. Although catastrophic accidents are rare, indicators of potential incidents and unsafe events frequently materialize. Therefore, historical unsafe data are considered in predicting safety risks, and a deep learning method is adopted to extract the patterns underlying them. The deep neural network (DNN) model for safety risk prediction is shown to extract complex data characteristics better than a shallow network model. Using extended unsafe data and monthly risk indices, the numbers of hidden layers and iterations are determined. The effectiveness of the DNN is also demonstrated in comparison with a traditional neural network. Through early risk detection using this method, airlines and the government can mitigate potential risks and take proactive measures to improve civil aviation safety.
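The deep-versus-shallow idea can be made concrete with a minimal NumPy forward pass: a network with two hidden layers mapping unsafe-event counts to a monthly risk index. The synthetic data, layer sizes, and features are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

# Minimal sketch of a "deep" (two-hidden-layer) risk-prediction network.
# Data are synthetic stand-ins for unsafe-event counts and risk indices.

rng = np.random.default_rng(0)
X = rng.poisson(5.0, size=(200, 4)).astype(float)   # 4 unsafe-event counts
y = X @ np.array([0.4, 0.3, 0.2, 0.1])              # synthetic risk index

def init_params(sizes):
    """One (weights, bias) pair per layer."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def predict(params, X):
    """Forward pass: ReLU hidden layers, linear output."""
    h = X
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)
    W, b = params[-1]
    return (h @ W + b).ravel()

params = init_params([4, 16, 8, 1])                 # two hidden layers = "deep"
mse = np.mean((predict(params, X) - y) ** 2)        # loss before any training
```

Training (gradient descent on `mse`) and the choice of hidden-layer count are where the paper's comparison with a shallow model would enter.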
Abstract: An approach to color image segmentation is proposed based on the contributions of color features to segmentation rather than on the choice of a particular color space. The determination of effective color features depends on the analysis of various color features from each tested color image via the designed feature encoding. It differs from previous methods in that a self-organizing feature map (SOFM) is used to construct the feature encoding, so that the encoding can self-organize the effective features for different color images. Fuzzy clustering is applied for the final segmentation once the well-suited color features and the initial parameters are available. The proposed method has been applied to segmenting different types of color images, and the experimental results show that it outperforms the classical clustering method. The study shows that the feature encoding approach offers great promise in automating and optimizing the segmentation of color images.
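The fuzzy-clustering step can be illustrated with the standard fuzzy c-means membership update; the feature vectors and cluster count below are placeholders, and the SOFM-selected features are assumed given.

```python
import numpy as np

# Sketch of the fuzzy c-means membership update used in fuzzy clustering:
# u[i, k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1)),
# where d_ik is the distance from sample i to cluster centre k.

def fcm_memberships(X: np.ndarray, centers: np.ndarray,
                    m: float = 2.0) -> np.ndarray:
    """Membership of each feature vector X[i] in each cluster k."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                     # avoid division by zero
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)               # rows sum to 1
```

Samples close to a center get membership near 1 in that cluster, and each row of memberships sums to one, which is what the final defuzzified segmentation relies on.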
Funding: Supported by the Public Welfare Technology Application Research Project of Zhejiang Province, China (No. LGF21F010001) and the Key Research and Development Program of Zhejiang Province, China (Grant Nos. 2019C01002 and 2021C03138).
Abstract: Blind image quality assessment (BIQA) is of fundamental importance in the low-level computer vision community, and increasing interest has been drawn to exploiting deep neural networks for it. Despite the notable success achieved, there is broad consensus that training deep convolutional neural networks (DCNNs) relies heavily on massive annotated data. Unfortunately, BIQA is typically a small-sample problem, which severely restricts the generalization ability of BIQA models. To improve the accuracy and generalization ability of BIQA metrics, this work proposes a totally opinion-unaware BIQA approach in which no subjective annotations are involved in the training stage. Multiple full-reference image quality assessment (FR-IQA) metrics are employed to label the distorted images as a substitute for subjective quality annotation. A deep neural network (DNN) is trained to blindly predict the multiple FR-IQA scores in the absence of the corresponding pristine image. Finally, a self-supervised FR-IQA score aggregator, implemented by an adversarial auto-encoder, pools the predicted FR-IQA scores into the final quality score. Even though no subjective scores are involved in the training stage, experimental results indicate that our proposed full-reference-induced BIQA framework is as competitive as state-of-the-art BIQA metrics.
Funding: Supported by the Polish Ministry of Science and Higher Education funding for statutory activities (decision No. 8686/E-367/S/2015 of 19 February 2015).
Abstract: Non-binary (NB) Irregular Repeat Accumulate (IRA) codes, a subclass of NB-LDPC codes, potentially have excellent error-correcting performance. They are also known to provide linear encoding complexity, but the basic encoding method with the serial rate-1 accumulator significantly limits encoder throughput. The objective of the research presented in this paper is therefore to develop an encoding method that provides significantly increased throughput for an NB-IRA encoder, together with flexible code construction methods for the structured (S-NB-IRA) codes eligible for the proposed encoding method. For this purpose, we reformulate the classic encoding algorithm to fit a partially parallel encoder architecture. We propose the S-NB-IRA encoder block diagram and show that its estimated throughput is proportional to the submatrix size of the parity-check matrix, which guarantees a wide complexity-throughput tradeoff. Then, to facilitate the design of S-NB-IRA coding systems, we present a computer search algorithm for constructing good S-NB-IRA codes. The algorithm optimizes the code graph topology while selecting appropriate non-binary elements in the parity-check matrix. Numerical results show that the constructed S-NB-IRA codes significantly outperform binary IRA and S-IRA codes, while their performance is similar to that of the best unstructured NB-LDPC codes.
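The bottleneck named above, the serial rate-1 accumulator, is easy to see in code. Shown here over GF(2) for simplicity (the NB-IRA codes use a non-binary field): each parity symbol depends on the previous one, so the chain cannot be parallelized naively.

```python
# Sketch of the serial rate-1 accumulator: p[i] = p[i-1] + s[i] over
# GF(2).  The loop-carried dependency on p is what limits throughput and
# motivates the partially parallel reformulation described above.

def accumulate(s: list[int]) -> list[int]:
    """Running XOR of the input bits (rate-1 accumulator over GF(2))."""
    p, out = 0, []
    for bit in s:
        p ^= bit                # each step waits on the previous parity
        out.append(p)
    return out
```

For input `[1, 0, 1, 1]` the accumulator emits `[1, 1, 0, 1]`; the paper's contribution is, in effect, breaking this chain into submatrix-sized parallel pieces.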
Funding: This work was supported by national funds through FCT (Fundação para a Ciência e a Tecnologia) under project UIDB/04152/2020, Centro de Investigação em Gestão de Informação (MagIC)/NOVA IMS.
Abstract: Purpose: Patent classification is one of the areas in Intellectual Property Analytics (IPA), and a growing use case since the number of patent applications has been increasing worldwide. We propose using machine learning algorithms to classify Portuguese patents and evaluate the performance of transfer learning methodologies for this task. Design/methodology/approach: We applied three different approaches. First, we used a dataset made available by INPI to explore traditional machine learning algorithms and ensemble methods; after preprocessing the data with TF-IDF, FastText, and Doc2Vec, the models were evaluated by 5-fold cross-validation. In the second approach, we used two neural network architectures, a Convolutional Neural Network (CNN) and a bi-directional Long Short-Term Memory (BiLSTM). Finally, we used pre-trained BERT, DistilBERT, and ULMFiT models in the third approach. Findings: BERTimbau, a BERT model pre-trained on a large Portuguese corpus, presented the best results for the task, although its performance was only 4% better than a LinearSVC model using TF-IDF feature engineering. Research limitations: The dataset was highly imbalanced, as is usual in patent applications, so the classes with the fewest samples were expected to perform worst; this happened in some cases, especially in classes with fewer than 60 training samples. Practical implications: Patent classification is challenging because of the hierarchical classification system, the context overlap, and the underrepresentation of some classes. However, the final model presented acceptable performance given the size of the dataset and the task complexity. This model can support decisions and save time by proposing a category at the second level of the IPC, one of the critical phases of the patent granting process. Originality/value: To our knowledge, the proposed models have never before been implemented for Portuguese patent classification.
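The TF-IDF + LinearSVC baseline mentioned above can be sketched with scikit-learn. The toy titles and labels below are invented placeholders, not INPI data, and the two-class setup is far simpler than the real hierarchical task.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hedged sketch of the TF-IDF + LinearSVC baseline.  Documents and the
# hypothetical IPC-like labels are placeholders for illustration only.

docs = ["rotary encoder with hall sensors", "optical fiber image system",
        "gray code magnetic track", "multicore fiber transmission"]
labels = ["G01", "G02", "G01", "G02"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
pred = clf.predict(["hall sensor gray code encoder"])[0]  # proposed class
```

In the paper's setting the same pipeline would be fitted on the INPI corpus and evaluated with 5-fold cross-validation per class.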
Funding: Supported by NSC under Grant No. NSC 100-2628-E-259-002-MY3.
Abstract: To achieve higher coding efficiency for multiview video, the multiview extension of High Efficiency Video Coding (MV-HEVC) is used to encode the dependent views. However, the computational complexity of the MV-HEVC encoder increases significantly, since MV-HEVC inherits all the computational complexity of HEVC. This paper presents an efficient algorithm for reducing the high computational complexity of MV-HEVC by deciding the coding unit early during the encoding process. In our proposal, the depth information of the largest coding units (LCUs) from the independent view and the neighboring LCUs is analyzed first; the analyzed results are then used to determine the depth for the dependent view early, thereby reducing computational complexity. Furthermore, a prediction unit (PU) decision strategy is proposed to maintain video quality. Experimental results demonstrate that our algorithm achieves 57% time savings on average while maintaining good video quality and bit-rate performance compared with HTM 8.0.
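The early depth decision can be sketched as bounding the depth search for a dependent-view LCU by the depths already chosen for the co-located independent-view LCU and its neighbors. The +/-1 widening rule below is an assumption for illustration, not the paper's exact decision rule.

```python
# Hedged sketch of early CU-depth decision for a dependent-view LCU:
# instead of searching all depths 0..max_depth, search only a band
# around the depths seen in the independent view and the neighbours.

def candidate_depths(colocated_depth: int, neighbor_depths: list[int],
                     max_depth: int = 3) -> range:
    """Restricted CU-depth range to search for one dependent-view LCU."""
    seen = neighbor_depths + [colocated_depth]
    lo = max(min(seen) - 1, 0)               # allow one level shallower
    hi = min(max(seen) + 1, max_depth)       # allow one level deeper
    return range(lo, hi + 1)
```

When the reference depths cluster (e.g. all at depth 2), the encoder skips depths 0 and evaluates only 1-3, which is where the complexity reduction comes from.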
Abstract: The translation activity is a process of interlinguistic transmission of information realized through information encoding and decoding. Encoding and decoding, cognitive practices operated in objective contexts, are inevitably selective because of contextual constraints. The translator, as the intermediary agent, connects the original author (encoder) and the target readers (decoder), shouldering the dual duties of decoder and encoder, so that the translator's subjectivity is inevitably shaped by the selectivity of encoding and decoding.