Journal Articles
78,646 articles found
1. FedCLCC: A personalized federated learning algorithm for edge cloud collaboration based on contrastive learning and conditional computing
Authors: Kangning Yin, Xinhui Ji, Yan Wang, Zhiguo Wang. 《Defence Technology(防务技术)》, 2025, No. 1, pp. 80-93 (14 pages)
Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years. One example of pFL involves exploiting the global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, and still fall significantly short of centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the use of contrastive learning and conditional computing. Contrastive learning measures feature representation similarity to adjust the local model. Conditional computing separates the global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
Keywords: federated learning; statistical heterogeneity; personalized model; conditional computing; contrastive learning
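The abstract only outlines the two mechanisms, so the following is a minimal Python sketch of how a contrastive term and a gated two-head split could look; the loss form, the gate, and all names (tau, gate, global_head) are assumptions for illustration, not the authors' code.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def contrastive_term(z_local, z_global, z_prev_local, tau=0.5):
    """Model-contrastive style loss: pull the current local representation
    toward the global model's and away from the stale previous local one."""
    pos = np.exp(cosine(z_local, z_global) / tau)
    neg = np.exp(cosine(z_local, z_prev_local) / tau)
    return -np.log(pos / (pos + neg))

def conditional_heads(features, global_head, local_head, gate):
    """Conditional computing sketch: a learned gate splits the shared
    features between a global head and a personalized local head."""
    g = 1.0 / (1.0 + np.exp(-float(features @ gate)))  # scalar routing weight
    return (g * features) @ global_head, ((1.0 - g) * features) @ local_head

rng = np.random.default_rng(0)
z, zg, zp = rng.normal(size=(3, 8))
print(contrastive_term(z, zg, zp))
```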
2. Dynamic access task scheduling of LEO constellation based on space-based distributed computing
Authors: LIU Wei, JIN Yifeng, ZHANG Lei, GAO Zihe, TAO Ying. 《Journal of Systems Engineering and Electronics》 (SCIE, CSCD), 2024, No. 4, pp. 842-854 (13 pages)
A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the matching between resources and requirements. A complex algorithm is not feasible because the LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single paternal inheritance method, is designed to support distributed computation and enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, algorithm mathematical model, trigger strategy, and distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can provide an allocation result for more than 1,500 tasks in 14 s with a success rate of more than 91% in a typical scenario. The response time is decreased by 40% compared with the conventional GA.
Keywords: beam resource allocation; distributed computing; low Earth orbit (LEO) constellation; spacecraft; access task scheduling
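As a concrete illustration of the GA structure the abstract describes (a population of task-to-beam assignments evolved with single-parent variation), here is a toy Python version; the fitness function, problem sizes, and capacity model are illustrative assumptions, not the paper's formulation.

```python
import random

N_TASKS, N_BEAMS, POP, GENS = 40, 6, 30, 200

def fitness(ind, demand, capacity):
    """Count tasks served without exceeding any beam's capacity."""
    load = [0.0] * N_BEAMS
    served = 0
    for task, beam in enumerate(ind):
        if load[beam] + demand[task] <= capacity[beam]:
            load[beam] += demand[task]
            served += 1
    return served

def mutate(ind, rate=0.1):
    """Single-parent variation: offspring derive from one parent only."""
    return [random.randrange(N_BEAMS) if random.random() < rate else b
            for b in ind]

def run():
    demand = [random.uniform(0.5, 2.0) for _ in range(N_TASKS)]
    capacity = [10.0] * N_BEAMS
    pop = [[random.randrange(N_BEAMS) for _ in range(N_TASKS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=lambda i: -fitness(i, demand, capacity))
        pop = pop[:POP // 2] + [mutate(p) for p in pop[:POP // 2]]
    return fitness(pop[0], demand, capacity)

print(run())
```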
3. Real-Time Monitoring Method for Cow Rumination Behavior Based on Edge Computing and Improved MobileNet v3
Authors: ZHANG Yu, LI Xiangting, SUN Yalin, XUE Aidi, ZHANG Yi, JIANG Hailong, SHEN Weizheng. 《智慧农业(中英文)》 (CSCD), 2024, No. 4, pp. 29-41 (13 pages)
[Objective] Real-time monitoring of cow ruminant behavior is of paramount importance for promptly obtaining relevant information about cow health and predicting cow diseases. Currently, various strategies have been proposed for monitoring cow ruminant behavior, including video surveillance, sound recognition, and sensor monitoring methods. However, the application of edge devices gives rise to the issue of inadequate real-time performance. To reduce the volume of data transmission and the cloud computing workload while achieving real-time monitoring of dairy cow rumination behavior, a real-time monitoring method based on edge computing was proposed for cow ruminant behavior. [Methods] Autonomously designed edge devices were utilized to collect and process six-axis acceleration signals from cows in real time. Based on these six-axis data, two distinct strategies, federated edge intelligence and split edge intelligence, were investigated for the real-time recognition of cow ruminant behavior. Focusing on the real-time recognition method leveraging federated edge intelligence, the CA-MobileNet v3 network was proposed by enhancing the MobileNet v3 network with a collaborative attention mechanism. Additionally, a federated edge intelligence model was designed utilizing the CA-MobileNet v3 network and the FedAvg federated aggregation algorithm. In the study on split edge intelligence, a split edge intelligence model named MobileNet-LSTM was designed by integrating the MobileNet v3 network with a fusion collaborative attention mechanism and the Bi-LSTM network. [Results and Discussions] Through comparative experiments with MobileNet v3 and MobileNet-LSTM, the federated edge intelligence model based on CA-MobileNet v3 achieved an average precision, recall, F1-score, specificity, and accuracy of 97.1%, 97.9%, 97.5%, 98.3%, and 98.2%, respectively, yielding the best recognition performance. [Conclusions] This study provides a real-time and effective method for monitoring cow ruminant behavior, and the proposed federated edge intelligence model can be applied in practical settings.
Keywords: cow rumination behavior; real-time monitoring; edge computing; improved MobileNet v3; edge intelligence model; Bi-LSTM
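FedAvg is a published aggregation rule, so its core step can be sketched compactly; the layer names and client sample counts below are made up for illustration and are not from this paper.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-layer parameters, weighting each client by its
    local sample count (the FedAvg weighted average)."""
    total = sum(client_sizes)
    agg = {}
    for layer in client_weights[0]:
        agg[layer] = sum(w[layer] * (n / total)
                         for w, n in zip(client_weights, client_sizes))
    return agg

# Three hypothetical edge devices, one shared layer each.
clients = [{"conv1": np.full((3, 3), v)} for v in (1.0, 2.0, 3.0)]
print(fedavg(clients, [100, 200, 700])["conv1"][0, 0])  # -> 2.6
```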
4. Hybrid fault tolerance in distributed in-memory storage systems
Authors: Zheng Gong, Si Wu, Yinlong Xu. 《中国科学技术大学学报》 (PKU Core), 2025, No. 1, pp. 59-68, 58, I0002 (12 pages)
An in-memory storage system provides submillisecond latency and improves the concurrency of user applications by caching data into memory from external storage. Fault tolerance of in-memory storage systems is essential, as the loss of cached data requires access to data from external storage, which markedly increases the response latency. Typically, replication and erasure code (EC) are two fault-tolerant schemes that pose different trade-offs between access performance and storage usage. To help make the best performance and space trade-off, we design ElasticMem, a hybrid fault-tolerant distributed in-memory storage system that supports elastic redundancy transition to dynamically change the fault-tolerant scheme. ElasticMem exploits a novel EC-oriented replication (EOR) that carefully designs the data placement of replication according to the future data layout of EC to enhance the I/O efficiency of redundancy transition. ElasticMem solves the consistency problem caused by concurrent data accesses via a lightweight table-based scheme combined with data bypassing. It detects correlated read and write requests and serves subsequent read requests with local data. We implement a prototype that realizes ElasticMem based on Memcached. Experiments show that ElasticMem remarkably reduces the time of redundancy transition, the overall latency of correlated concurrent data accesses, and the latency of single data access among them.
Keywords: in-memory storage system; hybrid fault tolerance; replication; erasure code
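To make the EC-oriented replication (EOR) idea concrete, here is a hedged sketch: primary replicas are placed on the nodes a future (k, m) erasure-coded stripe would use, so a redundancy transition can reuse data in place. The striping rule and node count are assumptions, not ElasticMem's actual layout.

```python
def ec_layout(block_id, k, m, n_nodes):
    """Node that would hold this data block in the future EC stripe."""
    stripe, offset = divmod(block_id, k)
    return (stripe * (k + m) + offset) % n_nodes

def replica_nodes(block_id, k=4, m=2, n_nodes=12, copies=3):
    """Primary replica aligned with the EC layout; extras on neighbors."""
    primary = ec_layout(block_id, k, m, n_nodes)
    extras = [(primary + i) % n_nodes for i in range(1, copies)]
    return [primary] + extras

for b in range(6):
    print(b, replica_nodes(b))
```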
5. New multi-DSP parallel computing architecture for real-time image processing (Cited: 4)
Authors: Hu Junhong, Zhang Tianxu, Jiang Haoyang. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2006, No. 4, pp. 883-889 (7 pages)
The flexibility of traditional image processing systems is limited because those systems are designed for specific applications. In this paper, a new TMS320C64x-based multi-DSP parallel computing architecture is presented. It has many promising characteristics, such as powerful computing capability, broad I/O bandwidth, topology flexibility, and expansibility. The parallel system performance is evaluated through practical experiments.
Keywords: parallel computing; image processing; real-time; computer architecture
6. Research on Generalized Computing Systems (Cited: 3)
Authors: Min, Yao; Jianhua, Luo. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 1998, No. 3, pp. 39-43 (5 pages)
This paper presents a kind of artificial intelligence system, the generalized computing system (GCS), and introduces its mathematical description, implementation problem, and learning problem.
Keywords: artificial intelligence; generalized computing; generalized computing systems; generalized learning
7. SATVPC: Secure-agent-based trustworthy virtual private cloud model in open computing environments (Cited: 2)
Authors: 徐小龙, 涂群, BESSIS Nik, 杨庚, 王新珩. 《Journal of Central South University》 (SCIE, EI, CAS), 2014, No. 8, pp. 3186-3196 (11 pages)
Private clouds and public clouds are merging into an open, integrated cloud computing environment, which can fully aggregate and utilize the computing, storage, information, and other hardware and software resources of WAN and LAN networks, but this also brings a series of security, reliability, and credibility problems. To solve these problems, a novel secure-agent-based trustworthy virtual private cloud model named SATVPC was proposed for the integrated and open cloud computing environment. Through the introduction of secure-agent technology, SATVPC provides an independent, safe, and trustworthy virtual private computing platform for multi-tenant systems. To meet SATVPC's credibility requirements and to establish trust relationships between each task execution agent and task executor node that suit their security policies, a new dynamic composite credibility evaluation mechanism was presented, comprising a credit index computing algorithm and a credibility differentiation strategy. The experimental system shows that SATVPC and the credibility evaluation mechanism can feasibly ensure the security of open computing environments. Experimental results and performance analysis also show that the credit index computing algorithm can evaluate the credibility of task execution agents and task executor nodes quantitatively, correctly, and operationally.
Keywords: cloud computing; trustworthy computing; virtualization; agent
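The abstract names a credit index computing algorithm and a credibility differentiation strategy without giving formulas; the sketch below shows one common way such a mechanism is realized (exponential blending plus a threshold), purely as an assumed illustration rather than SATVPC's actual algorithm.

```python
def update_credit(old_credit, outcome, alpha=0.8):
    """Blend historical credit with the latest task outcome in [0, 1];
    recent behaviour dominates, so misbehaving agents lose credit fast."""
    return alpha * old_credit + (1 - alpha) * outcome

def trustworthy(credit, threshold=0.6):
    """Differentiation strategy: only sufficiently credible nodes get tasks."""
    return credit >= threshold

c = 0.9
for outcome in (1.0, 1.0, 0.0, 0.0):   # two successes, then two failures
    c = update_credit(c, outcome)
    print(round(c, 3), trustworthy(c))
```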
8. Task scheduling and virtual machine allocation policy in cloud computing environment (Cited: 3)
Authors: Xiong Fu, Yeliang Cang. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2015, No. 4, pp. 847-856 (10 pages)
Cloud computing represents a novel computing model in the contemporary technology world. In a cloud system, the computing power of virtual machines (VMs) and network status can greatly affect the completion time of data-intensive tasks. However, most current resource allocation policies focus only on network conditions and physical hosts, while the computing power of VMs is largely ignored. This paper proposes a comprehensive resource allocation policy consisting of a data-intensive task scheduling algorithm that takes account of the computing power of VMs and a VM allocation policy that considers the bandwidth between storage nodes and hosts. The VM allocation policy includes VM placement and VM migration algorithms. Related simulations show that the proposed algorithms can greatly reduce the task completion time while keeping good load balance across physical hosts.
Keywords: cloud computing; resource allocation; task scheduling; virtual machine (VM) allocation
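In the spirit of the proposed policy (completion time driven by both VM computing power and storage bandwidth), a toy greedy scheduler might look like the sketch below; the cost model, field names, and all numbers are illustrative assumptions, not the paper's algorithm.

```python
def completion_time(task, vm):
    """Compute time plus transfer time, both in arbitrary consistent units."""
    return task["length_mi"] / vm["mips"] + task["data_mb"] / vm["bw_mbps"]

def schedule(tasks, vms):
    plan = []
    ready = [0.0] * len(vms)                      # when each VM becomes free
    for t in sorted(tasks, key=lambda t: -t["data_mb"]):
        i = min(range(len(vms)),
                key=lambda i: ready[i] + completion_time(t, vms[i]))
        ready[i] += completion_time(t, vms[i])
        plan.append((t["id"], i))
    return plan, max(ready)

tasks = [{"id": k, "length_mi": 4000, "data_mb": 100 * (k + 1)}
         for k in range(6)]
vms = [{"mips": 1000, "bw_mbps": 100}, {"mips": 2000, "bw_mbps": 50}]
print(schedule(tasks, vms))
```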
9. Calculation of angular glint in near field utilizing graphical electromagnetic computing (Cited: 2)
Authors: Guangfu Zhang, Chao Wang, Liguo Liu, Yunqi Fu, Naichang Yuan. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2013, No. 6, pp. 906-911 (6 pages)
Angular glint in the near field plays an important role in radar tracking errors. To predict it more efficiently for electrically large targets, a new method based on graphical electromagnetic computing (GRECO) is proposed. With the benefit of the graphics card, the GRECO prediction method is faster and more accurate than other methods. The proposed method for the first time considers the special case in which the targets cannot be completely covered by radar beams, which makes the prediction of radar tracking errors more self-contained in practical circumstances. In addition, the process of scattering center extraction is omitted, making real-time angular glint prediction possible. Comparisons between the simulation results and the theoretical ones validate its correctness and its value for academic research and engineering applications.
Keywords: angular glint; near field; tracking error; graphical electromagnetic computing (GRECO)
10. Low-cost cloud computing solution for geo-information processing (Cited: 3)
Authors: 高培超, 刘钊, 谢美慧, 田琨. 《Journal of Central South University》 (SCIE, EI, CAS, CSCD), 2016, No. 12, pp. 3217-3224 (8 pages)
Cloud computing has emerged as a leading computing paradigm, with an increasing number of geographic information (geo-information) processing tasks now running on clouds. For this reason, geographic information system/remote sensing (GIS/RS) researchers rent more public clouds or establish more private clouds. However, a large proportion of these clouds are found to be underutilized, since users do not deal with big data every day. The low usage of cloud resources violates the original intention of cloud computing, which is to save resources by improving usage. In this work, a low-cost cloud computing solution was proposed for geo-information processing, especially for temporary processing tasks. The proposed solution adopts a hosted architecture and can be realized with ordinary computers in a common GIS/RS laboratory. The usefulness and effectiveness of the proposed solution were demonstrated using big data simplification as a case study. Compared with commercial public clouds and dedicated private clouds, the proposed solution is lower-cost and more resource-saving, and is more suitable for GIS/RS applications.
Keywords: cloud computing; geo-information processing; geo-processing
11. Energy-efficient and security-optimized AES hardware design for ubiquitous computing (Cited: 2)
Authors: Chen Yicheng, Zou Xuecheng, Liu Zhenglin, Han Yu, Zheng Zhaoxia. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2008, No. 4, pp. 652-658 (7 pages)
Ubiquitous computing must incorporate a certain level of security. For severely resource-constrained applications, energy-efficient and small-size implementation of cryptographic algorithms is a critical problem. Hardware implementations of the advanced encryption standard (AES) for authentication and encryption are presented. An energy consumption variable is derived to evaluate low-power design strategies for battery-powered devices. It proves that compact AES architectures fail to optimize the AES hardware energy, whereas reducing invalid switching activities and implementing power-optimized sub-modules are the reasonable methods. Implementations of different substitution box (S-Box) structures are presented with a 0.25 μm 1.8 V CMOS (complementary metal oxide semiconductor) standard cell library. The comparisons and trade-offs among area, security, and power are explored. The experimental results show that Galois-field composite S-Boxes have smaller size and the highest security but consume considerably more power, whereas decoder-switch-encoder S-Boxes have the best power characteristics but disadvantages in terms of size and security. Combining these two types of S-Boxes instead of using homogeneous S-Boxes in an AES circuit leads to optimal schemes. The technique of latch-dividing the data path is analyzed, and quantitative simulation results demonstrate that this approach diminishes glitches effectively at a very low hardware cost.
Keywords: encryption and decryption; power analysis model; inhomogeneous S-Boxes; ubiquitous computing; advanced encryption standard
12. Energy efficient virtual machine migration approach with SLA conservation in cloud computing (Cited: 4)
Authors: GARG Vaneet, JINDAL Balkrishan. 《Journal of Central South University》 (SCIE, EI, CAS, CSCD), 2021, No. 3, pp. 760-770 (11 pages)
In the age of online workload explosion, cloud users are increasing exponentially. Therefore, large-scale data centers are required in the cloud environment, which leads to high energy consumption. Hence, optimal resource utilization is essential to improve the energy efficiency of cloud data centers. However, most of the existing literature focuses on virtual machine (VM) consolidation, increasing energy efficiency at the cost of service level agreement (SLA) degradation. To improve on the existing approaches, a load-aware three-gear THReshold (LATHR) together with a modified best fit decreasing (MBFD) algorithm is proposed to minimize total energy consumption while improving the quality of service in terms of SLA. It offers promising results under dynamic workloads with a variable number of VMs (1-290) allocated to an individual host. The outcomes of the proposed work are measured in terms of SLA, energy consumption, instruction energy ratio (IER), and the number of migrations against varied numbers of VMs. From the experimental results, it is concluded that the proposed technique reduced SLA violations (by 55%, 26%, and 39%) and energy consumption (by 17%, 12%, and 6%) compared with the median absolute deviation (MAD), interquartile range (IQR), and double threshold (THR) overload detection policies, respectively.
Keywords: cloud computing; energy efficiency; three-gear threshold; resource allocation; service level agreement
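MBFD-style placement is a best-fit-decreasing heuristic over VM CPU demands; the sketch below uses the common linear idle/peak host power model, which is our assumption rather than the paper's exact formulation.

```python
def power(util, idle=70.0, peak=120.0):
    """Watts at a given CPU utilization under a linear power model."""
    return idle + (peak - idle) * util

def place(vms, hosts):
    """Sort VMs by demand, then put each on the host whose power
    increase is smallest among hosts that still have room."""
    plan = {}
    for vm_id, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        best, best_delta = None, None
        for h, (used, cap) in hosts.items():
            if used + demand <= cap:
                delta = power((used + demand) / cap) - power(used / cap)
                if best is None or delta < best_delta:
                    best, best_delta = h, delta
        if best is None:
            raise RuntimeError(f"no host fits VM {vm_id}")
        hosts[best] = (hosts[best][0] + demand, hosts[best][1])
        plan[vm_id] = best
    return plan

print(place({"vm1": 0.5, "vm2": 0.3, "vm3": 0.4},
            {"h1": (0.0, 1.0), "h2": (0.0, 1.0)}))
```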
13. Programming for scientific computing on peta-scale heterogeneous parallel systems (Cited: 1)
Authors: 杨灿群, 吴强, 唐滔, 王锋, 薛京灵. 《Journal of Central South University》 (SCIE, EI, CAS), 2013, No. 5, pp. 1189-1203 (15 pages)
Peta-scale high-performance computing systems are increasingly built with heterogeneous CPU and GPU nodes to achieve higher power efficiency and computation throughput. While providing unprecedented capabilities to conduct computational experiments of historic significance, these systems are presently difficult to program. The users, who are domain experts rather than computer experts, prefer to use programming models closer to their domains (e.g., physics and biology) rather than MPI and OpenMP. This has led to the development of domain-specific programming, which provides domain-specific programming interfaces but abstracts away some performance-critical architecture details. Based on experience in designing large-scale computing systems, a hybrid programming framework for scientific computing on heterogeneous architectures is proposed in this work. Its design philosophy is to provide a collaborative mechanism for domain experts and computer experts so that both domain-specific knowledge and performance-critical architecture details can be adequately exploited. Two real-world scientific applications have been evaluated on TH-1A, a peta-scale CPU-GPU heterogeneous system that is currently the 5th fastest supercomputer in the world. The experimental results show that the proposed framework is well suited for developing large-scale scientific computing applications on peta-scale heterogeneous CPU/GPU systems.
Keywords: heterogeneous parallel system; programming framework; scientific computing; GPU computing; molecular dynamics
14. Resource pre-allocation algorithms for low-energy task scheduling of cloud computing (Cited: 4)
Authors: Xiaolong Xu, Lingling Cao, Xinheng Wang. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2016, No. 2, pp. 457-469 (13 pages)
To lower the power consumption and improve the resource utilization of current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on the "shut down the redundant, turn on the demanded" strategy. First, a green cloud computing model is presented, abstracting the task scheduling problem into a virtual machine deployment problem via virtualization technology. Second, the future workload of the system must be predicted: a cubic exponential smoothing algorithm based on a conservative control (CESCC) strategy is proposed, combined with the current state and resource distribution of the system, to calculate the resource demand for the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To further reduce power consumption, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy allow resource pre-allocation to keep up with demand, improving the real-time response efficiency and the stability of the system. Both RA-PM and RA-ISA can activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize resource utilization, and greatly reduce the power consumption of cloud computing systems.
Keywords: green cloud computing; power consumption; prediction; resource allocation; probabilistic matching; simulated annealing
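The prediction step can be illustrated with Brown's triple (cubic) exponential smoothing, a standard realization of cubic exponential smoothing; the conservative-control headroom factor below is an assumed stand-in for the paper's exact CESCC rule.

```python
def cubic_smoothing_forecast(series, alpha=0.4, m=1):
    """Brown's triple exponential smoothing, forecasting m steps ahead."""
    s1 = s2 = s3 = series[0]
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
    a = 3 * s1 - 3 * s2 + s3
    b = alpha / (2 * (1 - alpha) ** 2) * (
        (6 - 5 * alpha) * s1 - (10 - 8 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = alpha ** 2 / (1 - alpha) ** 2 * (s1 - 2 * s2 + s3)
    return a + b * m + 0.5 * c * m * m

workload = [120, 132, 140, 158, 170, 186, 205]   # requests per period
predicted = cubic_smoothing_forecast(workload)
reserved = 1.1 * predicted                        # assumed 10% safety headroom
print(round(predicted, 1), round(reserved, 1))
```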
15. An improved self-adaptive membrane computing optimization algorithm and its applications in residue hydrogenating model parameter estimation (Cited: 1)
Authors: 芦会彬, 薄翠梅, 杨世品. 《Journal of Central South University》 (SCIE, EI, CAS, CSCD), 2015, No. 10, pp. 3909-3915 (7 pages)
To solve non-linear and high-dimensional optimization problems more effectively, an improved self-adaptive membrane computing (ISMC) optimization algorithm was proposed. The ISMC algorithm applies improved self-adaptive crossover and mutation formulae that provide appropriate crossover and mutation operators based on the objective functions and the number of iterations. The performance of ISMC was tested on benchmark functions. The simulation results for residue hydrogenating kinetics model parameter estimation show that the proposed method is superior to traditional intelligent algorithms in terms of convergence accuracy and stability when solving complex parameter optimization problems.
Keywords: optimization algorithm; membrane computing; benchmark function; improved self-adaptive operator
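A self-adaptive operator of the kind described typically scales the mutation rate with solution quality and iteration count; the formula below is an assumed illustration, not the paper's exact ISMC operator.

```python
def adaptive_mutation_rate(gen, max_gen, fitness, best, worst,
                           p_min=0.01, p_max=0.3):
    """Rate shrinks as iterations progress and grows for poor individuals."""
    progress = gen / max_gen                          # 0 at start, 1 at end
    if worst == best:
        quality = 0.5
    else:
        quality = (fitness - best) / (worst - best)   # 0 = best, 1 = worst
    return p_min + (p_max - p_min) * quality * (1 - progress)

# An early poor individual mutates a lot; a late good one barely mutates.
print(adaptive_mutation_rate(gen=10, max_gen=500, fitness=9.0, best=1.0, worst=10.0))
print(adaptive_mutation_rate(gen=480, max_gen=500, fitness=1.2, best=1.0, worst=10.0))
```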
16. A novel virtual machine deployment algorithm with energy efficiency in cloud computing (Cited: 12)
Authors: 周舟, 胡志刚, 宋铁, 于俊洋. 《Journal of Central South University》 (SCIE, EI, CAS, CSCD), 2015, No. 3, pp. 974-983 (10 pages)
To improve the energy efficiency of large-scale data centers, a virtual machine (VM) deployment algorithm called the three-threshold energy saving algorithm (TESA), based on the linear relation between energy consumption and (processor) resource utilization, is proposed. In TESA, hosts in data centers are divided by load into four classes: hosts with light load, hosts with proper load, hosts with middle load, and hosts with heavy load. Under TESA, VMs on a lightly or heavily loaded host are migrated to another host with proper load, while VMs on a properly or middling loaded host stay in place. Then, based on TESA, five VM selection policies (minimization of migrations policy based on TESA (MIMT), maximization of migrations policy based on TESA (MAMT), highest potential growth policy based on TESA (HPGT), lowest potential growth policy based on TESA (LPGT), and random choice policy based on TESA (RCT)) are presented, and MIMT is chosen as the representative policy through experimental comparison. Finally, five research directions for future energy management are put forward. The simulation results indicate that, compared with the single threshold (ST) algorithm and the minimization of migrations (MM) algorithm, MIMT significantly improves energy efficiency in data centers.
Keywords: cloud computing; energy efficiency; three-threshold; virtual machine (VM) selection policy; energy management
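The three-threshold classification can be sketched directly from the abstract; only the threshold values themselves are assumptions here.

```python
T_LIGHT, T_MIDDLE, T_HEAVY = 0.2, 0.7, 0.9   # assumed threshold values

def classify(util):
    if util < T_LIGHT:
        return "light"    # consolidate: migrate VMs away, switch host off
    if util <= T_MIDDLE:
        return "proper"   # keep VMs where they are
    if util <= T_HEAVY:
        return "middle"   # keep, but accept no new VMs
    return "heavy"        # migrate some VMs away to avoid SLA violations

def needs_migration(util):
    return classify(util) in ("light", "heavy")

for u in (0.1, 0.5, 0.8, 0.95):
    print(u, classify(u), needs_migration(u))
```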
17. Summary of the Fourth Beijing International Conference on System Simulation and Scientific Computing (Cited: 1)
《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 1999, No. 4, p. 81 (1 page)
Keywords: simulation; Summary of the Fourth Beijing International Conference on System Simulation and Scientific Computing; CASS
18. Object-Oriented Design for FDTD Visual Scientific Computing
Authors: Dong, X.; Wang, W.; Wang, G. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2001, No. 3, pp. 71-75 (5 pages)
A scheme for general-purpose FDTD visual scientific computing software is introduced in this paper using the object-oriented design (OOD) method. By abstracting the parameters of FDTD grids into an individual class and separating them from the iteration procedure, the visual software can be adapted to more comprehensive computing problems. Real-time gray-scale graphics and wave curves of the results can be produced using the DirectX technique. The special difference equations and data structures in dispersive media are considered, and the peculiarities of the parameters in the perfectly matched layer are also discussed.
Keywords: computational methods; computer-aided design; data structures; difference equations; finite difference method; iterative methods; natural sciences computing; object-oriented programming; parameter estimation; three-dimensional computer graphics; time domain analysis
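The design idea (grid parameters in their own class, iteration kept separate) can be shown with a minimal 1-D FDTD loop in Python; the normalized units and Gaussian source are illustrative, and this is a sketch of the pattern, not the paper's software.

```python
import numpy as np

class FDTDGrid:
    """Holds the grid parameters only; no time-stepping logic lives here."""
    def __init__(self, n_cells=200, courant=0.5):
        self.n = n_cells
        self.courant = courant              # stability factor S = c*dt/dx
        self.ez = np.zeros(n_cells)         # E field
        self.hy = np.zeros(n_cells - 1)     # H field

def step(grid, t, source_pos=100):
    """Iteration procedure kept separate, so it works on any grid object."""
    grid.hy += grid.courant * np.diff(grid.ez)
    grid.ez[1:-1] += grid.courant * np.diff(grid.hy)
    grid.ez[source_pos] += np.exp(-((t - 30) / 10) ** 2)  # Gaussian pulse

g = FDTDGrid()
for t in range(120):
    step(g, t)
print(float(np.max(np.abs(g.ez))))
```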
19. Geometrical Modeling by NURBS Surface and RCS Computing by Visualization for Complex Targets
Authors: Zhou, Yong; Liu, Tiejun. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 1997, No. 1, pp. 13-21 (9 pages)
A novel approach to computing the high-frequency radar cross-section (RCS) of complex targets is described in this paper. From the three views or the sectional views of the target, the target is geometrically modeled by non-uniform rational B-spline (NURBS) parametric surfaces using the in-house software CNFEOV, which constructs a NURBS representation of a complex target from engineering orthographic views. RCS is obtained through PO, PTD, MEC, and IBC techniques. When calculating the RCS of the target, it is necessary to obtain the unit normal vector to the surface illuminated by the radar and the value Z, the distance from the point on the surface to the radar. In this approach, the unit normal vector to the surface can be obtained either from the Phong rendering model, in which the color components (RGB) of every pixel in the image equal the coordinate components of the normal, or from the NURBS expressions. The value Z can be obtained from a software or hardware Z-buffer. The effects of image size on the RCS of the target are discussed and the correct method is recommended. The RCS of a perfectly conducting sphere, cylinder, and dihedral, as well as a coated cylinder, are computed as examples. The accuracy of the method is verified by comparing the numerical results with those obtained by other methods.
Keywords: RCS; visualization computation; geometrical modeling
20. Job shop scheduling problem based on DNA computing
Authors: Yin Zhixiang, Cui Jianzhong, Yang Yan, Ma Ying. 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2006, No. 3, pp. 654-659 (6 pages)
To solve the job shop scheduling problem, a new approach, DNA computing, is applied. The DNA computing approach to job shop scheduling is divided into three stages, and optimum solutions are finally obtained by sequencing. A small job shop scheduling problem is solved by DNA computing, and the "operations" of the computation were performed with standard protocols, such as ligation, synthesis, and electrophoresis. This work represents further evidence of the ability of DNA computing to solve NP-complete search problems.
Keywords: DNA computing; job shop scheduling problem; weighted tournament