Journal Articles
5 articles found
1. Research and Application of a Novel Event-Driven Processor (Cited by: 1)
Authors: 韩琳, 潘登. 《现代电子技术》 (Modern Electronics Technique), 2012, No. 8, pp. 5-7 (3 pages).
Addressing the demand of modern electronic design for low cost, high efficiency and high flexibility, this paper studies a novel processor: the event-driven multi-core processor. By examining the basic architecture of this processor and comparing designs built on it with designs built on traditional controllers, the analysis shows that the processor offers high performance, strong real-time behavior and ease of programming. Finally, a new design method, implementing hardware designs in software, is proposed, providing a new approach and reference for a wide range of electronic system designs.
Keywords: XMOS, event-driven multi-core processor, hardware threads, implementing hardware design in software
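The abstract above contrasts event-driven handling with conventional polling-based controllers but gives no code. The following is a minimal sketch of the idea in C, using POSIX threads as a stand-in: one dedicated thread blocks on an event source instead of polling, which roughly mirrors how a hardware thread on an event-driven multi-core part is bound to one event. Real XMOS devices are programmed in XC with select{} over ports; nothing below is taken from that toolchain, and the names (uart_event, handle_uart) are hypothetical.

    /*
     * Illustrative sketch only: approximates the event-driven style with
     * POSIX threads. One thread is dedicated to one event source and sleeps
     * until the event fires, so no cycles are spent polling a status flag.
     */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N_EVENTS 3

    static sem_t uart_event;            /* stands in for a hardware event source */

    /* Dedicated handler: blocks until an event arrives, then reacts with a
     * short, predictable latency instead of sharing a polling loop. */
    static void *handle_uart(void *arg)
    {
        (void)arg;
        for (int i = 0; i < N_EVENTS; i++) {
            sem_wait(&uart_event);      /* block until the event arrives */
            printf("event %d handled\n", i);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        sem_init(&uart_event, 0, 0);
        pthread_create(&t, NULL, handle_uart, NULL);

        for (int i = 0; i < N_EVENTS; i++)
            sem_post(&uart_event);      /* simulate incoming hardware events */

        pthread_join(t, NULL);
        sem_destroy(&uart_event);
        return 0;
    }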
2. Cavium's New Multi-Core Processors Adopt the Ultra-High-Performance MIPS64™ Architecture
《半导体技术》 (Semiconductor Technology), CAS, CSCD, Peking University Core Journal, 2012, No. 3, p. 247 (1 page).
MIPS Technologies, Inc., a leading provider of industry-standard processor architectures and cores for digital home, networking and mobile applications, announced on February 10, 2012 that Cavium has licensed the MIPS64™ architecture to develop its new 28 nm OCTEON III MIPS64 family of multi-core processors.
Keywords: multi-core processor, architecture, ultra-high performance, digital home, mobile applications, OCTEON III, core
3. Aware conflict detection of non-uniform memory access system and prevention for transactional memory (Cited by: 3)
Authors: 王睿伯, 卢凯, 卢锡城. Journal of Central South University, SCIE, EI, CAS, 2012, No. 8, pp. 2266-2271 (6 pages).
Most transactional memory (TM) research has focused on multi-core processors, and other work has investigated clusters, leaving non-uniform memory access (NUMA) systems unexplored. Existing TM implementations suffer significant performance degradation on NUMA systems because they ignore the slower remote memory accesses. To solve this problem, a latency-based conflict detection method and a forecasting-based conflict prevention method were proposed, and a NUMA-aware TM system built on these techniques was presented. By reducing remote memory accesses and the transaction abort rate, the experimental results show that the NUMA-aware strategies deliver good practical TM performance on NUMA systems.
Keywords: transactional memory, non-uniform memory access (NUMA), conflict detection, conflict prevention
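The abstract names a latency-based conflict detection method and a forecasting-based conflict prevention method without giving details. The C sketch below illustrates only the general idea of the forecasting side as one might read it from the abstract: data that has recently caused aborts and whose home node is remote is executed under a lock rather than speculatively, so slow remote accesses are not wasted on transactions that are likely to abort. The threshold, table layout and node test are assumptions, not the paper's algorithm.

    /*
     * Minimal sketch of forecasting-based conflict prevention on a NUMA
     * system. The policy below is an illustrative guess, not the paper's.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TABLE_SIZE   1024
    #define ABORT_LIMIT  4                 /* assumed forecasting threshold */

    static uint32_t abort_history[TABLE_SIZE];   /* recent aborts per address slot */

    static unsigned slot_of(const void *addr)
    {
        return ((uintptr_t)addr >> 6) % TABLE_SIZE;   /* cache-line granularity */
    }

    /* Record an abort so future transactions on the same data are forecast
     * as likely to conflict again. */
    void on_transaction_abort(const void *addr)
    {
        abort_history[slot_of(addr)]++;
    }

    /* Forecast: if this address has aborted repeatedly and lives on a remote
     * node, fall back to serialized (lock-based) execution. */
    bool should_serialize(const void *addr, int home_node, int my_node)
    {
        bool remote = (home_node != my_node);    /* remote access costs more */
        bool hot    = abort_history[slot_of(addr)] >= ABORT_LIMIT;
        return remote && hot;
    }

    int main(void)
    {
        int x = 0;
        for (int i = 0; i < 4; i++)
            on_transaction_abort(&x);            /* simulate repeated aborts */
        printf("serialize? %d\n", should_serialize(&x, 1, 0));  /* prints 1 */
        return 0;
    }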
4. Using multi-threads to hide deduplication I/O latency with low synchronization overhead (Cited by: 1)
Authors: 朱锐, 秦磊华, 周敬利, 郑寰. Journal of Central South University, SCIE, EI, CAS, 2013, No. 6, pp. 1582-1591 (10 pages).
Data deduplication, as a compression method, has been widely used in backup systems to improve bandwidth and space efficiency. As the volume of data to be backed up explodes, the two main challenges in data deduplication are the CPU-intensive chunking and hashing work and the I/O-intensive disk-index access latency. Because the CPU-intensive work has been vastly parallelized and sped up by multi-core and many-core processors, I/O latency is likely to become the bottleneck in data deduplication. To alleviate the challenge of I/O latency in multi-core systems, a multi-threaded deduplication (Multi-Dedup) architecture was proposed. The main idea of Multi-Dedup is to use parallel deduplication threads to hide the I/O latency. A prefix-based concurrent index was designed to maintain the internal consistency of the deduplication index with low synchronization overhead, and a collisionless cache array was designed to preserve locality and similarity within the parallel threads. In experiments on various real-world datasets, Multi-Dedup achieves 3-5 times performance improvement when combined with the locality-based ChunkStash and local-similarity-based SiLo methods. In addition, Multi-Dedup dramatically decreases synchronization overhead and achieves 1.5-2 times performance improvement compared with traditional lock-based synchronization methods.
Keywords: multi-thread, multi-core, parallel data deduplication
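The abstract's prefix-based concurrent index is not specified further; the C sketch below shows one plausible reading of the idea: the leading byte of each chunk fingerprint selects a shard with its own lock, so deduplication threads that work on different prefixes never contend on the same lock. Shard count, fingerprint length and function names are illustrative assumptions, not the Multi-Dedup implementation.

    /*
     * Sketch of a prefix-partitioned concurrent fingerprint index: the first
     * byte of the fingerprint picks a shard, and only that shard's lock is
     * taken, keeping synchronization overhead low across threads.
     */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define SHARDS     256            /* one shard per value of the first byte */
    #define SHARD_CAP  4096           /* toy fixed-size shard */
    #define FP_LEN     20             /* e.g. SHA-1 fingerprint */

    struct shard {
        pthread_mutex_t lock;
        size_t          count;
        uint8_t         fp[SHARD_CAP][FP_LEN];
    };

    static struct shard index_shards[SHARDS];

    /* Returns true if the fingerprint was already present (duplicate chunk),
     * false if it was inserted as new. Only one shard lock is taken. */
    bool lookup_or_insert(const uint8_t fp[FP_LEN])
    {
        struct shard *s = &index_shards[fp[0]];   /* prefix selects the shard */
        bool dup = false;

        pthread_mutex_lock(&s->lock);
        for (size_t i = 0; i < s->count; i++)
            if (memcmp(s->fp[i], fp, FP_LEN) == 0) { dup = true; break; }
        if (!dup && s->count < SHARD_CAP)
            memcpy(s->fp[s->count++], fp, FP_LEN);
        pthread_mutex_unlock(&s->lock);
        return dup;
    }

    int main(void)
    {
        for (int i = 0; i < SHARDS; i++)
            pthread_mutex_init(&index_shards[i].lock, NULL);

        uint8_t fp[FP_LEN] = {0x3a, 0x01};        /* toy fingerprint */
        printf("first lookup duplicate?  %d\n", lookup_or_insert(fp));  /* 0 */
        printf("second lookup duplicate? %d\n", lookup_or_insert(fp));  /* 1 */
        return 0;
    }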
5. Dynamic thermal management by greedy scheduling algorithm
Authors: QU Shuang-xi, ZHANG Min-xuan, LIU Guang-hui, LIU Tao. Journal of Central South University, SCIE, EI, CAS, 2012, No. 1, pp. 193-199 (7 pages).
Chip multiprocessors (CMPs) allow thread-level parallelism, thus increasing performance. However, this comes at the cost of temperature problems: CMPs require more power, creating a non-uniform power map and hotspots. Aiming at this problem, a thread scheduling algorithm, the greedy scheduling algorithm, was proposed to reduce thermal emergencies and improve throughput. The greedy scheduling algorithm was implemented in the Linux kernel on an Intel quad-core system. The experimental results show that the greedy scheduling algorithm reduces hardware dynamic thermal management (DTM) events by 9.6%-78.5% across various combinations of workloads, and delivers throughput 5.2% higher on average, and up to 9.7% higher, than the Linux standard scheduler.
Keywords: greedy scheduling algorithm, chip multiprocessor, thermal-aware
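The abstract describes a greedy, thermal-aware thread scheduler implemented in the Linux kernel. The short user-space C sketch below shows only the greedy assignment step as one might infer it: at each scheduling interval the hottest thread is mapped to the coolest core, spreading the thermal load so fewer DTM events are triggered. The heat metric and the qsort formulation are assumptions; the paper's kernel-level algorithm is not reproduced.

    /*
     * Greedy thermal-aware assignment step (illustrative): sort cores by
     * temperature ascending, threads by estimated heat descending, then pair
     * them off so the coolest core receives the hottest thread.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define NCORES 4

    struct core   { int id; double temp_c; };   /* current core temperature */
    struct thread { int id; double heat;   };   /* estimated heat contribution */

    static int cooler_first(const void *a, const void *b)
    {
        const struct core *x = a, *y = b;
        return (x->temp_c > y->temp_c) - (x->temp_c < y->temp_c);
    }

    static int hotter_first(const void *a, const void *b)
    {
        const struct thread *x = a, *y = b;
        return (y->heat > x->heat) - (y->heat < x->heat);
    }

    int main(void)
    {
        struct core   cores[NCORES]   = {{0, 72.0}, {1, 55.0}, {2, 63.0}, {3, 80.0}};
        struct thread threads[NCORES] = {{10, 0.9}, {11, 0.2}, {12, 0.6}, {13, 0.4}};

        /* Greedy step: coolest core gets the hottest thread, and so on. */
        qsort(cores,   NCORES, sizeof cores[0],   cooler_first);
        qsort(threads, NCORES, sizeof threads[0], hotter_first);

        for (int i = 0; i < NCORES; i++)
            printf("thread %d (heat %.1f) -> core %d (%.1f C)\n",
                   threads[i].id, threads[i].heat, cores[i].id, cores[i].temp_c);
        return 0;
    }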