With the rapid development of information technology, IoT devices play a major role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of edge nodes close to users is limited, so hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and access hit rate. However, current schemes cannot guarantee that every sub-message of a complete data item stored by an edge node meets the requirements of hot data. How to detect and delete redundant data in edge nodes while protecting user privacy and dynamic data integrity has therefore become a challenging problem. This paper proposes a redundant data detection method that meets privacy protection requirements: by scanning the ciphertext, it determines whether each sub-message of the data in the edge node meets the requirements of hot data. It has the same effect as a zero-knowledge proof and does not reveal user privacy. In addition, for redundant sub-data that does not meet the requirements of hot data, this paper proposes a redundant data deletion scheme that preserves dynamic data integrity. We use Content Extraction Signatures (CES) to generate a signature for the remaining hot data after the redundant data is deleted. The feasibility of the scheme is demonstrated through security analysis and efficiency analysis.
Efficient and effective data acquisition is of theoretical and practical importance in WSN applications, because data measured and collected by a WSN is often unreliable: it is frequently accompanied by noise and error, missing values, or inconsistent data. Motivated by fog computing, which focuses on effectively offloading computation-intensive tasks from resource-constrained devices, this paper proposes a simple yet effective data acquisition approach capable of filtering abnormal data while meeting real-time requirements. Our method uses a cooperation mechanism that combines an architectural and an algorithmic approach. First, the sensor node, with its limited computing resources, only detects and marks suspicious data using a lightweight algorithm. Second, the cluster head evaluates suspicious data by referring to the data from the other sensor nodes in the same cluster and discards abnormal data directly. Third, the sink node fills in the discarded data with an approximate value using a nearest-neighbor data supplement method. Through this architecture, each node consumes only a few computational resources, and the heavy computing load is distributed across several nodes. Simulation results show that our data acquisition method is effective in terms of real-time outlier filtering and computing overhead.
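The sink node's nearest-neighbor data supplement step can be illustrated with a short sketch. The function name and the timestamp-indexed representation below are illustrative assumptions, not the paper's actual interface: a discarded reading is approximated by the value recorded closest in time to it.

```python
import bisect

def nearest_neighbor_fill(timestamps, values, missing_ts):
    """Approximate a discarded reading with the value recorded
    at the timestamp closest to the missing one.
    `timestamps` must be sorted ascending."""
    i = bisect.bisect_left(timestamps, missing_ts)
    # The nearest neighbor is either just before or just after the gap.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    nearest = min(candidates, key=lambda j: abs(timestamps[j] - missing_ts))
    return values[nearest]

# Readings at t = 0, 10, 20, 40; the reading at t = 28 was discarded.
print(nearest_neighbor_fill([0, 10, 20, 40], [1.0, 1.2, 1.1, 1.6], 28))  # 1.1
```

A production version would also bound how far away a "nearest" neighbor may be before declaring the value unrecoverable.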
As superconducting quantum computing continues to advance at an unprecedented pace, there is a compelling demand for the innovation of specialized electronic instruments that act as crucial conduits between quantum processors and host computers. Here, we introduce a microwave measurement and control system (M²CS) dedicated to large-scale superconducting quantum processors. M²CS features a compact modular design that balances overall performance, scalability, and flexibility. Electronic tests of M²CS show key metrics comparable to commercial instruments. Benchmark tests on transmon superconducting qubits further show qubit coherence and gate fidelities comparable to state-of-the-art results, confirming M²CS's capability to meet the stringent requirements of quantum experiments running on intermediate-scale quantum processors. The compact and scalable nature of the design holds the potential to support over 1000 qubits after upgrades in stability and integration. The M²CS architecture may also be adapted to a wider range of scenarios, including other quantum computing platforms such as trapped ions and silicon quantum dots, as well as more traditional applications like microwave kinetic inductance detectors and phased-array radar systems.
Cloud computing technology is changing the development and usage patterns of IT infrastructure and applications. Virtualized and distributed systems, as well as unified management and scheduling, have greatly improved computing and storage. Management has become easier, and OAM costs have been significantly reduced. Cloud desktop technology is developing rapidly. With this technology, users can flexibly and dynamically use virtual machine resources, companies' efficiency in using and allocating resources is greatly improved, and information security is ensured. In most existing virtual cloud desktop solutions, computing and storage are bound together, and data is stored as image files. This limits the flexibility and expandability of systems and is insufficient for meeting customers' requirements in different scenarios.
Reversible data hiding techniques are capable of reconstructing the original cover image from stego-images. Recently, many researchers have focused on reversible data hiding to protect intellectual property rights. In this paper, we combine reversible data hiding with the chaotic Henon map as an encryption technique to achieve an acceptable level of confidentiality in cloud computing environments. Haar digital wavelet transformation (HDWT) is applied to convert an image from the spatial domain into the frequency domain, and the decimal parts of the coefficients and the integer parts of the high-frequency band are modified to hide secret bits. Finally, the modified coefficients are inversely transformed to produce the stego-images.
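As a rough illustration of how a chaotic Henon map can serve as an encryption primitive (this is a generic keystream sketch under assumed parameters and function names, not the authors' HDWT-based scheme), the map's orbit can be quantized into bytes and XORed with the data; applying the same operation twice decrypts.

```python
def henon_keystream(n, x=0.1, y=0.3, a=1.4, b=0.3):
    """Derive n pseudo-random bytes from the chaotic Henon map
    x' = 1 - a*x^2 + y, y' = b*x (classic parameters a=1.4, b=0.3).
    The initial point (x, y) acts as the secret key."""
    stream = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x  # one Henon iteration
        stream.append(int(abs(x) * 1e6) % 256)  # quantize the orbit to a byte
    return bytes(stream)

def henon_xor(data, x0=0.1, y0=0.3):
    """XOR data with the Henon keystream; XOR is involutive,
    so the same call decrypts."""
    return bytes(d ^ k for d, k in zip(data, henon_keystream(len(data), x0, y0)))

ciphertext = henon_xor(b"wavelet coefficients")
assert henon_xor(ciphertext) == b"wavelet coefficients"
```

Note that such ad-hoc chaotic ciphers are sketches for intuition only; they carry none of the security guarantees of standardized ciphers.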
Integrating marketing and distribution businesses is crucial for improving the coordination of equipment and the efficient management of multi-energy systems. New energy sources are continuously being connected to distribution grids; this, however, increases the complexity of the information structure of marketing and distribution businesses. The existing unified data model and the coordinated application of marketing and distribution suffer from various drawbacks. As a solution, this paper presents a data model of "one graph of marketing and distribution" and a framework for graph computing, by analyzing the current trends of business and data in the marketing and distribution fields and using graph data theory. Specifically, this work aims to determine the correlation between distribution transformers and marketing users, which is crucial for elucidating the connection between marketing and distribution. To this end, a novel identification algorithm is proposed based on the collected marketing and distribution data. A forecasting application is then developed based on the proposed algorithm to realize the coordinated prediction and consumption of distributed photovoltaic power generation and distribution loads. Furthermore, an operation and maintenance (O&M) knowledge graph reasoning application is developed to improve the intelligent O&M capability of marketing and distribution equipment.
An attempt has been made to develop a distributed software infrastructure model for onboard data fusion system simulation, which also applies to netted radar systems, onboard distributed detection systems, and advanced C3I systems. Two architectures are provided and verified: one is based on the pure TCP/IP protocol and the client/server model and is implemented with Winsock; the other is based on CORBA (Common Object Request Broker Architecture). Both models improve the performance of the data fusion simulation system in terms of reliability, flexibility, and scalability. Their study provides a valuable exploration of incorporating distributed computation concepts into radar system simulation techniques.
Cloud computing is becoming an important solution for providing scalable computing resources via the Internet. Because there are tens of thousands of nodes in a data center, the probability of server failures is nontrivial, so guaranteeing service reliability is a critical challenge. Fault-tolerance strategies, such as checkpointing, are commonly employed; however, because of edge switch failures, a checkpoint image may become inaccessible, and current checkpoint-based fault-tolerance methods therefore cannot achieve the best effect. In this paper, we propose an optimal checkpoint method that is edge-switch-failure-aware. The method includes two algorithms: the first employs the data center topology and communication characteristics to select the checkpoint image storage server; the second employs the checkpoint image storage characteristics as well as the data center topology to select the recovery server. Simulation experiments demonstrate the effectiveness of the proposed method.
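The core idea of the first algorithm can be sketched in a few lines: place the checkpoint image on a host that does not hang off the same edge switch as the protected server, so a single switch failure cannot cut off both, and prefer nearby candidates. All names, and the hop-count representation of the topology, are illustrative assumptions rather than the paper's actual algorithm.

```python
def select_storage_server(hosts, edge_switch, hops, source):
    """Pick a checkpoint storage server on a different edge switch
    from the source host, so that one edge-switch failure cannot make
    both the server and its checkpoint image inaccessible; among
    valid candidates, prefer the one with the fewest hops."""
    candidates = [h for h in hosts
                  if h != source and edge_switch[h] != edge_switch[source]]
    return min(candidates, key=lambda h: hops[source, h])

# Toy fat-tree fragment: h1 and h2 share edge switch s1; h3 and h4 sit on s2.
edge_switch = {"h1": "s1", "h2": "s1", "h3": "s2", "h4": "s2"}
hops = {("h1", "h3"): 4, ("h1", "h4"): 6}
print(select_storage_server(list(edge_switch), edge_switch, hops, "h1"))  # h3
```

The recovery-server selection described in the abstract would layer a second criterion (where the image actually resides) on top of the same topology data.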
In order to discover the main causes of elevator group accidents in an edge computing environment, a multi-dimensional data model of elevator accident data is established using data cube technology. A method combining the classical Apriori algorithm with this model is proposed and implemented, digging out frequent itemsets in the elevator accident data to explore the main reasons for the occurrence of elevator accidents. In addition, a collaborative edge model of elevator accidents is set up to achieve data sharing, making it possible to inspect the details behind each cause in order to confirm the causes of elevator accidents. Lastly, association rules are applied to find the patterns underlying elevator accidents.
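The Apriori step referenced above mines itemsets level by level, pruning any candidate whose support falls below a threshold. The sketch below is a minimal generic implementation; the accident attributes in the example are invented for illustration, not data from the study.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every itemset contained in at least min_support
    transactions, growing candidates one level at a time and
    pruning infrequent ones (classical Apriori)."""
    transactions = [frozenset(t) for t in transactions]
    current = list({frozenset([i]) for t in transactions for i in t})
    frequent, k = {}, 1
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # Candidate (k+1)-itemsets: unions of surviving k-itemsets.
        current = list({a | b for a, b in combinations(survivors, 2)
                        if len(a | b) == k + 1})
        k += 1
    return frequent

accidents = [{"door fault", "overload"},
             {"door fault", "maintenance overdue"},
             {"door fault", "overload"},
             {"overload"}]
freq = apriori(accidents, min_support=2)
print(freq[frozenset({"door fault", "overload"})])  # 2
```

Association rules then follow by comparing the support of an itemset with that of its subsets (e.g. confidence of "door fault → overload" is 2/3 here).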
Edge computing is a highly virtualized paradigm that can serve Internet of Things (IoT) devices more efficiently. It is a non-trivial extension of cloud computing, which can not only meet the big data processing requirements of cloud computing but also collect and analyze distributed data. However, it inherits many security and privacy challenges of cloud computing, such as authentication and access control. To address these problems, we propose a new efficient privacy-preserving aggregation scheme for edge computing. Our scheme consists of two steps. First, we divide the data of the end users with the Simulated Annealing Module Partition (SAMP) algorithm. Then, the end sensors and edge nodes respectively perform a differential aggregation mechanism with the Differential Aggregation Encryption (DAE) algorithm, which combines noise interference with an encryption algorithm involving a trusted authority (TA). Experiment results show that DAE can preserve user privacy and has significantly less computation and communication overhead than existing approaches.
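The abstract does not specify the DAE construction, but the "noise interference" idea can be illustrated with the standard Laplace mechanism from differential privacy (a generic sketch, not the authors' algorithm; all names and parameters are assumptions): each end sensor perturbs its reading before the edge node aggregates, so the aggregator only ever sees noisy values.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_aggregate(readings, epsilon, sensitivity=1.0, seed=None):
    """Each end sensor adds Laplace noise with scale sensitivity/epsilon
    to its own reading; the edge node sums only the perturbed values."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return sum(r + laplace_noise(scale, rng) for r in readings)

readings = [21.5, 22.0, 20.8, 23.1]
print(noisy_aggregate(readings, epsilon=0.5, seed=42))  # near 87.4, but perturbed
```

Smaller epsilon gives stronger privacy at the cost of a noisier aggregate; a real scheme would combine this with the encryption layer the abstract mentions.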
Cloud computing can significantly improve efficiency in Internet utilization and data management. Several cloud applications (file sharing, backup, data upload/download, etc.) imply transfers of large amounts of data without real-time requirements. In several use cases, cloud computing solutions reduce operational costs and guarantee the target QoS. These solutions become critical when satellite systems are utilized, since resources are limited, network latency is huge, and bandwidth costs are high. Using satellite capacity for cloud computing bulk traffic, while keeping acceptable performance for interactive applications, is very important and can limit connectivity costs. This goal can be achieved by installing a proxy agent in the Set Top Box (STB) to differentiate traffic and assign bandwidth according to priority, leaving spare capacity to bulk cloud computing traffic. This aim is typically reached using a specific QoS architecture that adds functional blocks at the network or lower layers; we propose instead to manage the process at the transport layer only. The endpoint proxy implements a new transport protocol called TCP Noordwijk+, which introduces a flow-control differentiation capability. The proxy includes TCPN+, which efficiently transfers low-priority bulk data and handles interactive data, keeping a high degree of friendliness. The outcomes of ns-2 simulations confirm the applicability and good performance of the proposed solution.
Nowadays, an increasing number of people choose to outsource their computing and storage demands to the Cloud. In order to ensure the integrity of data in the untrusted Cloud, especially dynamic files that can be updated online, we propose an improved dynamic provable data possession model. We use homomorphic tags to verify the integrity of the file, and hash values generated from secret values and tags to prevent replay and forgery attacks. Compared with previous works, our proposal reduces the computational and communication complexity from O(log n) to O(1). We conducted experiments to confirm this improvement and extended the model to the file-sharing situation.
This paper reviews recently developed optical interconnect technologies designed for scalable, low-latency, and high-throughput communications within data centers or high-performance computers. Three typical architectures are discussed in detail: the broadcast-and-select based Optical Shared Memory Supercomputer Interconnect System (OSMOSIS) switch, the deflection-routing based Data Vortex switch, and the arrayed-waveguide-grating based Low-latency Interconnect Optical Network Switch (LIONS). In particular, we investigate the various loopback buffering technologies in LIONS and present a proof-of-principle testbed demonstration showing the feasibility of the LIONS architecture. Moreover, the performance of LIONS, Data Vortex, and OSMOSIS is compared, in terms of throughput and latency, with a traditional state-of-the-art electrical switching network based on the Flattened-Butterfly (FBF) architecture. The simulation-based performance study shows that the latency of LIONS is almost independent of the number of input ports and does not saturate even at very high input load.
In light of the coronavirus disease 2019 (COVID-19) outbreak caused by the novel coronavirus, companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion. Employees, however, have been exposed to different security risks because of working from home. Moreover, the rapid global spread of COVID-19 has increased the volume of data generated from various sources. Working from home depends mainly on cloud computing (CC) applications that help employees to accomplish their tasks efficiently. The cloud computing environment (CCE) is an unsung hero of the COVID-19 pandemic crisis: it consists of fast-paced practices for services that reflect the trend of rapidly deployable applications for maintaining data. Despite the increase in the use of CC applications, there is an ongoing research challenge in the domains of CCE concerning data, guaranteeing security, and the availability of CC applications. This paper, to the best of our knowledge, is the first to thoroughly explain the impact of the COVID-19 pandemic on CCE. Additionally, it highlights the security risks of working from home during the COVID-19 pandemic.
Aiming to increase the efficiency of gem design and manufacturing, a new method for the computer-aided design (CAD) of convex faceted gem cuts (CFGC) based on the half-edge data structure (HDS), including the algorithms for its implementation, is presented in this work. Using object-oriented methods, the geometrical elements of CFGC are classified and corresponding geometrical feature classes are established. Each class is implemented and embedded based on the gem process. Matrix arithmetic and analytical geometry are used to derive the affine transformations and the cutting algorithm. Based on the demand for a diversity of gem cuts, CAD functions for both free-style faceted cuts and parametric designs of typical cuts have been realized, along with the visualization and human-computer interaction of the CAD system, including two-dimensional and three-dimensional interaction, which enhances the flexibility and universality of the CAD system. Furthermore, data in this CAD system can also be used directly by the gem CAM module, which will promote gem CAD/CAM integration.
Funding ("With the rapid development of information technology…"): sponsored by the National Natural Science Foundation of China under grants No. 62172353, No. 62302114, No. U20B2046, and No. 62172115; the Innovation Fund Program of the Engineering Research Center for Integration and Application of Digital Learning Technology of the Ministry of Education, No. 1331007 and No. 1311022; the Natural Science Foundation of the Jiangsu Higher Education Institutions, Grant No. 17KJB520044; and the Six Talent Peaks Project in Jiangsu Province, No. XYDXX-108.
Funding ("Efficient and effective data acquisition…"): supported by the National Natural Science Foundation of China, "Research on Accurate and Fair Service Recommendation Approach in Mobile Internet Environment" (No. 61571066).
Funding ("As superconducting quantum computing…"): supported by the Science, Technology and Innovation Commission of Shenzhen Municipality (Grant Nos. KQTD20210811090049034, RCBS20231211090824040, and RCBS20231211090815032); the National Natural Science Foundation of China (Grant Nos. 12174178, 12204228, 12374474, and 123b2071); the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301703); the Shenzhen-Hong Kong Cooperation Zone for Technology and Innovation (Grant No. HZQB-KCZYB-2020050); and the Guangdong Basic and Applied Basic Research Foundation (Grant Nos. 2024A1515011714 and 2022A1515110615).
Funding ("Integrating marketing and distribution businesses…"): supported by the National Key R&D Program of China (2020YFB0905900).
Funding ("Cloud computing is becoming an important solution…"): supported by the Beijing Natural Science Foundation (4174100), NSFC (61602054), and the Fundamental Research Funds for the Central Universities.
Funding ("Edge computing is a highly virtualized paradigm…"): supported by the National Natural Science Foundation of China (61672321, 61771289, and 61832012); the Natural Science Foundation of Shandong Province (Grants ZR2021QF050 and ZR2021MF075); the Shandong Province Key Research and Development Plan (2019GGX101050); the Shandong Provincial Graduate Education Innovation Program (SDYY14052 and SDYY15049); the Qufu Normal University Science and Technology Project (xkj201525); the Shandong Province Agricultural Machinery Equipment Research and Development Innovation Project (2018YZ002); the Qufu Normal University Graduate Degree Thesis Research Innovation Funding Project (LWCXS201935); the Shandong Provincial Specialized Degree Postgraduate Teaching Case Library Construction Program; and the Shandong Provincial Postgraduate Education Quality Curriculum Construction Program.
Abstract: Cloud computing can significantly improve efficiency in Internet utilization and data management. Several cloud applications (file sharing, backup, data upload/download, etc.) imply transfers of large amounts of data without real-time requirements. In several use cases, cloud-computing solutions reduce operational costs and guarantee target QoS. These solutions become critical when satellite systems are utilized, since resources are limited, network latency is large, and bandwidth costs are high. Using satellite capacity for bulk cloud-computing traffic, while keeping acceptable performance for interactive applications, is very important and can limit connectivity costs. This goal can be achieved by installing in the Set Top Box (STB) a proxy agent that differentiates traffic and assigns bandwidth according to priority, leaving spare capacity to bulk cloud-computing traffic. This aim is typically reached with a specific QoS architecture that adds functional blocks at the network or lower layers. We propose to manage such a process at the transport layer only. The endpoint proxy implements a new transport protocol called TCP Noordwijk+ (TCPN+), introducing a flow-control differentiation capability. The proxy includes TCPN+, which efficiently transfers low-priority bulk data and handles interactive data, keeping a high degree of friendliness. The outcomes of Ns-2 simulations confirm the applicability and good performance of the proposed solution.
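The differentiation principle the proxy applies can be sketched as follows: interactive flows are served first, and bulk cloud traffic shares whatever capacity remains. This is only an illustration of the bandwidth-assignment idea, not the TCPN+ protocol itself; flow names and numbers are made up:

```python
def allocate(capacity_kbps, flows):
    """Interactive flows get their demand first; bulk flows split the rest."""
    alloc = {}
    remaining = capacity_kbps
    interactive = [f for f in flows if f["priority"] == "interactive"]
    bulk = [f for f in flows if f["priority"] == "bulk"]
    for f in interactive:
        alloc[f["name"]] = min(f["demand"], remaining)
        remaining -= alloc[f["name"]]
    share = remaining / len(bulk) if bulk else 0
    for f in bulk:
        alloc[f["name"]] = share     # spare capacity, evenly divided
    return alloc

flows = [
    {"name": "voip",   "priority": "interactive", "demand": 100},
    {"name": "web",    "priority": "interactive", "demand": 300},
    {"name": "backup", "priority": "bulk",        "demand": 5000},
    {"name": "sync",   "priority": "bulk",        "demand": 5000},
]
alloc = allocate(1000, flows)   # a hypothetical 1 Mbps satellite link
# voip and web get their full demand; backup and sync split the leftover 600 kbps
```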
Fund: supported by the Major Program of Shanghai Science and Technology Commission under Grant No. 10DZ1500200 and the Collaborative Applied Research and Development Project between Morgan Stanley and Shanghai Jiao Tong University, China.
Abstract: Nowadays, an increasing number of people choose to outsource their computing and storage demands to the Cloud. In order to ensure the integrity of data in the untrusted Cloud, especially dynamic files which can be updated online, we propose an improved dynamic provable data possession model. We use homomorphic tags to verify the integrity of the file and use hash values generated from secret values and tags to prevent replay and forgery attacks. Compared with previous works, our proposal reduces the computational and communication complexity from O(log n) to O(1). We conducted experiments to confirm this improvement and extended the model to the file-sharing situation.
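The abstract does not give the paper's exact tag construction, so the sketch below shows a generic linearly homomorphic tag scheme in the same spirit: the proof (mu, sigma) is constant-size regardless of how many blocks the file has, matching the O(1) claim. The field, PRF, and all parameter names are our assumptions:

```python
import hashlib
import random

P = (1 << 127) - 1                     # a Mersenne prime field

def prf(key, i):
    """Pseudo-random field element bound to block index i."""
    d = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") % P

def tag_blocks(key, alpha, blocks):
    """Client: one linearly homomorphic tag per file block."""
    return [(alpha * m + prf(key, i)) % P for i, m in enumerate(blocks)]

def prove(blocks, tags, challenge):
    """Server: constant-size proof (mu, sigma), whatever the file size."""
    mu = sum(c * blocks[i] for i, c in challenge) % P
    sigma = sum(c * tags[i] for i, c in challenge) % P
    return mu, sigma

def verify(key, alpha, challenge, mu, sigma):
    """Client: checks possession without holding the blocks themselves."""
    expected = (alpha * mu + sum(c * prf(key, i) for i, c in challenge)) % P
    return sigma == expected

key, alpha = b"secret-key", random.randrange(1, P)
blocks = [random.randrange(P) for _ in range(16)]
tags = tag_blocks(key, alpha, blocks)
challenge = [(i, random.randrange(1, P)) for i in random.sample(range(16), 4)]
mu, sigma = prove(blocks, tags, challenge)
ok = verify(key, alpha, challenge, mu, sigma)
```

The check works because sigma = Σ c_i(α·m_i + r_i) = α·mu + Σ c_i·r_i (mod P), so the verifier can recompute the right-hand side from its secrets alone.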
Fund: supported by the Department of Defense under Contract No. H88230-08-C-0202 and the Google Research Awards.
Abstract: This paper reviews recently developed optical interconnect technologies designed for scalable, low-latency, and high-throughput communications within datacenters or high-performance computers. Three typical architectures are discussed in detail: the broadcast-and-select based Optical Shared Memory Supercomputer Interconnect System (OSMOSIS) switch, the deflection-routing based Data Vortex switch, and the arrayed-waveguide-grating based Low-latency Interconnect Optical Network Switch (LIONS). In particular, we investigate the various loopback buffering technologies in LIONS and present a proof-of-principle testbed demonstration showing the feasibility of the LIONS architecture. Moreover, the performance of LIONS, Data Vortex, and OSMOSIS is compared, in terms of throughput and latency, with a traditional state-of-the-art electrical switching network based on the Flattened Butterfly (FBF) architecture. The simulation-based performance study shows that the latency of LIONS is almost independent of the number of input ports and does not saturate even at very high input load.
Abstract: In light of the coronavirus disease 2019 (COVID-19) outbreak caused by the novel coronavirus, companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion. Employees, however, have been exposed to different security risks because of working from home. Moreover, the rapid global spread of COVID-19 has increased the volume of data generated from various sources. Working from home depends mainly on cloud computing (CC) applications that help employees to accomplish their tasks efficiently. The cloud computing environment (CCE) is an unsung hero of the COVID-19 pandemic crisis: it provides the fast-paced, rapidly deployable application services needed to maintain data at scale. Despite the increase in the use of CC applications, there are ongoing research challenges in the CCE domain concerning data, guaranteeing security, and the availability of CC applications. This paper, to the best of our knowledge, is the first to thoroughly explain the impact of the COVID-19 pandemic on the CCE. Additionally, it highlights the security risks of working from home during the COVID-19 pandemic.
Fund: supported by the National Natural Science Foundation of China (21576240) and the Experimental Technology Research Program of China University of Geosciences (Key Program) (SJ-201422).
Abstract: Aiming to increase the efficiency of gem design and manufacturing, this work presents a new method for computer-aided design (CAD) of convex faceted gem cuts (CFGC) based on the half-edge data structure (HDS), including the algorithms for its implementation. Using object-oriented methods, the geometrical elements of CFGC are classified and corresponding geometrical feature classes are established; each class is implemented and embedded according to the gem-cutting process. Matrix arithmetic and analytical geometry are used to derive the affine transformation and the cutting algorithm. To meet the demand for a diversity of gem cuts, the CAD system realizes functions both for free-style faceted cuts and for parametric designs of typical cuts, as well as visualization and two- and three-dimensional human-computer interaction, which enhances the flexibility and universality of the CAD system. Furthermore, data in this CAD system can be used directly by the gem CAM module, which will promote gem CAD/CAM integration.
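A minimal half-edge sketch illustrating the twin/next connectivity that a faceted-cut mesh relies on; the two-triangle facet layout is a made-up example, not the paper's gem geometry:

```python
class HalfEdge:
    __slots__ = ("origin", "twin", "next", "face")
    def __init__(self, origin):
        self.origin = origin      # vertex index the half-edge starts at
        self.twin = None          # opposite half-edge on the adjacent face
        self.next = None          # next half-edge around the same face
        self.face = None

def build_mesh(faces):
    """Build twin/next links from per-face vertex loops (CCW order)."""
    edges = {}
    for fid, loop in enumerate(faces):
        n = len(loop)
        hes = [HalfEdge(v) for v in loop]
        for i, he in enumerate(hes):
            he.next = hes[(i + 1) % n]
            he.face = fid
            a, b = loop[i], loop[(i + 1) % n]
            edges[(a, b)] = he
            twin = edges.get((b, a))     # shared edge seen from the other face
            if twin:
                he.twin, twin.twin = twin, he
    return edges

def face_vertices(he):
    """Walk the next-pointers once around a facet."""
    out, cur = [], he
    while True:
        out.append(cur.origin)
        cur = cur.next
        if cur is he:
            return out

# two triangular facets sharing the edge between vertices 1 and 2
mesh = build_mesh([[0, 1, 2], [2, 1, 3]])
```

Because every interior edge stores two opposed half-edges, facet traversal, adjacency queries, and the cutting operations the paper derives can all follow constant-time pointer hops.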