Tsinghua Science and Technology

SPECIAL SECTION ON TRUSTED COMPUTING AND INFORMATION SECURITY

  • An Automatic Analysis Approach Toward Indistinguishability of Sampling on the LWE Problem

    Shuaishuai Zhu, Yiliang Han, and Xiaoyuan Yang

    Learning With Errors (LWE) is one of the Nondeterministic Polynomial-time (NP)-hard problems applied in cryptographic primitives against quantum attacks. However, the security and efficiency of schemes based on LWE are closely affected by the error sampling algorithms. The existing pseudo-random sampling methods potentially have security leaks that can fundamentally influence the security levels of previous cryptographic primitives. Given that these primitives are proved semantically secure, directly deducing the influences caused by leaks of the sampling algorithms may be difficult. Thus, we attempt to use an attack model based on an automatic learning system to identify and evaluate the practical security level of a cryptographic primitive that is semantically proved secure in indistinguishability security models. In this paper, we first analyze the existing major sampling algorithms in terms of their security and efficiency. Then, concentrating on the Indistinguishability under Chosen-Plaintext Attack (IND-CPA) security model, we realize the new attack model based on the automatic learning system. The experimental data demonstrate that the sampling algorithms play a key role in LWE-based schemes, with significant disturbance of the attack advantages, which may compromise security considerably. Moreover, our attack model is achievable with acceptable time and memory costs.
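
    For illustration only, the Python sketch below pairs a textbook rejection sampler for the discrete Gaussian error distribution commonly used in LWE instantiations with an empirical estimate of a distinguisher's advantage; it is not the authors' learning-based attack model, and the parameter sigma = 3.2 and all function names are assumptions.

        # Illustrative sketch only: a textbook rejection sampler for a discrete
        # Gaussian LWE error term, plus an empirical estimate of a distinguisher's
        # IND-CPA-style advantage. This does not reproduce the paper's attack model.
        import math
        import random

        def sample_discrete_gaussian(sigma: float, tail: int = 12) -> int:
            """Rejection-sample an integer e with Pr[e] proportional to exp(-e^2 / (2*sigma^2))."""
            bound = int(math.ceil(tail * sigma))
            while True:
                e = random.randint(-bound, bound)
                if random.random() < math.exp(-e * e / (2 * sigma * sigma)):
                    return e

        def empirical_advantage(distinguisher, trials: int = 10000) -> float:
            """Estimate Adv = |2 * Pr[the distinguisher guesses the challenge bit] - 1|."""
            correct = 0
            for _ in range(trials):
                b = random.randint(0, 1)
                # The challenger would hand over a sample depending on b; a fresh
                # error term stands in for that sample here.
                guess = distinguisher(b, sample_discrete_gaussian(3.2))
                correct += int(guess == b)
            return abs(2 * correct / trials - 1)

        # A blind-guessing adversary should show an advantage close to 0.
        print(empirical_advantage(lambda b, sample: random.randint(0, 1)))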

    2020, Vol. 25, No. 5, pp. 553-563
  • Modified Multi-Key Fully Homomorphic Encryption Based on NTRU Cryptosystem without Key-Switching

    Xiaoliang Che, Tanping Zhou, Ningbo Li, Haonan Zhou, Zhenhua Chen, and Xiaoyuan Yang

    Multi-Key Fully Homomorphic Encryption (MKFHE) based on the NTRU cryptosystem is an important alternative in post-quantum cryptography due to its simple scheme form, high efficiency, and fewer ciphertexts and keys. In 2012, Lopez-Alt et al. proposed the first NTRU-type MKFHE scheme, the LTV12 scheme, using the key-switching and modulus-reduction techniques, whose security relies on two assumptions: the Ring Learning With Errors (RLWE) assumption and the Decisional Small Polynomial Ratio (DSPR) assumption. However, the LTV12 and subsequent NTRU-type schemes are restricted to the family of power-of-2 cyclotomic rings, which may affect their security in the case of subfield attacks. Moreover, the key-switching technique of the LTV12 scheme requires a circular application of evaluation keys, which causes rapid growth of the error and thus limits the circuit depth. In this paper, an NTRU-type MKFHE scheme over prime cyclotomic rings without key-switching is proposed, which has the potential to resist the subfield attack and decrease the error exponentially during the homomorphic evaluation process. First, based on the RLWE and DSPR assumptions over prime cyclotomic rings, a detailed analysis of the factors affecting the error during the homomorphic evaluations in the LTV12 scheme is provided. Next, a Low Bit Discarded & Dimension Expansion of Ciphertexts (LBD&DEC) technique is proposed, and the inherent homomorphic multiplication decryption structure of NTRU is exploited, which together eliminate the key-switching operation of the LTV12 scheme. Finally, a leveled NTRU-type MKFHE scheme is developed using the LBD&DEC and modulus-reduction techniques. The analysis shows that, compared to the LTV12 scheme, the proposed scheme can decrease the magnitude of the error exponentially and minimize the dimension of ciphertexts.
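
    As a hedged illustration of the generic idea behind discarding the low bits of ciphertext coefficients (not the paper's LBD&DEC construction), the sketch below rounds coefficients from a modulus q down to a smaller modulus q'; the moduli and coefficients are arbitrary toy values.

        # Illustrative sketch only: the generic "drop the low bits" rounding step
        # behind modulus-reduction-style noise control, applied coefficient-wise.
        # This is not the paper's LBD&DEC technique.
        def discard_low_bits(coeffs, q, q_prime):
            """Scale each coefficient from Z_q to Z_q', rounding to the nearest value.

            The extra rounding error per coefficient is at most 1/2 in the new scale,
            while the existing noise shrinks roughly by the factor q'/q.
            """
            assert q_prime < q
            return [round(c * q_prime / q) % q_prime for c in coeffs]

        # Toy usage: reduce a ciphertext from modulus 2^16 to modulus 2^10.
        ct = [40961, 12345, 65535, 7]
        print(discard_low_bits(ct, 1 << 16, 1 << 10))  # [640, 193, 0, 0]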

    2020, Vol. 25, No. 5, pp. 564-578

REGULAR ARTICLES

  • Improving Semantic Part Features for Person Re-identification with Supervised Non-local Similarity

    Yifan Sun, Zhaopeng Dou, Yali Li, and Shengjin Wang

    In the person re-IDentification (re-ID) task, the learning of part-level features benefits from fine-grained information. To facilitate part alignment, which is a prerequisite for learning part-level features, a popular approach is to detect semantic parts with the use of human parsing or pose estimation. Such methods of semantic partition do offer cues for good part alignment but are prone to noisy part detection, especially when they are employed in an off-the-shelf manner. In response, this paper proposes a novel part feature learning method for re-ID that suppresses the impact of noisy semantic part detection through Supervised Non-local Similarity (SNS) learning. Given several detected semantic parts, SNS first locates their center points on the convolutional feature maps for use as a set of anchors and then evaluates the similarity values between these anchors and each pixel on the feature maps. The non-local similarity learning is supervised such that each anchor should be similar to itself and simultaneously dissimilar to any other anchor, thus yielding the SNS. Finally, each anchor absorbs features from all of the similar pixels on the convolutional feature maps to generate a corresponding part feature (SNS feature). We evaluate our method with extensive experiments conducted under both holistic and partial re-ID scenarios. Experimental results confirm that SNS consistently improves re-ID accuracy using human parsing or pose estimation, and that our results are on par with state-of-the-art methods.
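
    The aggregation step can be pictured with the NumPy sketch below, in which each anchor attends over all feature-map pixels and pools one part feature; it omits the paper's SNS supervision and network details, and the shapes used are arbitrary.

        # Illustrative sketch only (NumPy): anchors aggregate similarity-weighted
        # features from every pixel of the feature map. The SNS supervision itself
        # is not reproduced here.
        import numpy as np

        def aggregate_part_features(feature_map: np.ndarray, anchors: np.ndarray) -> np.ndarray:
            """feature_map: (H, W, C); anchors: (K, C). Returns K part features of size C."""
            h, w, c = feature_map.shape
            pixels = feature_map.reshape(h * w, c)                  # (HW, C)
            sim = anchors @ pixels.T                                # (K, HW) dot-product similarity
            weights = np.exp(sim - sim.max(axis=1, keepdims=True))  # softmax over pixels
            weights /= weights.sum(axis=1, keepdims=True)
            return weights @ pixels                                 # (K, C) weighted part features

        # Toy usage: a 24 x 8 feature map with 256 channels and 6 semantic-part anchors.
        rng = np.random.default_rng(0)
        fmap, anchor = rng.normal(size=(24, 8, 256)), rng.normal(size=(6, 256))
        print(aggregate_part_features(fmap, anchor).shape)  # (6, 256)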

    2020, Vol. 25, No. 5, pp. 636-646
  • Optimal Data Transmission in Backscatter Communication for Passive Sensing Systems

    Jumin Zhao, Ji Li, Dengao Li, and Haizhu Yang

    Computational Radio Frequency IDentification (CRFID) is a device that integrates passive sensing and computing applications; it is powered by electromagnetic waves and read by off-the-shelf Ultra High Frequency Radio Frequency IDentification (UHF RFID) readers. Unlike traditional RFID, which only identifies the ID of a tag, CRFID needs to transmit a large amount of sensing and computing data in mobile sensing scenarios. However, the current Electronic Product Code, Class-1 Generation-2 (EPC C1G2) protocol mainly targets the transmission of small amounts of data from multiple tags. When a large amount of data needs to be fed back, a more reliable communication mechanism must be used to ensure the efficiency of data exchange. The main strategy of this paper is to dynamically adjust the data frame length of the CRFID response according to energy acquisition and channel complexity, thereby improving the efficiency and reliability of CRFID backscatter communication. This is done by constructing a dynamic data frame length model and optimizing the command set of the interface protocol. Then, according to the actual situation of the uplink, a dynamic data validation method is designed, which reduces the data transmission delay and the probability of retransmission and improves the throughput. The simulation results show that the proposed scheme is superior to existing methods. Under different energy harvesting and channel conditions, the dynamic data frame length and verification method can approach the theoretical optimum.
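
    The frame-length trade-off behind dynamic sizing can be shown with the hedged sketch below, which picks the payload length that maximizes expected goodput for a given bit-error rate; the simple goodput model and the header_bits parameter are assumptions, not the paper's EPC C1G2 extension or energy model.

        # Illustrative sketch only: longer frames carry more payload per attempt but
        # fail more often, so the best length depends on the bit-error rate. This is
        # a generic link-layer model, not the paper's protocol design.
        def best_frame_length(p: float, header_bits: int = 48, max_len: int = 4096) -> int:
            """Return the payload length L (bits) maximizing L * (1 - p)^(L + header_bits)."""
            def goodput(length: int) -> float:
                return length * (1.0 - p) ** (length + header_bits)
            return max(range(1, max_len + 1), key=goodput)

        # Toy usage: a noisier channel favors shorter frames.
        for ber in (1e-4, 1e-3, 1e-2):
            print(ber, best_frame_length(ber))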

    2020, Vol. 25, No. 5, pp. 647-658
  • Analysis of a Firm's Optimal Size Based on Accessible Market Capacity

    Xiao Sun, Jun Qian, Yueting Chai, and Yi Liu

    E-commerce has dramatically reduced the limitations of space and time on economic activities, resulting in individuals having access to a huge number of consumers. In this paper, we propose an optimal firm size decision model containing management costs as a means of investigating the evolution of company size in e-commerce. Given that production decisions are made based on accessible market capacity, we explain how a company enters the market, and we draw an evolutionary path of the optimal company size. The results show that in the early expansion stage of accessible market capacity, a firm's optimal size keeps increasing; after reaching a peak, the change in a firm's optimal size depends on its cost management. When the accessible market capacity reaches a threshold, the firm will no longer be in the market, and may no longer exist. Finally, we construct a simulation framework based on complex adaptive systems to validate our proposed model. A simulation experiment confirms our model and reveals the dynamic co-evolution process of individual producers and firms.

    2020, Vol. 25, No. 5, pp. 659-667
  • Hybrid Structure Reliability Analysis Based on the Damped Newton Method

    Hongwei Zheng, Guangwei Meng, Feng Li, Tonghui Wei, Wei Luo, and Yaming Guo

    This paper presents a hybrid model reliability analysis method based on the damped Newton method, involving both random and interval variables, to solve the hybrid structural reliability problem. The method combines an outer iterative solution with an inner-layer numerical calculation. In the outer iteration, the method iteratively seeks an optimized solution for the interval variables by adding boundary constraint conditions based on damped Newton optimization theory. In the inner-layer solution, the method first reduces the dimension of the random variables through a dimension reduction method, then obtains the first four central moments of the function by applying a Taylor expansion, and finally calculates the reliability index of the structure from the fourth-moment formulation of the function. The results of a numerical example and an engineering ten-rod truss structure show that the proposed method can effectively solve the random-interval hybrid reliability problem and has better calculation accuracy than the two-layer iterative method.
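
    For readers unfamiliar with the optimization backbone, the sketch below runs a plain damped Newton iteration with backtracking on a toy quadratic; the hybrid random/interval reliability machinery of the paper is not reproduced, and the damping schedule shown is only one common choice.

        # Illustrative sketch only (NumPy): damped Newton iteration with a simple
        # backtracking rule. The paper's reliability-specific formulation is omitted.
        import numpy as np

        def damped_newton(grad, hess, x0, tol=1e-8, max_iter=100):
            """Minimize a smooth function given callables for its gradient and Hessian."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                g = grad(x)
                if np.linalg.norm(g) < tol:
                    break
                step = np.linalg.solve(hess(x), g)   # full Newton direction
                t = 1.0                              # damping factor
                while t > 1e-8 and np.linalg.norm(grad(x - t * step)) > (1 - 0.25 * t) * np.linalg.norm(g):
                    t *= 0.5                         # shrink the step until the gradient norm drops
                x = x - t * step
            return x

        # Toy usage: minimize f(x, y) = (x - 1)^2 + 10 * (y + 2)^2.
        grad = lambda v: np.array([2 * (v[0] - 1), 20 * (v[1] + 2)])
        hess = lambda v: np.diag([2.0, 20.0])
        print(damped_newton(grad, hess, [5.0, 5.0]))  # close to [1, -2]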

    2020, Vol. 25, No. 5, pp. 668-677
  • BAM: A Block-Based Bayesian Method for Detecting Genome-Wide Associations with Multiple Diseases

    Guanying Wu, Xuan Guo, and Baohua Xu

    Many human diseases involve multiple genes in complex interactions. Large Genome-Wide Association Studies (GWASs) have been considered to hold promise for unraveling such interactions. However, statistical tests for high-order epistatic interactions (≥2 Single Nucleotide Polymorphisms (SNPs)) raise enormous computational and analytical challenges. It is well known that a block-wise structure exists in the human genome due to Linkage Disequilibrium (LD) between adjacent SNPs. In this paper, we propose a novel Bayesian method, named BAM, for simultaneously partitioning SNPs into LD-blocks and detecting genome-wide multi-locus epistatic interactions that are associated with multiple diseases. Experimental results on simulated datasets demonstrate that BAM is powerful and efficient. We also applied BAM to two GWAS datasets from WTCCC, i.e., Rheumatoid Arthritis and Type 1 Diabetes, and accurately recovered the LD-block structure. Therefore, we believe that BAM is suitable and efficient for the full-scale analysis of multi-disease-related interactions in GWASs.
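
    A hedged sketch of the linkage-disequilibrium idea behind block partitioning is shown below: adjacent SNP columns are grouped greedily by their squared correlation (r^2); BAM's Bayesian partition and epistasis search are far richer than this, and the 0.2 cutoff is an arbitrary illustration.

        # Illustrative sketch only (NumPy): greedy LD-block cuts from the squared
        # correlation of adjacent genotype columns. Not the paper's Bayesian model.
        import numpy as np

        def greedy_ld_blocks(genotypes: np.ndarray, r2_cut: float = 0.2):
            """genotypes: (n_samples, n_snps) matrix of 0/1/2 allele counts. Returns lists of SNP indices."""
            blocks, current = [], [0]
            for j in range(1, genotypes.shape[1]):
                r = np.corrcoef(genotypes[:, j - 1], genotypes[:, j])[0, 1]
                if r * r >= r2_cut:
                    current.append(j)        # strong LD with the previous SNP: same block
                else:
                    blocks.append(current)   # weak LD: close the block and start a new one
                    current = [j]
            blocks.append(current)
            return blocks

        # Toy usage with independent random genotypes (mostly singleton blocks expected).
        rng = np.random.default_rng(1)
        print(greedy_ld_blocks(rng.integers(0, 3, size=(200, 10))))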

    2020, Vol. 25, No. 5, pp. 678-689
  • Real-Time Facial Pose Estimation and Tracking by Coarse-to-Fine Iterative Optimization

    Xiaolong Yang, Xiaohong Jia, Mengke Yuan, and Dong-Ming Yan

    We present a novel and efficient method for real-time estimation and tracking of multiple facial poses in a single frame or video. First, we combine two standard convolutional neural network models for face detection and mean shape learning to generate initial estimations of alignment and pose. Then, we design a bi-objective optimization strategy to iteratively refine the obtained estimations. This strategy achieves faster speed and more accurate outputs. Finally, we further apply algebraic filtering, including a Gaussian filter for background removal and an extended Kalman filter for target prediction, to maintain real-time tracking performance. Only ordinary RGB photos or videos are required, captured by a commodity monocular camera without any prior or label. We demonstrate the advantages of our approach by comparing it with the most recent work in terms of performance and accuracy.
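
    The target-prediction step can be illustrated with the minimal Kalman predict/update cycle below; it is a linear, constant-velocity sketch rather than the paper's extended Kalman filter or pose parameterization, and all noise parameters are assumed.

        # Illustrative sketch only (NumPy): one predict/update step of a linear
        # Kalman filter tracking a scalar position with a constant-velocity model.
        import numpy as np

        def kalman_step(x, P, z, dt=1.0, q=1e-2, r=1e-1):
            """State x = [position, velocity], covariance P, scalar position measurement z."""
            F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
            H = np.array([[1.0, 0.0]])                 # only the position is observed
            Q, R = q * np.eye(2), np.array([[r]])
            x, P = F @ x, F @ P @ F.T + Q              # predict
            y = z - H @ x                              # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
            return x + K @ y, (np.eye(2) - K @ H) @ P  # update

        # Toy usage: track a point moving at unit speed from noisy position readings.
        x, P = np.zeros(2), np.eye(2)
        for t in range(1, 6):
            x, P = kalman_step(x, P, np.array([t + 0.05]))
        print(x)  # position near 5, velocity near 1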

    2020, Vol. 25, No. 5, pp. 690-700

  • An Attribute-Based Encryption Scheme Based on Unrecognizable Trapdoors

    Ruizhong Du, Ailun Tan, and Junfeng Tian

    Attribute-Based Encryption (ABE) has been widely used for ciphertext retrieval in the cloud environment. However, bi-flexible attribute control and keyword privacy are difficult problems that have yet to be solved. In this paper, we introduce a denial-of-access policy and a mutual matching algorithm over a dataset to realize bidirectional control of attributes on the cloud server. To solve the problem of keyword privacy, we construct a secure trapdoor by adding random numbers, which effectively resists keyword guessing attacks from cloud servers and external attackers. System security is reduced to the Deterministic Bilinear Diffie-Hellman (DBDH) assumption. We validate our scheme through theoretical security analysis and experimental verification. Experiments are conducted on a real dataset, and the results show that the scheme has higher security and retrieval efficiency than previous methods.

    2020, Vol. 25, No. 5, pp. 579-588
  • A Novel Hybrid Method to Analyze Security Vulnerabilities in Android Applications

    Junwei Tang, Ruixuan Li, Kaipeng Wang, Xiwu Gu, and Zhiyong Xu

    We propose a novel hybrid method to analyze security vulnerabilities in Android applications. Our method combines static analysis, which consists of metadata and data flow analyses, with dynamic analysis, which includes dynamic executable scripts and application program interface hooks. Our hybrid method can effectively analyze nine major categories of important security vulnerabilities in Android applications. We design dynamic executable scripts that record and perform manual operations to customize the execution path of the target application. Our dynamic executable scripts can replace most manual operations, simplify the analysis process, and further verify the corresponding security vulnerabilities. We successfully perform static analysis on 5547 malware samples from Drebin and 10 151 real-world applications. The average analysis time of each application in Drebin is 4.52 s, whereas it reaches 92.02 s for real-world applications. Our system can detect all the labeled vulnerabilities among 56 labeled applications. Further dynamic verification shows that our static analysis accuracy approximates 95% for real-world applications. Experiments show that our dynamic analysis can effectively detect the "input unverified" vulnerability, which is difficult to detect with other methods. In addition, our dynamic analysis can be extended to detect more types of vulnerabilities.
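
    As one hedged example of what a static metadata check could look like (the paper's pipeline is far broader, and its exact rules are not reproduced here), the sketch below flags manifest components that are exported without requiring a permission; it assumes a decoded, plain-text AndroidManifest.xml.

        # Illustrative sketch only: flag activities, services, receivers, and
        # providers that are explicitly exported but declare no permission.
        # This is one hypothetical metadata rule, not the paper's analyzer.
        import xml.etree.ElementTree as ET

        ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

        def exported_without_permission(manifest_path: str):
            """Return (tag, name) pairs for components exported with no permission attribute."""
            findings = []
            root = ET.parse(manifest_path).getroot()
            for tag in ("activity", "service", "receiver", "provider"):
                for comp in root.iter(tag):
                    exported = comp.get(ANDROID_NS + "exported") == "true"
                    has_perm = comp.get(ANDROID_NS + "permission") is not None
                    if exported and not has_perm:
                        findings.append((tag, comp.get(ANDROID_NS + "name")))
            return findings

        # Usage on a decoded APK: print(exported_without_permission("AndroidManifest.xml"))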

    2020, Vol. 25, No. 5, pp. 589-603
  • A Memory-Related Vulnerability Detection Approach Based on Vulnerability Features

    Jinchang Hu, Jinfu Chen, Lin Zhang, Yisong Liu, Qihao Bao, Hilary Ackah-Arthur, and Chi Zhang

    Developing secure software systems is a major challenge in the software industry due to errors or weaknesses that introduce vulnerabilities into the software. To address this challenge, researchers often use the source code features of vulnerabilities to improve vulnerability detection. Notwithstanding the success achieved by these techniques, existing studies mainly focus on conceptual descriptions without an accurate definition of vulnerability features. In this study, we introduce a novel and efficient Memory-Related Vulnerability Detection Approach using Vulnerability Features (MRVDAVF). Our framework uses three distinct strategies to improve vulnerability detection. In the first stage, we introduce an improved Control Flow Graph (CFG) and a Pointer-related Control Flow Graph (PCFG) to describe the features of some common vulnerabilities, including memory leak, double-free, and use-after-free. Afterward, two algorithms, namely the Vulnerability Judging algorithm based on Vulnerability Features (VJVF) and the Feature Judging (FJ) algorithm, are employed to detect memory-related vulnerabilities. Finally, the proposed model is validated using three test cases obtained from the Juliet Test Suite. The experimental results show that the proposed approach is feasible and effective.
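
    A toy version of the underlying detection logic is sketched below: a per-pointer state machine over a flat event trace that flags double-free, use-after-free, and potential leaks; the paper's VJVF and FJ algorithms operate on control flow graphs rather than traces, so this is only an illustration.

        # Illustrative sketch only: track each pointer's state ('live' or 'freed')
        # along a trace and report the three vulnerability patterns named above.
        def check_trace(events):
            """events: list of (op, ptr) with op in {'alloc', 'free', 'use'}. Returns findings."""
            state, findings = {}, []
            for i, (op, ptr) in enumerate(events):
                s = state.get(ptr)
                if op == "alloc":
                    state[ptr] = "live"
                elif op == "free":
                    if s == "freed":
                        findings.append((i, ptr, "double-free"))
                    state[ptr] = "freed"
                elif op == "use":
                    if s == "freed":
                        findings.append((i, ptr, "use-after-free"))
            # anything still live at the end of the trace is a potential leak
            findings += [(-1, p, "memory-leak") for p, s in state.items() if s == "live"]
            return findings

        print(check_trace([("alloc", "p"), ("free", "p"), ("use", "p"), ("free", "p"), ("alloc", "q")]))
        # [(2, 'p', 'use-after-free'), (3, 'p', 'double-free'), (-1, 'q', 'memory-leak')]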

    2020, Vol. 25, No. 5, pp. 604-613
  • Fast Carrier Selection of JPEG Steganography Appropriate for Application

    Weixiang Ren, Yibo Xu, Liming Zhai, Lina Wang, and Ju Jia

    In recent years, improving the security of steganography has involved not only carrier security but also the distortion function. In practical environments, the existing carrier selection methods are limited by their complex algorithms and slow running speed, making them unsuitable for rapid communication. This study proposes a method for selecting carriers and improving the security of steganography. JPEG images are decompressed to the spatial domain. Then the correlation coefficients between adjacent pixels in the horizontal, vertical, counter-diagonal, and major-diagonal directions are calculated. The mean value of the four correlation coefficients is used to evaluate the security of each JPEG image. Images with low correlation coefficients are considered safe carriers and used for embedding in our scheme. The experimental results indicate that the stego images generated from the selected carriers exhibit a higher anti-steganalysis capability than those generated from randomly selected carriers. Under the premise of the same security level, images with low correlation coefficients have a higher capacity. Our algorithm runs very fast: the running time for a 2048×2048 image is less than 1 s.
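
    The scoring step is concrete enough to sketch: the code below computes adjacent-pixel correlations in the four directions named above and averages them into a single carrier score; the JPEG decompression step and the paper's selection threshold are omitted.

        # Illustrative sketch only (NumPy): mean adjacent-pixel correlation over the
        # horizontal, vertical, major-diagonal, and counter-diagonal directions.
        import numpy as np

        def corr(a: np.ndarray, b: np.ndarray) -> float:
            return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

        def carrier_score(img: np.ndarray) -> float:
            """img: 2-D grayscale array. A lower score marks a preferred carrier."""
            horizontal     = corr(img[:, :-1],  img[:, 1:])
            vertical       = corr(img[:-1, :],  img[1:, :])
            major_diagonal = corr(img[:-1, :-1], img[1:, 1:])
            counter_diag   = corr(img[:-1, 1:],  img[1:, :-1])
            return (horizontal + vertical + major_diagonal + counter_diag) / 4.0

        # Toy usage: random noise scores near 0, a smooth gradient scores near 1.
        rng = np.random.default_rng(2)
        print(carrier_score(rng.integers(0, 256, size=(128, 128)).astype(float)))
        print(carrier_score(np.tile(np.arange(128.0), (128, 1))))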

    2020, Vol. 25, No. 5, pp. 614-624
  • ePUF: A Lightweight Double Identity Verification in IoT

    Bo Zhao, Pengyuan Zhao, and Peiru Fan

    Remote authentication is a safe and verifiable mechanism. In the Internet of Things (IoT), remote hosts need to verify the legitimacy of the identities of terminal devices. However, embedded devices can hardly afford sufficient resources for the necessary trusted hardware components. Software authentication with no hardware guarantee is generally vulnerable to various network attacks. In this paper, we propose a lightweight remote verification protocol. The protocol utilizes the unique response returned by a Physical Unclonable Function (PUF) as the legitimate identity basis of the terminal devices and uses quadratic residues to encrypt the PUF authentication process, performing a double identity verification scheme. Our scheme is secure against man-in-the-middle attacks on the attestation response and prevents conspiracy attacks based on forged authentication.
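
    The quadratic-residue idea can be pictured with the Rabin-style sketch below: the device would transmit c = r^2 mod n, and only a verifier knowing the factors p and q of n can recover the square roots and match one against the enrolled PUF response; the key sizes, response encoding, and protocol flow here are assumptions, not the ePUF design.

        # Illustrative sketch only: recovering the four square roots of c modulo
        # n = p*q (primes with p = q = 3 mod 4) and matching the enrolled response.
        def crt_square_roots(c: int, p: int, q: int):
            """All four square roots of a quadratic residue c modulo n = p * q."""
            n = p * q
            rp = pow(c, (p + 1) // 4, p)              # square root of c modulo p
            rq = pow(c, (q + 1) // 4, q)              # square root of c modulo q
            yp, yq = pow(p, -1, q), pow(q, -1, p)     # CRT coefficients
            combine = lambda a, b: (a * q * yq + b * p * yp) % n
            return {combine(sp, sq) for sp in (rp, p - rp) for sq in (rq, q - rq)}

        # Toy usage with small primes (a real deployment needs large, secret p and q).
        p, q = 499, 547                               # both congruent to 3 modulo 4
        enrolled_response = 123456                    # stand-in for the PUF response from enrollment
        c = pow(enrolled_response, 2, p * q)          # what the device would transmit
        print(enrolled_response in crt_square_roots(c, p, q))  # True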

    2020, Vol. 25, No. 5, pp. 625-635
  • Call for Papers Special issue on Edge Computing for Smart City

    Tsinghua Science and Technology began publication in 1996. Since then, it has been an international academic journal sponsored by Tsinghua University and published bimonthly. The journal aims at presenting state-of-the-art scientific achievements in computer science and other IT fields. Since 2012, the journal has been included in the IEEE Xplore Digital Library in open access mode. In 2015, Tsinghua Science and Technology was indexed in the Science Citation Index Expanded, with an Impact Factor of 1.696 (2018). The concept of smart cities is formulated to create a sustainable environment and improve the quality of life effectively,

    2020, Vol. 25, No. 5, p. 701
  • Call for Papers Special Issue on AI Powered Service Optimization for Edge/Fog Computing

    Tsinghua Science and Technology began publication in 1996. Since then, it has been an international academic journal sponsored by Tsinghua University and published bimonthly. The journal aims at presenting state-of-the-art scientific achievements in computer science and other IT fields. Since 2012, the journal has been included in the IEEE Xplore Digital Library in open access mode. In 2015, Tsinghua Science and Technology was indexed in the Science Citation Index Expanded, with an Impact Factor of 1.696 (2018).

    2020, Vol. 25, No. 5, p. 702