HOLMES: A Platform for Detecting Malicious Inputs in Secure Collaborative Computation

Though maliciously secure multiparty computation (SMPC) ensures the confidentiality and integrity of the computation against malicious parties, such parties can still provide malformed inputs. As a result, when SMPC is used for collaborative computation, input can be manipulated to perform biasing and poisoning attacks. Parties may defend against many of these attacks by performing statistical tests over one another's input before the actual computation. We present HOLMES, a platform for expressing and performing statistical tests securely and efficiently. Using HOLMES, parties can perform well-known statistical tests or define new tests. For efficiency, instead of performing such tests naively in SMPC, HOLMES blends zero-knowledge proofs (ZK) and SMPC protocols, based on the insight that most of the computation for statistical tests is local to the party who provides the data. High-dimensional tests are critical for detecting malicious inputs but are prohibitively expensive in secure computation. To reduce this cost, HOLMES provides a new secure dimensionality reduction procedure tailored for high-dimensional statistical tests, which leverages recent developments in algebraic pseudorandom functions. Our evaluation shows that, for a variety of statistical tests, HOLMES is 18× to 40× more efficient than naively implementing the statistical tests in a generic SMPC framework.

I. INTRODUCTION

To meet the increasing demands of big data, many services today try to gain access to diverse and wide-ranging datasets. For this reason, a recent trend among competing business organizations is to perform collaborative computation over their joint datasets, so that they can make decisions based on more than just their own data [1–3]. This approach, however, raises data privacy issues, as organizations are often unwilling, and sometimes prohibited by regulators, to share data [4, 5].
A solution to this problem is secure multiparty computation (SMPC), which enables such collaboration without compromising privacy. SMPC has been used in various settings, such as data analytics and machine learning [6–22], and in a wide range of applications, such as medical [23] and financial [24] ones. Though SMPC ensures the privacy and correctness of the computation, it does not ensure that parties provide well-formed datasets as input. As a result, state-of-the-art works offering malicious security, such as Senate [6], Helen [11], and Private Deep Learning [19], have assumed that all parties provide well-formed data, even though the security of these systems ensures that parties cannot deviate from the protocol in many other ways. However, in real-world use cases, such as training models for anti-money laundering or for medical studies, manipulated input can lead to grave consequences. For instance, a malicious organization can gain an unfair advantage in market competition by contributing grossly biased data that makes the result of the collaborative computation unusable. This raises the following question: Can we practically detect malicious input in secure collaborative computation?

Though identifying every possible malicious input is infeasible, in many scenarios we know properties that honest input must satisfy. For instance, we know that age data must lie in a specific range (e.g., 0–100), and that in a typical city, only a small fraction of the population has age over 90. Indeed, range checks [25–27] are frequently used to limit the effect of misreported values in secure computation. However, range checks are not always enough. For example, suppose that two banks with the same number of clients use the ages of their clients to predict the success of a bank marketing campaign. A malicious bank can cut the combined mean age roughly in half, say from 20 to about 10, by contributing manipulated data in which every age is 1.
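The effect of this biasing attack is easy to see in plaintext. The sketch below (plain Python, not HOLMES code; the bank sizes and ages are illustrative assumptions) shows how a malicious bank's all-1 ages roughly halve the combined mean while still passing a per-record 0–100 range check.

```python
# Plaintext illustration of the biasing attack from the example above
# (not HOLMES code): two banks with equally many clients pool ages.
honest_ages = [20] * 1000      # honest bank: mean age 20
malicious_ages = [1] * 1000    # malicious bank: every reported age is 1

combined = honest_ages + malicious_ages
combined_mean = sum(combined) / len(combined)
print(combined_mean)           # 10.5: the combined mean drops by roughly half

# Every manipulated value lies in the valid range [0, 100], so a
# per-record range check alone would not flag this input.
assert all(0 <= a <= 100 for a in malicious_ages)
```

A statistical test over the distribution of ages, rather than a per-record bound, is what catches this kind of input.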
Therefore, if some statistical characteristics of the data must be enforced, statistical hypothesis testing [28, 29] can serve as a general tool to check the quality of the input. Indeed, statistical testing has long been a major tool in quality control [30–32], which checks the quality of all factors involved in manufacturing. Building defenses against ill-formed or biased input is also an active research area in machine learning [33–46]. In biasing attacks, the attacker provides biased data to reduce the accuracy of the model. In poisoning attacks, the attacker injects malicious input into the training dataset to influence the model. These attacks can not only affect correctness but also reveal information about the training data [47–49]. Various defenses against poisoning attacks are likewise based on computing statistical characteristics of the input [50, 51]. Thus, statistical tests are building blocks of many known and future defenses against biased or poisoned data in a variety of settings.

We present HOLMES, a platform for expressing a rich class of statistical tests and performing them efficiently and securely. HOLMES does not aim to prescribe which specific tests each application should run, because the right tests depend on the use case, and new research may open up new defenses. Instead, HOLMES enables parties in secure collaborative computation to express checks of statistical properties over the input, by offering a rich set of statistical tests and building blocks, and performs these tests securely and efficiently. The efficiency gain is indeed important, as it allows parties to run more statistical tests at the same cost. In sum, we envision that users of secure collaborative computation can use HOLMES to perform input checks before the actual computation, to detect malformed input.

(Figure: HOLMES workflow, showing statistical tests and the secure collaborative computation across a planning phase and an execution phase.)
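As a plaintext illustration of the kind of hypothesis test such a defense might run (the binning, expected counts, and threshold below are illustrative assumptions, not HOLMES's API), a Pearson chi-squared goodness-of-fit check over an age histogram rejects the all-1s dataset that a range check alone would accept:

```python
def chi_squared_statistic(observed, expected):
    """Pearson chi-squared goodness-of-fit statistic over histogram bins."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative expected age histogram: 100 clients, uniform over four bins.
expected = [25, 25, 25, 25]

honest = [24, 26, 25, 25]    # counts close to the expected distribution
poisoned = [100, 0, 0, 0]    # all ages in the lowest bin (e.g., every age is 1)

CRITICAL_95 = 7.815          # chi-squared 95% critical value, 3 degrees of freedom

assert chi_squared_statistic(honest, expected) < CRITICAL_95    # statistic 0.08
assert chi_squared_statistic(poisoned, expected) > CRITICAL_95  # statistic 300.0
```

In HOLMES, each party would prove in zero knowledge that its own data passes such a test before the joint computation, rather than revealing the histogram in the clear.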

[1]  Avi Feller,et al.  Algorithmic Decision Making and the Cost of Fairness , 2017, KDD.

[2]  András Sárközy,et al.  On finite pseudorandom binary sequences I: Measure of pseudorandomness, the Legendre symbol , 1997 .

[3]  Carlos V. Rozas,et al.  Innovative instructions and software model for isolated execution , 2013, HASP '13.

[4]  Thomas Rbement,et al.  Fundamentals of quality control and improvement , 1993 .

[5]  Michael W. Carroll Sharing Research Data and Intellectual Property Law: A Primer , 2015, PLoS biology.

[6]  Eli Ben-Sasson,et al.  Design of Symmetric-Key Primitives for Advanced Cryptographic Protocols , 2020, IACR Trans. Symmetric Cryptol..

[7]  Melissa Chase,et al.  Private Collaborative Neural Network Learning , 2017, IACR Cryptol. ePrint Arch..

[8]  Dawn Xiaodong Song,et al.  Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning , 2017, ArXiv.

[9]  Aleksei Udovenko,et al.  Cryptanalysis of the Legendre PRF and generalizations , 2020, IACR Cryptol. ePrint Arch..

[10]  Marcel Keller,et al.  Overdrive: Making SPDZ Great Again , 2018, IACR Cryptol. ePrint Arch..

[11]  Dan Bogdanov,et al.  A new way to protect privacy in large-scale genome-wide association studies , 2013, Bioinform..

[12]  Daniel Genkin,et al.  SGAxe: How SGX Fails in Practice , 2020 .

[13]  Cédric Fournet,et al.  Hash First, Argue Later: Adaptive Verifiable Computations on Outsourced Data , 2016, CCS.

[14]  Suresh Venkatasubramanian,et al.  The Johnson-Lindenstrauss Transform: An Empirical Study , 2011, ALENEX.

[15]  Xiao Wang,et al.  More Efficient MPC from Improved Triple Generation and Authenticated Garbling , 2020, IACR Cryptol. ePrint Arch..

[16]  Avi Wigderson,et al.  Completeness theorems for non-cryptographic fault-tolerant distributed computation , 1988, STOC '88.

[17]  Kang Yang,et al.  QuickSilver: Efficient and Affordable Zero-Knowledge Proofs for Circuits and Polynomials over Any Field , 2021, IACR Cryptol. ePrint Arch..

[18]  Alan Szepieniec,et al.  On the Use of the Legendre Symbol in Symmetric Cipher Design , 2021, IACR Cryptol. ePrint Arch..

[19]  Kaveh Razavi,et al.  CrossTalk: Speculative Data Leaks Across Cores Are Real , 2021, 2021 IEEE Symposium on Security and Privacy (SP).

[20]  Payman Mohassel,et al.  SecureML: A System for Scalable Privacy-Preserving Machine Learning , 2017, 2017 IEEE Symposium on Security and Privacy (SP).

[21]  Ralph B. D'Agostino,et al.  Goodness-of-Fit-Techniques , 2020 .

[22]  Mariana Raykova,et al.  Privacy-Preserving Distributed Linear Regression on High-Dimensional Data , 2017, Proc. Priv. Enhancing Technol..

[23]  Andreas Haeberlen,et al.  Honeycrisp: large-scale differentially private aggregation without a trusted core , 2019, SOSP.

[24]  Jon Howell,et al.  Geppetto: Versatile Verifiable Computation , 2015, 2015 IEEE Symposium on Security and Privacy.

[25]  Peter J. Costa Applied Mathematics for the Analysis of Biomedical Data: Models, Methods, and MATLAB (R) , 2017 .

[26]  Krishna P. Gummadi,et al.  Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment , 2016, WWW.

[27]  Jason Baldridge,et al.  Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns , 2018, TACL.

[28]  Peter Scholl,et al.  Low Cost Constant Round MPC Combining BMR and Oblivious Transfer , 2017, Journal of Cryptology.

[29]  Joseph M. Hellerstein,et al.  Senate: A Maliciously-Secure MPC Platform for Collaborative Analytics , 2020, IACR Cryptol. ePrint Arch..

[30]  Jens Groth,et al.  Efficient Zero-Knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting , 2016, EUROCRYPT.

[31]  Carl A. Gunter,et al.  Leaky Cauldron on the Dark Land: Understanding Memory Side-Channel Hazards in SGX , 2017, CCS.

[32]  Paulo Cortez,et al.  A data-driven approach to predict the success of bank telemarketing , 2014, Decis. Support Syst..

[33]  Jacob T. Schwartz,et al.  Fast Probabilistic Algorithms for Verification of Polynomial Identities , 1980, J. ACM.

[34]  Aaron Roth,et al.  The Algorithmic Foundations of Differential Privacy , 2014, Found. Trends Theor. Comput. Sci..

[35]  Ion Stoica,et al.  Opaque: An Oblivious and Encrypted Distributed Analytics Platform , 2017, NSDI.

[36]  Luis Muñoz-González,et al.  Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection , 2018, ArXiv.

[37]  Percy Liang,et al.  Certified Defenses for Data Poisoning Attacks , 2017, NIPS.

[38]  Marcel Keller,et al.  Practical Covertly Secure MPC for Dishonest Majority - Or: Breaking the SPDZ Limits , 2013, ESORICS.

[39]  Silvio Micali,et al.  The knowledge complexity of interactive proof-systems , 1985, STOC '85.

[40]  Andreas Haeberlen,et al.  Verifiable differential privacy , 2015, EuroSys.

[41]  Dawn Xiaodong Song,et al.  Libra: Succinct Zero-Knowledge Proofs with Optimal Prover Computation , 2019, IACR Cryptol. ePrint Arch..

[42]  Dario Fiore,et al.  LegoSNARK: Modular Design and Composition of Succinct Zero-Knowledge Proofs , 2019, IACR Cryptol. ePrint Arch..

[43]  Dragos Rotaru,et al.  Feistel Structures for MPC, and More , 2019, IACR Cryptol. ePrint Arch..

[44]  Dmitry Khovratovich Key recovery attacks on the Legendre PRFs within the birthday bound , 2019, IACR Cryptol. ePrint Arch..

[45]  Ivan Damgård,et al.  Multiparty Computation from Somewhat Homomorphic Encryption , 2012, IACR Cryptol. ePrint Arch..

[46]  Yao Lu,et al.  Oblivious Neural Network Predictions via MiniONN Transformations , 2017, IACR Cryptol. ePrint Arch..

[47]  Yuval Yarom,et al.  CacheOut: Leaking Data on Intel CPUs via Cache Evictions , 2020, 2021 IEEE Symposium on Security and Privacy (SP).

[48]  Fred Spiring,et al.  Introduction to Statistical Quality Control , 2007, Technometrics.

[49]  Ion Stoica,et al.  Cerebro: A Platform for Multi-Party Cryptographic Collaborative Learning , 2021, IACR Cryptol. ePrint Arch..

[50]  Nishant Kumar,et al.  CrypTFlow: Secure TensorFlow Inference , 2020, 2020 IEEE Symposium on Security and Privacy (SP).

[51]  Alexandr Andoni,et al.  Two Party Distribution Testing: Communication and Security , 2018, IACR Cryptol. ePrint Arch..

[52]  Joe Kilian,et al.  Uses of randomness in algorithms and protocols , 1990 .

[53]  Somesh Jha,et al.  CaPC Learning: Confidential and Private Collaborative Learning , 2021, ICLR.

[54]  Daniel Escudero,et al.  Secure training of decision trees with continuous attributes , 2020, IACR Cryptol. ePrint Arch..

[55]  Marcus Peinado,et al.  Controlled-Channel Attacks: Deterministic Side Channels for Untrusted Operating Systems , 2015, 2015 IEEE Symposium on Security and Privacy.

[56]  Cynthia Dwork,et al.  Differential Privacy , 2006, ICALP.

[57]  Farinaz Koushanfar,et al.  XONN: XNOR-based Oblivious Deep Neural Network Inference , 2019, IACR Cryptol. ePrint Arch..

[58]  Srinath T. V. Setty,et al.  Spartan: Efficient and general-purpose zkSNARKs without trusted setup , 2020, IACR Cryptol. ePrint Arch..

[59]  Manuel Blum,et al.  Coin flipping by telephone: a protocol for solving impossible problems , 1983, SIGACT News.

[60]  Ivan Damgård,et al.  On the Randomness of Legendre and Jacobi Sequences , 1990, CRYPTO.

[61]  Kristina Lerman,et al.  A Survey on Bias and Fairness in Machine Learning , 2019, ACM Comput. Surv..

[62]  K. F. Gauss,et al.  Theoria combinationis observationum erroribus minimis obnoxiae , 1823 .

[63]  Stefan Katzenbeisser,et al.  Private Evaluation of Decision Trees using Sublinear Cost , 2019, Proc. Priv. Enhancing Technol..

[64]  Chris Clifton,et al.  Privacy-preserving distributed mining of association rules on horizontally partitioned data , 2004, IEEE Transactions on Knowledge and Data Engineering.

[65]  Alexander May,et al.  Legendre PRF (Multiple) Key Attacks and the Power of Preprocessing , 2021, IACR Cryptol. ePrint Arch..

[66]  Rafail Ostrovsky,et al.  Prio+: Privacy Preserving Aggregate Statistics via Boolean Shares , 2021, IACR Cryptol. ePrint Arch..

[67]  Murat Kantarcioglu,et al.  SGX-BigMatrix: A Practical Encrypted Data Analytic Framework With Trusted Processors , 2017, CCS.

[68]  Richard Zippel,et al.  Probabilistic algorithms for sparse polynomials , 1979, EUROSAM.

[69]  Yuan Xiao,et al.  SgxPectre: Stealing Intel Secrets from SGX Enclaves Via Speculative Execution , 2018, 2019 IEEE European Symposium on Security and Privacy (EuroS&P).

[70]  Andreas Haeberlen,et al.  DJoin: differentially private join queries over distributed databases , 2012, OSDI 2012.

[71]  Dawn Song,et al.  Transparent Polynomial Delegation and Its Applications to Zero Knowledge Proof , 2020, 2020 IEEE Symposium on Security and Privacy (SP).

[72]  Máté Horváth,et al.  The Legendre Pseudorandom Function as a Multivariate Quadratic Cryptosystem: Security and Applications , 2023, IACR Cryptol. ePrint Arch..

[73]  J. Sheil,et al.  The Distribution of Non‐Negative Quadratic Forms in Normal Variables , 1977 .

[74]  Anantha Chandrakasan,et al.  Gazelle: A Low Latency Framework for Secure Neural Network Inference , 2018, IACR Cryptol. ePrint Arch..

[75]  Bernhard Schölkopf,et al.  Avoiding Discrimination through Causal Reasoning , 2017, NIPS.

[76]  Krishna P. Gummadi,et al.  A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual &Group Unfairness via Inequality Indices , 2018, KDD.

[77]  Marcel Keller,et al.  MP-SPDZ: A Versatile Framework for Multi-Party Computation , 2020, IACR Cryptol. ePrint Arch..

[78]  Andrew Chi-Chih Yao,et al.  Protocols for secure computations , 1982, FOCS 1982.

[79]  Cryptanalysis of the generalised Legendre pseudorandom function , 2020, Open Book Series.

[80]  Stephen E. Fienberg,et al.  Testing Statistical Hypotheses , 2005 .

[81]  Matei Zaharia,et al.  ObliDB: Oblivious Query Processing using Hardware Enclaves , 2017 .

[82]  Abhi Shelat,et al.  Full Accounting for Verifiable Outsourcing , 2017, CCS.

[83]  Beata Strack,et al.  Impact of HbA1c Measurement on Hospital Readmission Rates: Analysis of 70,000 Clinical Database Patient Records , 2014, BioMed research international.

[84]  Heather A. Piwowar,et al.  Data reuse and the open data citation advantage , 2013, PeerJ.

[85]  Dan Boneh,et al.  Bulletproofs: Short Proofs for Confidential Transactions and More , 2018, 2018 IEEE Symposium on Security and Privacy (SP).

[86]  Mariana Raykova,et al.  Secure Computation for Machine Learning With SPDZ , 2019, ArXiv.

[87]  Jonathan Katz,et al.  Global-Scale Secure Multiparty Computation , 2017, CCS.

[88]  Michael O. Rabin,et al.  Probabilistic Algorithms in Finite Fields , 1980, SIAM J. Comput..

[89]  Anirban Dasgupta,et al.  A sparse Johnson: Lindenstrauss transform , 2010, STOC '10.

[90]  Cunsheng Ding,et al.  On the Linear Complexity of Legendre Sequences , 1998, IEEE Trans. Inf. Theory.

[91]  Jonathan Katz,et al.  vSQL: Verifying Arbitrary SQL Queries over Dynamic Outsourced Databases , 2017, 2017 IEEE Symposium on Security and Privacy (SP).

[92]  J. Imhof Computing the distribution of quadratic forms in normal variables , 1961 .

[93]  Vinod M. Prabhakaran,et al.  Private Two-Terminal Hypothesis Testing , 2020, 2020 IEEE International Symposium on Information Theory (ISIT).

[94]  Raymond J. Mooney,et al.  Comparative Experiments on Disambiguating Word Senses: An Illustration of the Role of Bias in Machine Learning , 1996, EMNLP.

[95]  Sunil K. Chebolu,et al.  Counting Irreducible Polynomials over Finite Fields Using the Inclusion-Exclusion Principle , 2010 .

[96]  Kapil Vaswani,et al.  EnclaveDB: A Secure Database Using SGX , 2018, 2018 IEEE Symposium on Security and Privacy (SP).

[97]  Elaine Shi,et al.  xJsnark: A Framework for Efficient Verifiable Computation , 2018, 2018 IEEE Symposium on Security and Privacy (SP).

[98]  Cynthia Dwork,et al.  Calibrating Noise to Sensitivity in Private Data Analysis , 2006, TCC.

[99]  Eyal Kushilevitz,et al.  Falcon: Honest-Majority Maliciously Secure Framework for Private Deep Learning , 2021, Proc. Priv. Enhancing Technol..

[100]  Chang Liu,et al.  Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning , 2018, 2018 IEEE Symposium on Security and Privacy (SP).

[101]  Jonathan Lee,et al.  Proving the correct execution of concurrent services in zero-knowledge , 2018, IACR Cryptol. ePrint Arch..

[102]  Milton Packer,et al.  Data sharing in medical research , 2018, British Medical Journal.

[103]  Thomas F. Wenisch,et al.  Foreshadow: Extracting the Keys to the Intel SGX Kingdom with Transient Out-of-Order Execution , 2018, USENIX Security Symposium.

[104]  Melissa Chase,et al.  Property Inference from Poisoning , 2021, 2022 IEEE Symposium on Security and Privacy (SP).

[105]  Arnab Roy,et al.  Poseidon: A New Hash Function for Zero-Knowledge Proof Systems , 2021, USENIX Security Symposium.

[106]  Micah Goldblum,et al.  Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses , 2020, ArXiv.

[107]  Emmanuel Abbe,et al.  Privacy-Preserving Methods for Sharing Financial Risk Exposures , 2011, ArXiv.

[108]  Dragos Rotaru,et al.  MPC-Friendly Symmetric Key Primitives , 2016, CCS.

[109]  Fernando Diaz,et al.  Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommendation Systems , 2018, CIKM.

[110]  R. F.,et al.  Statistical Method from the Viewpoint of Quality Control , 1940, Nature.

[111]  Dan Boneh,et al.  Prio: Private, Robust, and Scalable Computation of Aggregate Statistics , 2017, NSDI.

[112]  Krishna P. Gummadi,et al.  Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction , 2018, WWW.

[113]  Luis Muñoz-González,et al.  Label Sanitization against Label Flipping Poisoning Attacks , 2018, Nemesis/UrbReas/SoGood/IWAISe/GDM@PKDD/ECML.

[114]  Tudor Dumitras,et al.  Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks , 2018, NeurIPS.

[115]  Yehuda Lindell,et al.  Universally composable two-party and multi-party secure computation , 2002, STOC '02.

[116]  Ion Stoica,et al.  Helen: Maliciously Secure Coopetitive Learning for Linear Models , 2019, 2019 IEEE Symposium on Security and Privacy (SP).

[117]  Heidi L. Williams,et al.  Intellectual Property Rights and Innovation: Evidence from the Human Genome , 2013, Journal of Political Economy.

[118]  Ignacio Cascudo,et al.  Flexible and Efficient Verifiable Computation on Encrypted Data , 2020, IACR Cryptol. ePrint Arch..

[119]  Vitaly Shmatikov,et al.  Privacy-preserving deep learning , 2015, 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton).

[120]  Xiao Wang,et al.  Ferret: Fast Extension for Correlated OT with Small Communication , 2020, IACR Cryptol. ePrint Arch..

[121]  Kush R. Varshney,et al.  Optimized Pre-Processing for Discrimination Prevention , 2017, NIPS.

[122]  Daniel M. Kane,et al.  A Derandomized Sparse Johnson-Lindenstrauss Transform , 2010, Electron. Colloquium Comput. Complex..

[123]  Martin R. Albrecht,et al.  MiMC: Efficient Encryption and Cryptographic Hashing with Minimal Multiplicative Complexity , 2016, ASIACRYPT.

[124]  Andreas Haeberlen,et al.  Orchard: Differentially Private Analytics at Scale , 2020, OSDI.

[125]  R. Davies The distribution of a linear combination of χ² random variables , 1980 .

[126]  Dimitris Achlioptas,et al.  Database-friendly random projections: Johnson-Lindenstrauss with binary coins , 2003, J. Comput. Syst. Sci..

[127]  Richard J. Lipton,et al.  A Probabilistic Remark on Algebraic Program Testing , 1978, Inf. Process. Lett..

[128]  Somesh Jha,et al.  Privacy-Preserving Ridge Regression with only Linearly-Homomorphic Encryption , 2018, IACR Cryptol. ePrint Arch..

[129]  Yehuda Lindell,et al.  How To Simulate It - A Tutorial on the Simulation Proof Technique , 2016, IACR Cryptol. ePrint Arch..

[130]  Krishna P. Gummadi,et al.  Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning , 2018, AAAI.

[131]  Silvio Micali,et al.  A Completeness Theorem for Protocols with Honest Majority , 1987, STOC 1987.

[132]  Susmita Sur-Kolay,et al.  Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare , 2015, IEEE Journal of Biomedical and Health Informatics.

[133]  Elias Bareinboim,et al.  Fairness in Decision-Making - The Causal Explanation Formula , 2018, AAAI.

[134]  B. Ripley,et al.  Robust Statistics , 2018, Encyclopedia of Mathematical Geosciences.

[135]  Craig Gentry,et al.  Pinocchio: Nearly Practical Verifiable Computation , 2013, 2013 IEEE Symposium on Security and Privacy.

[136]  Michael P. Wellman,et al.  SoK: Security and Privacy in Machine Learning , 2018, 2018 IEEE European Symposium on Security and Privacy (EuroS&P).

[137]  Viktória Tóth Collision and avalanche effect in families of pseudorandom binary sequences , 2007, Period. Math. Hung..

[138]  Dragos Rotaru,et al.  Maliciously Secure Matrix Multiplication with Applications to Private Deep Learning , 2020, IACR Cryptol. ePrint Arch..

[139]  Stratis Ioannidis,et al.  Privacy-Preserving Ridge Regression on Hundreds of Millions of Records , 2013, 2013 IEEE Symposium on Security and Privacy.

[140]  Daniel M. Kane,et al.  Recent Advances in Algorithmic High-Dimensional Robust Statistics , 2019, ArXiv.

[141]  D. Ruppert Robust Statistics: The Approach Based on Influence Functions , 1987 .

[142]  Data sharing and the future of science , 2018, Nature Communications.

[143]  Rüdiger Kapitza,et al.  Telling Your Secrets without Page Faults: Stealthy Page Table-Based Attacks on Enclaved Execution , 2017, USENIX Security Symposium.