In real-world settings involving consequential decision-making, the deployment of machine learning systems generally requires both reliable uncertainty quantification and protection of individuals’ privacy. We present a framework that treats these two desiderata jointly. Our framework is based on conformal prediction, a methodology that augments predictive models to return prediction sets that provide uncertainty quantification—they provably cover the true response with a user-specified probability, such as 90%. One might hope that when used with privately trained models, conformal prediction would yield privacy guarantees for the resulting prediction sets; unfortunately this is not the case. To remedy this key problem, we develop a method that takes any pretrained predictive model and outputs differentially private prediction sets. Our method follows the general approach of split conformal prediction; we use holdout data to calibrate the size of the prediction sets but preserve privacy by using a privatized quantile subroutine. This subroutine compensates for the noise introduced to preserve privacy in order to guarantee correct coverage. We evaluate the method on large-scale computer vision data sets.
Keywords: privacy, uncertainty quantification, conformal prediction, prediction sets
The impressive predictive accuracies of black-box machine learning algorithms on tightly controlled test beds do not sanctify their use in consequential applications. For example, given the gravity of medical decision-making, automated diagnostic predictions must come with rigorous instance-wise uncertainty to avoid silent, high-consequence failures. Furthermore, medical data science requires privacy guarantees, since individuals would suffer material harm were their data to be accessed or reconstructed by a nefarious actor. While uncertainty quantification and privacy are generally dealt with in isolation, they arise together in many real-world predictive systems, and, as we discuss, they interact. Accordingly, the work that we present here involves a framework that addresses uncertainty and privacy jointly. Specifically, we develop a differentially private version of conformal prediction that results in private, rigorous, finite-sample uncertainty quantification for any model and any data set at little computational cost.
This work takes a modular viewpoint on data science: we seek to give practitioners the flexibility to train whatever underlying private model gives the best performance (e.g., via deep learning), then later endow the model with rigorous statistical properties, without modifying that underlying model. See Figure 1. Zooming out, perhaps the most consistent trend in the history of engineering is that modular engineering building blocks—like integrated circuits or reaction chemistry—are key to scalable and deployable systems where each part can be improved and debugged separately. Private prediction sets allow simple, private uncertainty quantification for any system and thus provide conceptual building blocks for data scientists constructing such systems.
Turning to the details, our approach builds on the notion of prediction sets—subsets of the response space that provably cover the true response variable with prespecified probability (e.g., 90%). Formally, for a test point with feature vector $X_{n+1} \in \mathcal{X}$ and response $Y_{n+1} \in \mathcal{Y}$, we compute an uncertainty set function, $\widehat{\mathcal{C}}(\cdot)$, mapping a feature vector to a subset of $\mathcal{Y}$ such that
\[
\mathbb{P}\left(Y_{n+1} \in \widehat{\mathcal{C}}(X_{n+1})\right) \;\ge\; 1 - \alpha, \qquad (1)
\]
for a user-specified confidence level $1 - \alpha$, where $\alpha \in (0, 1)$. We use the output of an underlying predictive model (e.g., a pretrained, privatized neural network) along with a held-out calibration data set, $\{(X_i, Y_i)\}_{i=1}^{n}$, from the same distribution as $(X_{n+1}, Y_{n+1})$ to fit the set-valued function $\widehat{\mathcal{C}}(\cdot)$. The probability in expression (1) is therefore taken over both the randomness in $(X_{n+1}, Y_{n+1})$ and in $\{(X_i, Y_i)\}_{i=1}^{n}$. If the underlying model expresses uncertainty, $\widehat{\mathcal{C}}(X_{n+1})$ will be large, signaling skepticism regarding the model’s prediction.
Moreover, we introduce a differentially private mechanism for fitting $\widehat{\mathcal{C}}(\cdot)$, such that the sets that we compute have low sensitivity to the removal of any calibration point. This will allow an individual to contribute a calibration data point without fear that the prediction sets will reveal their sensitive information. Note that even if the underlying model is trained in a privacy-preserving fashion, this provides no privacy guarantee for the calibration data. Therefore, we will provide an adjustment that masks the calibration data set with additional randomness, addressing both privacy and uncertainty simultaneously.
See Figure 2 for a concrete example of private prediction sets applied to the automated diagnosis of COVID-19. In this setting, the prediction sets represent a set of plausible diagnoses based on an X-ray image—either viral pneumonia (presumed COVID-19), bacterial pneumonia, or normal. We guarantee that the true diagnosis is contained in the prediction set with high probability, while simultaneously ensuring that an adversary cannot detect the presence of any one of the X-ray images used to train the predictive system.
Our main contribution is a privacy-preserving algorithm that takes as input any predictive model together with a calibration data set, and outputs a set-valued function that maps any input feature vector to a set of labels such that the true label is contained in the predicted set with probability at least $1 - \alpha$, as per Equation (1). In order to generate prediction sets satisfying this property, we use ideas from split conformal prediction (J. Lei et al., 2018; Papadopoulos et al., 2002; Vovk et al., 2005), modifying this approach to ensure privacy. Importantly, if the provided predictive model is also trained in a differentially private way, then the whole pipeline that maps data to a prediction set function is differentially private as well.
In Algorithm 1, we sketch our main procedure.
input: predictor $\hat{f}$, calibration data $\{(X_i, Y_i)\}_{i=1}^{n}$, privacy level $\varepsilon$, confidence level $1 - \alpha$
1. For $i = 1, \dots, n$, compute conformity score $s_i = s(X_i, Y_i)$
2. Compute an $\varepsilon$-differentially private quantile of $\{s_1, \dots, s_n\}$ at a carefully calibrated level, denoted $\hat{s}$
output: prediction set function $\widehat{\mathcal{C}}(x) = \{y : s(x, y) \le \hat{s}\}$
Algorithm 1 first computes the conformity scores for all training samples. Informally, these scores indicate how well a feature–label pair ‘conforms’ to the provided model $\hat{f}$, a low score implying high conformity and a high score being indicative of an atypical point from the perspective of $\hat{f}$. Then, the algorithm generates a certain carefully chosen private quantile of the scores. Finally, it returns a prediction set function which, for a given input feature vector, returns all labels that result in a conformity score below the critical threshold $\hat{s}$.
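To make this workflow concrete, the following Python sketch mirrors the structure of Algorithm 1; the names score_fn, private_quantile, and q_level are our placeholders (a possible exponential-mechanism implementation of private_quantile is sketched after Algorithm 2), and the exact calibrated level used by our method is given later in (3).

import numpy as np

def calibrate_private_cutoff(scores, q_level, epsilon, private_quantile):
    """Calibration step in the spirit of Algorithm 1: privately estimate a score cutoff.

    scores           : conformity scores s_i computed on the held-out calibration set
    q_level          : target quantile level (a slightly inflated version of 1 - alpha)
    epsilon          : differential privacy parameter
    private_quantile : an epsilon-differentially private quantile subroutine
    """
    return private_quantile(np.asarray(scores, dtype=float), q_level, epsilon)

def prediction_set(x, s_hat, score_fn, labels):
    """Prediction step: return every label whose conformity score falls below the cutoff."""
    return [y for y in labels if score_fn(x, y) <= s_hat]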
Our main theoretical result asserts that Algorithm 1 has strict coverage guarantees and is differentially private. In addition, we show that the coverage is almost tight, that is, not much higher than $1 - \alpha$.
We obtain a gap between the lower and upper bounds on the probability of coverage that is roughly of order $1/(\varepsilon n)$, up to logarithmic factors, similar to the standard $1/n$ gap without the privacy requirement. With this, we provide the first theoretical insight into the cost of privacy in conformal prediction. To shed further light on the properties of our procedure, we perform an extensive empirical study where we evaluate the tradeoff between the level of privacy on one hand, and the coverage and size of prediction sets on the other.
Differential privacy (Dwork et al., 2006) has become the de facto standard for privacy-preserving data analysis, as witnessed by its widespread adoption in large-scale systems such as those by Google (Bittau et al., 2017; Erlingsson et al., 2014), Apple (2017), Microsoft (Ding et al., 2017), and the US Census Bureau (Abowd, 2018; Dwork, 2019). This increasing adoption of differential privacy goes hand in hand with steady progress in differentially private model training, ranging across both convex (Bassily et al., 2014; Chaudhuri et al., 2011) and nonconvex (Abadi et al., 2016; Neel et al., 2020) settings. Our work complements these works by proposing a procedure that can be combined with any differentially private model training algorithm to account for the uncertainty of the resulting predictive model by producing a prediction set function with formal guarantees. At a technical level, closest to our algorithm on the privacy side are existing methods for reporting histograms and quantiles in a privacy-preserving fashion (Dwork et al., 2006; Feldman & Steinke, 2017; J. Lei, 2011; Smith, 2011; Xu et al., 2013). Finally,
there have also been significant efforts to quantify uncertainty with formal privacy guarantees through various types of private confidence intervals (Gaboardi et al., 2019; Karwa & Vadhan, 2017; Sheffet, 2017; Wang et al., 2019). While prediction sets resemble confidence intervals, they are fundamentally different objects as they do not aim to cover a fixed parameter of the population distribution, but rather a randomly sampled outcome. As a result, existing methods for differentially private confidence intervals do not generalize to our problem setting.
Prediction sets as a way to represent uncertainty are a classical idea, going back at least to tolerance regions in the 1940s (Tukey, 1947; Wald, 1943; Wilks, 1941, 1942). See Krishnamoorthy & Mathew (2009) for an overview of tolerance regions and Park et al. (2020) for a recent application to deep learning models. Conformal prediction (Shafer & Vovk, 2008; Vovk et al., 1999, 2005) is a related way of producing predictive sets with finite-sample guarantees. Most relevant to the present work, split conformal prediction (J. Lei et al., 2015, 2018; Papadopoulos et al., 2002) is a convenient version that uses data splitting to give prediction sets in a computationally efficient way. Vovk (2015) and Barber et al. (2021) refine this approach to reuse data for both training and calibration, improving statistical efficiency. Recent work has targeted desiderata such as small set sizes (Angelopoulos et al., 2020; Sadinle et al., 2019), coverage that is approximately balanced across feature space (Cauchois, Gupta, & Duchi, 2020; Barber et al., 2019; Guan, 2020; Izbicki et al., 2019; Romano et al., 2019; Romano et al., 2020; Vovk, 2012), and coverage that is balanced across classes (Guan & Tibshirani, 2019; Hechtlinger et al., 2018; J. Lei, 2014; Sadinle et al., 2019). Further extensions address problems in distribution estimation (Vovk et al., 2017, 2020), handling or testing distribution shift (Cauchois, Gupta, Ali, & Duchi, 2020; Hu & Lei, 2020; Tibshirani et al., 2019), causal inference (L. Lei & Candès, 2020), and controlling other notions of statistical error (Bates et al., 2021). We suggest (Angelopoulos & Bates, 2021) and (Shafer & Vovk, 2008) as introductory tutorials on conformal prediction for the unfamiliar reader. Lastly, we highlight two alternative approaches with a similar goal to conformal prediction. First, the calibration technique in (Jung et al., 2020) and (Gupta et al., 2021) generates prediction sets via the estimation of higher moments across many overlapping subpopulations. Second, there is a family of techniques that define a utility function balancing set-size and coverage and then search for set-valued predictors to maximize this utility (del Coz et al., 2009; Grycko, 1993; Mortier et al., 2020). The present work builds on split conformal prediction, but modifies the calibration step to preserve privacy.
In this section, we formally introduce the main concepts in our problem setting. Split conformal prediction assumes access to a predictive model, $\hat{f}$, and aims to output prediction sets that achieve coverage by quantifying the uncertainty of $\hat{f}$ and the intrinsic randomness in $X$ and $Y$. It quantifies this uncertainty using a calibration data set consisting of $n$ i.i.d. samples, $\{(X_i, Y_i)\}_{i=1}^{n}$, that were not used to train $\hat{f}$. The calibration proceeds by defining a score function $s : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$. Without loss of generality we take the range of this function to be the unit interval $[0, 1]$. The reader should think of the score as measuring the degree of consistency of the response with the features based on the predictive model (e.g., the size of the residual in a regression model), but any score function would lead to correct coverage. To simplify notation we will write $s(x, y)$ to denote the score, where we implicitly assume an underlying model $\hat{f}$. From this score function, one forms prediction sets as follows:
\[
\widehat{\mathcal{C}}(x) \;=\; \left\{ y : s(x, y) \le \hat{s} \right\}, \qquad (2)
\]
for a choice of $\hat{s}$ based on the calibration dataset. In particular, $\hat{s}$ is taken to be a quantile of the calibration scores $s_i = s(X_i, Y_i)$ for $i = 1, \dots, n$. In nonprivate conformal prediction, one simply takes $\hat{s}$ to be the $\lceil (n + 1)(1 - \alpha) \rceil / n$ empirical quantile of the calibration scores, and then a standard argument shows that the coverage property in (1) holds. In this work we show how to take a modified private quantile that maintains this coverage guarantee.
As a concrete example of standard split conformal prediction, consider classifying an image $x$ into one of a thousand classes, $\mathcal{Y} = \{1, \dots, 1000\}$. Given a standard classifier $\hat{f}(x)$ outputting a probability distribution over the classes (e.g., the output of a softmax layer), we can define a natural score function based on the activation of the correct class, $s(x, y) = 1 - \hat{f}(x)_y$. Then we take $\hat{s}$ as the upper 0.9 quantile of the calibration scores (suitably adjusted as above) and define $\widehat{\mathcal{C}}$ as in Equation (2). That is, we take as the cutoff the value $\hat{s}$ such that if we include all classes with estimated probability greater than $1 - \hat{s}$, our sets have (only slightly more than) 90% coverage on the calibration data. The result on a test point is then a set of plausible classes guaranteed to contain the true class with probability 90%. Our proposed method will follow a similar workflow, but with a slightly different choice of $\hat{s}$ to guarantee both coverage and privacy.
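For readers who prefer code, a minimal (nonprivate) version of this recipe is sketched below in Python; the array names and the quantile convention (the $\lceil (n+1)(1-\alpha) \rceil$-th smallest score) follow the standard split conformal construction rather than the private procedure developed in this paper.

import numpy as np

def split_conformal_classifier(cal_probs, cal_labels, alpha=0.1):
    """Standard (nonprivate) split conformal calibration for classification.

    cal_probs  : (n, num_classes) array of softmax probabilities on the calibration set
    cal_labels : (n,) array of integer class labels
    Returns the score cutoff s_hat and a function mapping a softmax vector to a label set.
    """
    n = len(cal_labels)
    # Conformity score: one minus the softmax probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Standard split conformal cutoff: the ceil((n + 1)(1 - alpha))-th smallest score.
    rank = int(np.ceil((n + 1) * (1 - alpha)))
    s_hat = np.sort(scores)[min(rank, n) - 1]

    def predict_set(probs):
        # Include every class whose score 1 - p_y falls below the cutoff.
        return np.where(1.0 - probs <= s_hat)[0]

    return s_hat, predict_set

Passing a single test image's softmax vector to predict_set returns the indices of all plausible classes.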
We next formally define differential privacy. We say that two data sets are neighboring if they differ in a single element, that is, either data set can be obtained from the other by removing a single entry. For example, $D = \{z_1, \dots, z_n\}$ and $D' = \{z_1, \dots, z_{i-1}, z_{i+1}, \dots, z_n\}$, for some $i \in \{1, \dots, n\}$. Differential privacy then requires that two neighboring data sets produce similar distributions on the output.
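For concreteness, the standard definition (Dwork et al., 2006), restated here in the notation above, is the following: a randomized algorithm $\mathcal{A}$ is $\varepsilon$-differentially private if, for all neighboring data sets $D$ and $D'$ and all measurable sets of outputs $\mathcal{O}$,
\[
\mathbb{P}\left(\mathcal{A}(D) \in \mathcal{O}\right) \;\le\; e^{\varepsilon}\, \mathbb{P}\left(\mathcal{A}(D') \in \mathcal{O}\right).
\]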
In short, if no adversary observing the algorithm’s output can distinguish between $D$ and a data set $D'$ with the $i$-th entry removed, the presence of individual $i$ in the analysis cannot be detected and hence their privacy is not compromised.
A key ingredient to our procedure is a privatized quantile of the conformity scores. We obtain this private quantile by discretizing the scores into bins and applying the exponential mechanism (McSherry & Talwar, 2007), one of the most ubiquitous tools in differential privacy. Our private quantile routine is then an extension of the private median routine proposed by Feldman and Steinke (2017) to handle arbitrary quantiles. Specifically, let us fix a number of bins $K$, as well as edges $0 = e_0 < e_1 < \dots < e_K = 1$. The edges define the bins $B_j = (e_{j-1}, e_j]$, $j = 1, \dots, K$. We use Algorithm 2 with an appropriately chosen quantile level as a subroutine of our main conformal procedure.
input: calibration scores $\{s_i\}_{i=1}^{n}$, bins $\{B_j\}_{j=1}^{K}$ with edges $\{e_j\}_{j=0}^{K}$, privacy level $\varepsilon$, quantile level $q$
1. For all $i$, compute the discretized score $[s_i]$, where $[s_i]$ is the right edge of the bin containing $s_i$
2. For all $j \in \{1, \dots, K\}$, compute the exponential-mechanism utility $u_j$ of edge $e_j$, measuring how close $e_j$ is to the empirical $q$-quantile of the discretized scores
3. Let $\hat{s} = e_j$ with probability proportional to $\exp\!\left(\frac{\varepsilon}{2} u_j\right)$
output: private quantile $\hat{s}$
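As a rough illustration of such a subroutine, the Python sketch below computes a binned quantile via the exponential mechanism; the count-based utility, the assumed sensitivity of one (under replace-one neighboring data sets), and the uniform bin edges are simplifying choices of ours and do not reproduce the exact utility or level calibration used in Algorithms 2 and 3.

import numpy as np

def private_quantile(scores, q, epsilon, K=1000, rng=None):
    """Sketch of an epsilon-DP quantile for scores in [0, 1] via the exponential mechanism."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    edges = np.linspace(0.0, 1.0, K + 1)[1:]  # right bin edges e_1, ..., e_K
    # Discretize each score to the right edge of the bin containing it.
    discretized = edges[np.minimum(np.searchsorted(edges, scores), K - 1)]
    target_rank = int(np.ceil(q * n))
    # Utility of edge e_j: negative distance (in counts) from the target rank.
    counts_leq = np.array([np.sum(discretized <= e) for e in edges])
    utility = -np.abs(counts_leq - target_rank)
    # Exponential mechanism: sample an edge with probability proportional to
    # exp(epsilon * utility / 2), assuming the utility has sensitivity one.
    weights = np.exp(epsilon * (utility - utility.max()) / 2.0)
    return rng.choice(edges, p=weights / weights.sum())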
We next precisely state our main algorithm and its formal guarantees. First, our algorithm has a calibration step, Algorithm 3, carried out once using the calibration scores as input; this is the heart of our proposed procedure. The output of this step is a cutoff $\hat{s}$ learned from the calibration data. With this in hand, one forms the prediction set for a test point as in Equation (2), which for completeness we state in Algorithm 4.
input: calibration scores $\{s_i\}_{i=1}^{n}$, privacy parameter $\varepsilon$, coverage level $1 - \alpha$, bins $\{B_j\}_{j=1}^{K}$
1. Compute the $(1 - \tilde{\alpha})$-quantile of $\{s_i\}_{i=1}^{n}$ via Algorithm 2, where $\tilde{\alpha}$ is defined in (3); denote the result $\hat{s}$
output: calibrated score cutoff $\hat{s}$
input: test point $x$, calibrated score cutoff $\hat{s}$
output: prediction set as in (2): $\widehat{\mathcal{C}}(x) = \{y : s(x, y) \le \hat{s}\}$.
This algorithm both satisfies differential privacy and guarantees correct coverage, as stated next in Proposition 1 and Theorem 2, respectively. The privacy property is a straightforward consequence of the privacy guarantees on the exponential mechanism (McSherry & Talwar, 2007).
Therefore, the main challenge for theory lies in understanding how to compensate for the added differentially private noise in order to get strict, distribution-free coverage guarantees.
Remark 1. We can choose the free parameter $\gamma$ in (3) to minimize $1 - \tilde{\alpha}$, which leads to the smallest prediction sets. The optimal value depends only on $n$ and $\varepsilon$, and can be found by taking a derivative of (3); see Appendix C.
Note that the significance level $1 - \tilde{\alpha}$ in (3) is just a slightly inflated version of the nonprivate conformal quantile level $\lceil (n + 1)(1 - \alpha) \rceil / n$. Indeed, taking $\varepsilon \to \infty$ in (3) recovers the nonprivate quantile. Intuitively, we must raise the significance level to compensate for the noise introduced to preserve privacy. We note that an additive factor of order $\frac{1}{\varepsilon n}$ is in fact necessary to compute an approximate quantile with $\varepsilon$-differential privacy (Bun et al., 2017).
We informally sketch the main ideas in the proof, deferring the details to the Appendix.
Proof sketch. We can write the probability of coverage as
\[
\mathbb{P}\left(Y_{n+1} \in \widehat{\mathcal{C}}(X_{n+1})\right) \;=\; \mathbb{P}\left(s(X_{n+1}, Y_{n+1}) \le \hat{s}\right) \;=\; \mathbb{E}\left[\tilde{F}(\hat{s})\right],
\]
where $\tilde{F}$ is the distribution of the appropriately discretized scores. We observe that, for any $\gamma \in (0, 1)$, with probability at least $1 - \gamma$ the exponential mechanism with input $\{s_i\}_{i=1}^{n}$ and level $1 - \tilde{\alpha}$ returns an empirical quantile no smaller than a slightly deflated empirical quantile of the discretized scores. This allows us to lower bound the coverage by a quantity of the form $\mathbb{E}\left[\tilde{F}\left(\hat{F}_n^{-1}(t)\right)\right]$ for a suitable level $t$, where $\hat{F}_n$ denotes the empirical distribution of the discretized scores. For any $t$, the random variable $\tilde{F}\left(\hat{F}_n^{-1}(t)\right)$ is distributed as the $\lceil t n \rceil$-th order statistic of a super-uniform sample, which implies that it can be stochastically lower bounded by the $\lceil t n \rceil$-th order statistic of a uniform sample. This order statistic follows a beta distribution with known parameters, whose expectation can hence be evaluated analytically. Carefully choosing $\tilde{\alpha}$ as a function of this expectation completes the proof of the theorem.
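The beta-distribution fact invoked at the end of the sketch is classical: the $k$-th order statistic of $n$ i.i.d. uniform random variables follows a $\mathrm{Beta}(k, n + 1 - k)$ distribution, with mean $k / (n + 1)$. A quick simulation (ours, for intuition only) confirms the expectation used in the proof:

import numpy as np

# The k-th order statistic of n i.i.d. Uniform(0, 1) samples is Beta(k, n + 1 - k),
# so its mean is k / (n + 1); check this by Monte Carlo.
rng = np.random.default_rng(0)
n, k, trials = 1000, 900, 20000
order_stats = np.sort(rng.uniform(size=(trials, n)), axis=1)[:, k - 1]
print("simulated mean:", order_stats.mean())
print("k / (n + 1)   :", k / (n + 1))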
With the validity of Algorithm 3 established, we next prove that the algorithm is not too conservative in the sense that the coverage is not far above $1 - \alpha$. A key quantity in our upper bound is
\[
p^{\max}_{K} \;=\; \max_{1 \le j \le K} \mathbb{P}\left(s(X, Y) \in B_j\right),
\]
the largest probability mass that the score distribution places on any single bin. This quantity captures the impact of the score discretization. Smaller $p^{\max}_{K}$ corresponds to mass being spread more evenly throughout the bins. For well-behaved score functions, we expect $p^{\max}_{K}$ to scale as $1/K$. Indeed, if the scores have any continuous density on $[0, 1]$ bounded above and we take uniformly spaced bins, then $p^{\max}_{K} = \mathcal{O}(1/K)$. In terms of $p^{\max}_{K}$, we have the following upper bound.
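As a quick sanity check on this scaling, the maximum bin mass can be estimated directly from a sample of scores; in the snippet below the Beta(2, 5) score distribution is an arbitrary stand-in for a smooth score density bounded above.

import numpy as np

def estimated_max_bin_mass(scores, K):
    """Estimate the largest probability mass placed on any one of K uniform bins in [0, 1]."""
    counts, _ = np.histogram(scores, bins=K, range=(0.0, 1.0))
    return counts.max() / len(scores)

rng = np.random.default_rng(0)
scores = rng.beta(2.0, 5.0, size=100_000)
for K in (10, 100, 1000):
    print(K, estimated_max_bin_mass(scores, K), "vs 1 / K =", 1.0 / K)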
If we further assume a weak regularity condition on the scores, then by balancing the rates in the expression above we arrive at an explicit upper bound.
We emphasize that the assumptions on the score distribution are only needed to prove the upper bound; the coverage lower bound holds for any distribution. In any case, these assumptions are very weak, essentially requiring only that the score distribution contains no point masses. In fact, this requirement could even be enforced ex post facto by adding a small amount of tiebreaking noise, in which case we would need no restrictions on the input distribution of scores whatsoever.
The upper bound answers an important practical question: how many bins should we take? If $K$ is too small, then the histogram only coarsely approximates the empirical distribution of the scores. On the other hand, if $K$ is too large, then the histogram is accurate, but the private quantile in (3) can grow as well. This tension can be observed in the terms in Theorem 3 that have a dependence on $K$. Corollary 1 suggests that the correct balance—which leads to minimal excess coverage—is to take $K$ growing roughly in proportion to $\varepsilon n$. In practice, because the dependence of $\tilde{\alpha}$ on $K$ is only logarithmic, the optimal $K$ is often very large.
This upper bound also gives insight into an important theoretical question: what is the cost of privacy in conformal prediction? In nonprivate conformal prediction, the upper bound is $1 - \alpha + \frac{1}{n + 1}$ (J. Lei et al., 2018). In private conformal prediction, we achieve an upper bound that exceeds $1 - \alpha$ by a term of order $\frac{1}{\varepsilon n}$ up to logarithmic factors, a relatively modest cost incurred by privacy-preserving calibration.
We now turn to an empirical evaluation of differentially private conformal prediction for image classification problems. In this setting, each image has a single unique class label estimated by a predictive model $\hat{f}$. We seek to create private prediction sets, $\widehat{\mathcal{C}}(x)$, achieving coverage as in Equation (1), using the following score function:
\[
s(x, y) \;=\; 1 - \hat{f}(x)_y,
\]
as in Sadinle et al. (2019). This section evaluates the prediction sets generated by Algorithm 3 by quantifying the cost of privacy and the effects of the model, number of calibration points, and number of bins used in our procedure. We use the CIFAR-10 data set (Krizhevsky & Hinton, 2009) wherever we require a privately trained neural network. Otherwise, we use a nonprivate model on the ImageNet data set (Deng et al., 2009), to investigate the performance of our procedure in a more challenging setting with a large number of possible labels. Except where otherwise mentioned, we use an automatically chosen number of uniformly spaced bins to construct the privatized CDF. Appendix C describes the algorithm for choosing an approximately optimal value of $K$ when the conformal scores are roughly uniform, based on fixed values of $n$, $\varepsilon$, and $\alpha$. We finish the section by providing private prediction sets for diagnosing viral pneumonia on the CoronaHack data set (Pérez et al., 2020). The reader can reproduce the experiments exactly using our public GitHub repository.
We would like to disentangle the effects of private conformal prediction from those of private model training. To that end, we report the coverage and set sizes of the following four procedures: private conformal prediction with a private model, nonprivate conformal prediction with a private model, private conformal prediction with a nonprivate model, and nonprivate conformal prediction with a nonprivate model. The nonprivate model and private model are both the same stock convolutional architecture from the Opacus library. The private model is trained with private SGD (Abadi et al., 2016), as implemented in the Opacus library, with fixed privacy parameters $(\varepsilon, \delta)$. We used the suggested private model training parameters from the Opacus library (see Appendix C), as our work does not aim to improve private model training. The nonprivate model’s accuracy was significantly higher than that of the private model.
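For orientation, a private-SGD training loop with the Opacus 1.x-style API typically looks like the sketch below; the hyperparameters, privacy budget, and training loop are illustrative placeholders rather than the exact settings of the Opacus CIFAR-10 example that we used (see Appendix C).

from torch import nn, optim
from opacus import PrivacyEngine

def train_private_model(model, train_loader, epochs=20, target_epsilon=8.0,
                        target_delta=1e-5, max_grad_norm=1.0, lr=0.1):
    """Sketch of DP-SGD training with Opacus; all hyperparameters are illustrative only."""
    optimizer = optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    privacy_engine = PrivacyEngine()
    # Wrap the model, optimizer, and loader so that per-sample gradients are clipped and noised.
    model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
        module=model,
        optimizer=optimizer,
        data_loader=train_loader,
        epochs=epochs,
        target_epsilon=target_epsilon,
        target_delta=target_delta,
        max_grad_norm=max_grad_norm,
    )
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model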
Figure 4 shows histograms of the coverages and set sizes of these procedures over 1,000 random splits of the CIFAR-10 validation set, at a fixed confidence level $1 - \alpha$. Notably, the results show that the price of private conformal prediction is very low, as evidenced by the minuscule increase in set size caused by private conformal prediction. However, private model training causes a larger set size due to the private model’s comparatively poor performance. Note that a user desiring a fully private pipeline will use the procedure in the bottom right quadrant of the plot.
Here we probe the performance of private prediction sets as the number of uniformly spaced bins $K$ in our procedure changes. Based on our theoretical results, $K$ should be on the order of $\varepsilon n$, with the exact number dependent on the underlying model and the choices of $n$, $\varepsilon$, and $\alpha$. A too-small choice of $K$ coarsely quantizes the scores, so the calibration step (Algorithm 3) may be forced to round up to a very conservative private quantile. A too-large choice of $K$ increases the logarithmic term in (3). The optimal choice of $K$ balances these two factors.
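Using the illustrative private_quantile sketch given after Algorithm 2 (ours, not the paper's exact routine), this tradeoff can be eyeballed by sweeping $K$ and comparing the private cutoff to the nonprivate quantile; the values of $n$, $\varepsilon$, and the level below are arbitrary choices for illustration.

import numpy as np

# Assumes the private_quantile sketch defined earlier is in scope.
rng = np.random.default_rng(1)
n, epsilon, level = 5000, 2.0, 0.9
scores = rng.uniform(size=n)               # roughly uniform conformity scores
nonprivate = np.quantile(scores, level)

for K in (5, 50, 500, 5000, 50000):
    # Average over repetitions to smooth out the mechanism's randomness.
    cutoffs = [private_quantile(scores, level, epsilon, K=K, rng=rng) for _ in range(200)]
    print(f"K={K:6d}  mean private cutoff={np.mean(cutoffs):.3f}  nonprivate={nonprivate:.3f}")

Very coarse binning forces the cutoff up to a conservative bin edge, while the degradation from very fine binning is only logarithmic, consistent with the discussion above.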
To demonstrate this tradeoff, we performed experiments on ImageNet. We used a nonprivate, pretrained ResNet-152 from the torchvision repository as the base model. Figure 5 shows the coverage and set size of private prediction sets over 100 random splits of ImageNet’s validation set for several choices of $K$; we used a fixed number of calibration points and evaluated on the remaining images. The experimental results suggest that intermediate values of $K$ work comparatively well, and that our method is relatively insensitive to the number of bins over several orders of magnitude.
Next we quantify how the coverage changes with the privacy parameter $\varepsilon$. We used the same numbers of calibration and evaluation points as in Experiment 4.3. For each value of $\varepsilon$ we choose a different value of $K$. Figure 6 shows the coverage and set size of private prediction sets over 100 splits of ImageNet’s validation set for several choices of $\varepsilon$. As $\varepsilon$ grows, the procedure becomes less conservative. Overall the procedure exhibits little sensitivity to $\varepsilon$.
Next we show results on the CoronaHack data set, a public chest X-ray data set containing X-rays labeled as normal, viral pneumonia (primarily COVID-19), or bacterial pneumonia. Using the training pairs over several epochs, we (nonprivately) fine-tuned the last layer of a pretrained ResNet-18 from torchvision to predict one of the three diagnoses. The private conformal calibration procedure saw a further set of held-out examples, and we used the remaining examples for validation. Figure 7 plots the coverage and set size of this procedure over different train/calibration/validation splits of the data set, and Figure 2 shows selected examples of these sets.
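The last-layer fine-tuning pattern described here can be sketched as follows; the optimizer, learning rate, and number of epochs are placeholders, and only the general recipe (freeze the pretrained backbone, replace the final fully connected layer with a three-way classifier) reflects the procedure above.

from torch import nn, optim
from torchvision import models

def build_three_class_resnet18(num_classes=3):
    """Pretrained ResNet-18 with the backbone frozen and a new final classifier."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new layer is trainable
    return model

def finetune_last_layer(model, train_loader, epochs=5, lr=1e-3):
    """Fine-tune only the final layer on batches of (image, label) pairs."""
    optimizer = optim.Adam(model.fc.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model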
We introduce a method to produce differentially private prediction sets that contain the true response with a user-specified probability by blending split conformal prediction with differentially private quantile computation. The primary challenge we resolve in this work is simultaneously satisfying the coverage property and the privacy property, which requires a careful choice of the conformal score threshold to account for the added privacy noise. Our corresponding upper bound shows that the coverage does not greatly exceed the nominal level $1 - \alpha$, meaning that our procedure is not too conservative. Moreover, our upper bound gives insight into the price of privacy in conformal prediction: the excess coverage scales roughly as $\frac{1}{\varepsilon n}$, up to logarithmic factors, compared to $\frac{1}{n}$ for nonprivate conformal prediction, a mild decrease in efficiency. This is confirmed in our experiments, where we show that there is little difference between private and nonprivate conformal prediction when using the same predictive model. We also observe the familiar phenomenon that there is a substantial decrease in accuracy for private model fitting compared to nonprivate model fitting. We conclude that the cost of privacy lies primarily in the model fitting—private calibration has a comparatively minor effect on performance. We also note that any improvement in private model training would immediately translate to smaller prediction sets returned by our method. In sum, we view private conformal prediction as an appealing method for uncertainty quantification with differentially private models.
Anastasios Nikolas Angelopoulos, Stephen Bates, Tijana Zrnic, and Michael I. Jordan have no financial or non-financial disclosures to share for this article.
Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., & Zhang, L. (2016). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security (pp. 308–318). https://doi.org/10.1145/2976749.2978318
Abowd, J. M. (2018). The U.S. census bureau adopts differential privacy. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining (p. 2867). https://doi.org/10.1145/3219819.3226070
Angelopoulos, A. N., & Bates, S. (2021). A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv. https://doi.org/10.48550/arXiv.2107.07511
Angelopoulos, A. N., Bates, S., Malik, J., & Jordan, M. I. (2020). Uncertainty sets for image classifiers using conformal prediction. arXiv. https://doi.org/10.48550/arXiv.2009.14193
Barber, R. F., Candès, E. J., Ramdas, A., & Tibshirani, R. J. (2019). The limits of distribution-free conditional predictive inference. Information and Inference: A Journal of the IMA, 10(2), 455–482. https://doi.org/10.1093/imaiai/iaaa017
Barber, R. F., Candès, E. J., Ramdas, A., Tibshirani, R. J. (2021). Predictive inference with the jackknife+. Annals of Statistics, 49(1), 486–507. https://doi.org/10.1214/20-AOS1965
Bassily, R., Smith, A., & Thakurta, A. (2014). Private empirical risk minimization: Efficient algorithms and tight error bounds. In 2014 IEEE 55th annual symposium on foundations of computer science (pp. 464–473). https://doi.org/10.1109/FOCS.2014.56
Bates, S., Angelopoulos, A., Lei, L., Malik, J., & Jordan, M. I. (2021). Distribution-free, risk-controlling prediction sets. arXiv. https://doi.org/10.48550/arXiv.2101.02703
Bittau, A., Erlingsson, Ú., Maniatis, P., Mironov, I., Raghunathan, A., Lie, D., … Seefeld, B. (2017). Prochlo: Strong privacy for analytics in the crowd. In Proceedings of the 26th symposium on operating systems principles (pp. 441–459). https://doi.org/10.1145/3132747.3132769
Bun, M., Steinke, T., & Ullman, J. (2017). Make up your mind: The price of online queries in differential privacy. In Proceedings of the 28th annual ACM-SIAM symposium on discrete algorithms (pp. 1306–1325).
Cauchois, M., Gupta, S., Ali, A., & Duchi, J. C. (2020). Robust validation: Confident predictions even when distributions shift. arXiv. https://doi.org/10.48550/arXiv.2008.04267
Cauchois, M., Gupta, S., & Duchi, J. (2020). Knowing what you know: Valid and validated confidence sets in multiclass and multilabel prediction. arXiv. https://doi.org/10.48550/arXiv.2004.10181
Chaudhuri, K., Monteleoni, C., & Sarwate, A. D. (2011). Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(29), 1069–1109. https://jmlr.org/papers/v12/chaudhuri11a.html
del Coz, J. J., Díez, J., & Bahamonde, A. (2009). Learning nondeterministic classifiers. Journal of Machine Learning Research, 10(79), 2273–2293. http://jmlr.org/papers/v10/delcoz09a.html
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248–255). https://doi.org/10.1109/CVPR.2009.5206848
Differential Privacy Team Apple. (2017). Learning with privacy at scale. Apple machine learning research. https://machinelearning.apple.com/research/learning-with-privacy-at-scale
Ding, B., Kulkarni, J., & Yekhanin, S. (2017). Collecting telemetry data privately. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017) (pp. 3571–3580). https://papers.nips.cc/paper/2017/hash/253614bbac999b38b5b60cae531c4969-Abstract.html
Dwork, C. (2019). Differential privacy and the US census. In Proceedings of the 38th ACM SIGMOD-SIGACT-SIGAI symposium on principles of database systems (p. 1). https://doi.org/10.1145/3294052.3322188
Dwork, C., McSherry, F., Nissim, K., & Smith, A. (2006). Calibrating noise to sensitivity in private data analysis. In S. Halevi, & T. Rabin (Eds.), Theory of cryptography: Third theory of cryptography conference, TCC 2006, New York, NY, USA, March 4-7, 2006. Proceedings (pp. 265–284). https://doi.org/10.1007/11681878_14
Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4), 211–407. https://doi.org/10.1561/0400000042
Erlingsson, Ú., Pihur, V., & Korolova, A. (2014). Rappor: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC conference on computer and communications security (pp. 1054–1067). https://doi.org/10.1145/2660267.2660348
Feldman, V., & Steinke, T. (2017). Generalization for adaptively-chosen estimators via stable median. In Proceedings of Machine Learning Research: Vol. 65. Proceedings of the 2017 Conference on Learning Theory (pp. 728–757). https://proceedings.mlr.press/v65/feldman17a.html
Gaboardi, M., Rogers, R., & Sheffet, O. (2019). Locally private mean estimation: Z-test and tight confidence intervals. In Proceedings of Machine Learning Research: Vol. 89. The 22nd international conference on artificial intelligence and statistics (pp. 2545–2554). http://proceedings.mlr.press/v89/gaboardi19a.html
Grycko, E. (1993). Classification with set-valued decision functions. In O. Opitz, B. Lausen, & R. Klar (Eds.), Information and classification: Concepts, methods and applications (pp. 218–224). https://doi.org/10.1007/978-3-642-50974-2_22
Guan, L. (2020). Conformal prediction with localization. arXiv. https://doi.org/10.48550/arXiv.1908.08558
Guan, L., & Tibshirani, R. (2019). Prediction and outlier detection in classification problems. arXiv. https://doi.org/10.48550/arXiv.1905.04396
Gupta, V., Jung, C., Noarov, G., Pai, M. M., & Roth, A. (2021). Online multivalid learning: Means, moments, and prediction intervals. arXiv. https://doi.org/10.48550/arXiv.2101.01739
Hechtlinger, Y., Póczos, B., & Wasserman, L. (2018). Cautious deep learning. arXiv. https://doi.org/10.48550/arXiv.1805.09460
Hu, X., & Lei, J. (2020). A distribution-free test of covariate shift using conformal prediction. arXiv. https://doi.org/10.48550/arXiv.2010.07147
Izbicki, R., Shimizu, G. T., & Stern, R. B. (2019). Flexible distribution-free conditional predictive bands using density estimators. arXiv. https://doi.org/10.48550/arXiv.1910.05575
Jung, C., Lee, C., Pai, M. M., Roth, A., & Vohra, R. (2020). Moment multicalibration for uncertainty estimation. arXiv. https://doi.org/10.48550/arXiv.2008.08037
Karwa, V., & Vadhan, S. (2017). Finite sample differentially private confidence intervals. arXiv. https://doi.org/10.48550/arXiv.1711.03908
Krishnamoorthy, K., & Mathew, T. (2009). Statistical tolerance regions: Theory, applications, and computation. Wiley.
Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images [Technical Report]. The University of Toronto. http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf
Lei, J. (2011). Differentially private m-estimators. In NIPS'11: Proceedings of the 24th International Conference on Neural Information Processing Systems (pp. 361–369). https://papers.nips.cc/paper/2011/hash/f718499c1c8cef6730f9fd03c8125cab-Abstract.html
Lei, J. (2014). Classification with confidence. Biometrika, 101(4), 755–769. https://doi.org/10.1093/biomet/asu038
Lei, J., G’Sell, M., Rinaldo, A., Tibshirani, R. J., & Wasserman, L. (2018). Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523), 1094–1111. https://doi.org/10.1080/01621459.2017.1307116
Lei, J., Rinaldo, A., & Wasserman, L. (2015). A conformal prediction approach to explore functional data. Annals of Mathematics and Artificial Intelligence, 74(1–2), 29–43. https://doi.org/10.1007/s10472-013-9366-6
Lei, L., & Candès, E. J. (2020). Conformal inference of counterfactuals and individual treatment effects. arXiv. https://doi.org/10.48550/arXiv.2006.06138
McSherry, F., & Talwar, K. (2007). Mechanism design via differential privacy. In 48th annual IEEE symposium on foundations of computer science (pp. 94–103). https://doi.org/10.1109/FOCS.2007.66
Mortier, T., Wydmuch, M., Dembczyński, K., Hüllermeier, E., & Waegeman, W. (2020). Efficient set-valued prediction in multi-class classification. arXiv. https://doi.org/10.48550/arXiv.1906.08129
Neel, S., Roth, A., Vietri, G., & Wu, S. (2020). Oracle efficient private non-convex optimization. In Proceedings of Machine Learning Research: Vol. 119. Proceedings of the 37th International Conference on Machine Learning (pp. 7243–7252). http://proceedings.mlr.press/v119/neel20a.html
Papadopoulos, H., Proedrou, K., Vovk, V., & Gammerman, A. (2002). Inductive confidence machines for regression. In Proceedings of the 13th European Conference on Machine Learning (pp. 345–356). https://doi.org/10.1007/3-540-36755-1_29
Park, S., Bastani, O., Matni, N., & Lee, I. (2020). PAC confidence sets for deep neural networks via calibrated prediction [paper presentation]. The 2020 International Conference on Learning Representations. Retrieved from https://openreview.net/forum?id=BJxVI04YvB
Pérez, J. C., de Blas Pérez, C., Alvarez, F. L., & Contreras, J. M. C. (2020). Databiology Lab CORONAHACK: Collection of public COVID-19 data. bioRxiv. https://doi.org/10.1101/2020.10.22.328864
Romano, Y., Patterson, E., & Candès, E. (2019). Conformalized quantile regression. In Proceedings of the 33rd International Conference on Neural Information Processing Systems (pp. 3543–3553). https://papers.nips.cc/paper/2019/hash/5103c3584b063c431bd1268e9b5e76fb-Abstract.html
Romano, Y., Sesia, M., & Candès, E. J. (2020). Classification with valid and adaptive coverage. arXiv. https://doi.org/10.48550/arXiv.2006.02544
Sadinle, M., Lei, J., & Wasserman, L. (2019). Least ambiguous set-valued classifiers with bounded error levels. Journal of the American Statistical Association, 114(525), 223–234. https://doi.org/10.1080/01621459.2017.1395341
Shafer, G., & Vovk, V. (2008). A tutorial on conformal prediction. Journal of Machine Learning Research, 9(12), 371–421. https://jmlr.org/papers/v9/shafer08a.html
Sheffet, O. (2017). Differentially private ordinary least squares. In Proceedings of Machine Learning Research: Vol. 70. Proceedings of the 34th International Conference on Machine Learning (pp. 3105–3114). https://proceedings.mlr.press/v70/sheffet17a.html
Smith, A. (2011). Privacy-preserving statistical estimation with optimal convergence rates. In Proceedings of the forty-third annual ACM symposium on theory of computing (pp. 813–822). https://doi.org/10.1145/1993636.1993743
Tibshirani, R. J., Barber, R. F., Candès, E., & Ramdas, A. (2019). Conformal prediction under covariate shift. In Proceedings of the 33rd International Conference on Neural Information Processing Systems (pp. 2530–2540). https://papers.nips.cc/paper/2019/hash/8fb21ee7a2207526da55a679f0332de2-Abstract.html
Tukey, J. W. (1947). Non-parametric estimation II. statistically equivalent blocks and tolerance regions—the continuous case. Annals of Mathematical Statistics, 18(4), 529–539. https://doi.org/10.1214/aoms/1177730343
Vovk, V. (2012). Conditional validity of inductive conformal predictors. In Proceedings of Machine Learning Research: Vol. 25. Proceedings of the Asian conference on machine learning (pp. 475–490). https://proceedings.mlr.press/v25/vovk12.html
Vovk, V. (2015). Cross-conformal predictors. Annals of Mathematics and Artificial Intelligence, 74(1–2), 9–28. https://doi.org/10.1007/s10472-013-9368-4
Vovk, V., Gammerman, A., & Saunders, C. (1999). Machine-learning applications of algorithmic randomness. In Proceedings of the Sixteenth International Conference on Machine Learning (pp. 444–453).
Vovk, V., Gammerman, A., & Shafer, G. (2005). Algorithmic Learning in a Random World. Springer. https://doi.org/10.1007/b106715
Vovk, V., Petej, I., Toccaceli, P., Gammerman, A., Ahlberg, E., & Carlsson, L. (2020). Conformal calibrators. In Proceedings of Machine Learning Research: Vol. 128. Proceedings of the Ninth Symposium on Conformal and Probabilistic Prediction and Applications (pp. 84–99). https://proceedings.mlr.press/v128/vovk20a.html
Vovk, V., Shen, J., Manokhin, V., & Xie, M.-g. (2017). Nonparametric predictive distributions based on conformal prediction. Machine Learning, 108(3), 445–474. https://doi.org/10.1007/s10994-018-5755-8
Wald, A. (1943). An extension of Wilks’ method for setting tolerance limits. Annals of Mathematical Statistics, 14(1), 45–55. https://doi.org/10.1214/aoms/1177731491
Wang, Y., Kifer, D., & Lee, J. (2019). Differentially private confidence intervals for empirical risk minimization. Journal of Privacy and Confidentiality, 9(1). https://doi.org/10.29012/jpc.660
Wilks, S. S. (1941). Determination of sample sizes for setting tolerance limits. Annals of Mathematical Statistics, 12(1), 91–96. https://doi.org/10.1214/aoms/1177731788
Wilks, S. S. (1942). Statistical prediction with special reference to the problem of tolerance limits. Annals of Mathematical Statistics, 13(4), 400–409. https://doi.org/10.1214/aoms/1177731537
Xu, J., Zhang, Z., Xiao, X., Yang, Y., Yu, G., & Winslett, M. (2013). Differentially private histogram publication. The VLDB Journal, 22(6), 797–822. https://doi.org/10.1007/s00778-013-0309-y
We start with a result about the error of the private quantile mechanism, stated in Algorithm 2. The following is an extension of the analogous result for the private median due to Feldman and Steinke (2017).
and
Proof. By the standard utility guarantee for the exponential mechanism (McSherry & Talwar, 2007) (e.g., Corollary 3.12 in (Dwork & Roth, 2014)), we have
First we argue that . Let . Then, trivially. Furthermore, by the definition of , since is the first point at which the cumulative fraction of scores less than or equal to exceeds . Since we have identified a bin where , we can conclude that .
Going back to Equation 4, we have that with probability at least ,
Similarly,
Next, we package some classical facts about the distribution of order statistics in a form helpful for analyzing conformal prediction.
Proof. Since we take by definition, then that implies , where denotes the -th nondecreasing order statistic of . By monotonicity of , we further have that is identical to the -th nondecreasing order statistic of . By a standard argument, the samples are super-uniform, that is, for all . In other words, they are stochastically larger than a uniform distribution on , and thus their -th order statistic is stochastically lower bounded by the -th order statistic of a uniform distribution, which follows the distribution. This completes the proof of the lower bound. For the upper bound, we use the fact that , and so are stochastically dominated by , where are i.i.d. uniform on . Their -th order statistic is distributed as , which completes the proof.
First we introduce some notation. By $\tilde{F}$ we will denote the discretized CDF of the scores; in particular, for any $t \in [0, 1]$,
\[
\tilde{F}(t) \;=\; \mathbb{P}\left([s(X, Y)] \le t\right).
\]
Here, by $[s]$ we denote the discretized version of $s$, where we set $[s] = e_j$ if $s \in B_j$. We also let $\hat{F}_n$ denote the empirical distribution of the discretized scores:
\[
\hat{F}_n(t) \;=\; \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\left\{[s_i] \le t\right\}.
\]
By convention, we let $\tilde{F}^{-1}$ denote the left-continuous inverse of $\tilde{F}$, that is, $\tilde{F}^{-1}(t) = \inf\{s : \tilde{F}(s) \ge t\}$, and we similarly define $\hat{F}_n^{-1}$.
We can write
Denote the event , and note that by Lemma 1 and the fact that , . By splitting up the analysis depending on , we obtain the following:
where the final inequality follows by the definition of . Thus, it suffices to show that
Let . Then, by Lemma 2,
so
By the definition of , we see that
holds, which implies Equation 5 and thus completes the proof.
We adopt the definitions of , from Theorem 2, and define as the event
which by Lemma 1 has probability at least . By a similar reasoning as in Theorem 2, we obtain the following:
where the final inequality follows by the definition of .
Let . By Lemma 2, we have
so
By the definition of , we see that
Putting together Equations (6), (7), and (8) completes the proof.
Choosing $K$ and $\gamma$. Algorithm 5 gives automatic choices of the optimal number of uniformly spaced bins, $K^*$, and the tuning parameter $\gamma^*$ that work well for approximately uniformly distributed scores. In a moment, we will show how to find the optimal value $\gamma^*$ for a fixed value of $K$. Once $\gamma^*$ gets chosen, we will simulate uniformly distributed scores to choose the $K$ value that results in the best quantile for specific, pre-determined values of $n$, $\varepsilon$, and $\alpha$. In practice, $K$ can be chosen from a relatively coarse grid of, say, 50 values logarithmically spaced over several orders of magnitude.
We start choosing the optimal value by solving for the zeros of the derivative , leading to the quadratic expression,
Letting be the roots of (9), we can then choose the optimal value as
where the number 1e-12 takes care of the case that both roots lie outside the interval .
input: number of calibration points $n$, privacy level $\varepsilon$, confidence level $1 - \alpha$
output: $K^*$, $\gamma^*$
Private training procedure. We used the Opacus library with the default parameter choices included in the CIFAR-10 example code. The only difference in the nonprivate model training is the use of the --disable-dp flag, turning off the added noise but preserving all other settings.
©2022 Anastasios Nikolas Angelopoulos, Stephen Bates, Tijana Zrnic, and Michael I. Jordan. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.