Ethics Reading List
Part 1
1.1: Foundations of Ethical AI
Assessed Reading
- Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1).
Core Reading
- Kearns, M. & Roth, A. (2020). The ethical algorithm: the science of socially aware algorithm design. Oxford University Press. [Introduction chapter]
1.2: Privacy and Autonomy
Assessed Reading
- Narayanan, A., & Shmatikov, V. (2008). Robust De-anonymization of Large Sparse Datasets. 2008 IEEE Symposium on Security and Privacy (SP 2008), 111–125.
Core Reading
- Kearns, M. & Roth, A. (2020). The ethical algorithm: the science of socially aware algorithm design. Oxford University Press. [Chapter 1 - Algorithmic Privacy]
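To make the privacy-versus-accuracy trade-off concrete alongside the Kearns & Roth privacy chapter, here is a toy sketch of the Laplace mechanism from differential privacy. The dataset, query, and epsilon values are invented for illustration; this is not a production implementation.

```python
# Toy Laplace-mechanism sketch; data, query, and epsilon are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=1000)  # hypothetical sensitive dataset

def private_count(data, predicate, epsilon):
    """Answer a counting query with Laplace noise calibrated to sensitivity 1
    (adding or removing one person changes a count by at most 1)."""
    true_count = int(np.sum(predicate(data)))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
for eps in (0.1, 1.0, 10.0):
    print(eps, round(private_count(ages, lambda a: a >= 65, eps), 1))
```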
1.3: Fairness
Assessed Reading
- Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77-91). PMLR.
Core Reading
- Kearns, M. & Roth, A. (2020). The ethical algorithm: the science of socially aware algorithm design. Oxford University Press. [Chapter 2 - Algorithmic Fairness]
Supplementary Reading
- Verma, S., & Rubin, J. (2018). Fairness definitions explained. In 2018 IEEE/ACM International Workshop on Software Fairness (FairWare) (pp. 1-7). IEEE.
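As a small companion to the Verma & Rubin survey, the sketch below computes two of the group fairness definitions it covers, statistical (demographic) parity and equal opportunity. The predictions, labels, and group membership are invented purely for illustration.

```python
# Minimal fairness-metrics sketch: statistical parity and equal opportunity.
# All data below are invented for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # actual outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])  # model decisions
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in ("a", "b"):
    mask = group == g
    positive_rate = y_pred[mask].mean()            # P(decision = 1 | group)
    tpr = y_pred[mask & (y_true == 1)].mean()      # P(decision = 1 | y = 1, group)
    print(f"group {g}: positive rate = {positive_rate:.2f}, TPR = {tpr:.2f}")

# Statistical parity compares the positive rates across groups;
# equal opportunity compares the true positive rates.
```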
1.4: Value Alignment and Control
Assessed Reading
- Awad, E., et al. (2018). The moral machine experiment. Nature 563, 59-64.
Core Reading
- Gunantara, N. (2018). A review of multi-objective optimization: methods and its applications. Cogent Engineering 5:1.
1.5: Explainability and Interpretability
Assessed Reading
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144).
Core Reading
- Molnar, C. (2020). Interpretable machine learning.
- Chapter 8.1 - Partial Dependence Plots
- Chapter 9.2 - LIME
Supplementary Reading
- Recording of LIME conference presentation (in supplementary material section)
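For readers who want to try LIME alongside the Ribeiro et al. paper and Molnar's Chapter 9.2, here is a minimal sketch assuming the open-source `lime` package and scikit-learn; the dataset and classifier are illustrative only, not part of the readings.

```python
# Minimal LIME sketch: explain one prediction of a scikit-learn classifier.
# Assumes the open-source `lime` package (pip install lime).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True)

# Explain the first test instance with a locally weighted linear model.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```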
1.6: Safety, Security and Accountability
Assessed Reading
- Ameisen, E. (2020). Building machine learning powered applications: going from idea to product (First edition). O’Reilly. [Chapter 11 - Monitor and Update Models]
Core Reading
- Wilson, G., Bryan, J., Cranston, K., Kitzes, J., Nederbragt, L., & Teal, T. K. (2017). Good enough practices in scientific computing. PLoS computational biology, 13(6).
Part 2
2.1: Model Interpretation
Assessed Reading
- Guthery, F. & Bingham, R. (2007). A Primer on Interpreting Regression Models. Journal of Wildlife Management 71(3):684–692.
Core Reading
- Molnar, C. (2020). Interpretable machine learning.
- Chapter 5.1 - Linear Regression
- Chapter 5.2 - Logistic Regression
- Chapter 8.1 - Partial Dependence Plots
Supplementary Reading
- Tu, Y. K., Gunnell, D., & Gilthorpe, M. S. (2008). Simpson’s Paradox, Lord’s Paradox, and Suppression Effects are the same phenomenon–the reversal paradox. Emerging Themes in Epidemiology.
- Hastie, T., Tibshirani, R., & Friedman, J. H. (2009). The elements of statistical learning: data mining, inference, and prediction. Springer. [Sections 10.13 & 10.14 - Partial Dependence Plots.]
- pdp package homepage and documentation.
- sklearn documentation on partial dependence plots.
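To accompany the sklearn documentation listed above, a minimal partial dependence sketch (assuming scikit-learn >= 1.0 for `PartialDependenceDisplay.from_estimator`; the dataset and model are illustrative only):

```python
# Minimal partial dependence sketch with scikit-learn.
# Assumes scikit-learn >= 1.0; the dataset and model are illustrative.
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# One-way partial dependence for two features of the fitted model.
PartialDependenceDisplay.from_estimator(model, X, ["MedInc", "AveRooms"])
plt.show()
```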
2.2: Local Effects and Interactions
Assessed Reading
- Berrington de Gonzalez, A., & Cox, D. R. (2007). Interpretation of interaction: a review. Annals of Applied Statistics, 1, 371-385.
Core Reading
- Molnar, C. (2020). Interpretable machine learning.
- Chapter 8.2 - Accumulated Local Effect Plots
- Chapter 8.3 - Feature Interaction
- Goldstein et al. (2015). Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation. Journal of Computational and Graphical Statistics, 24:1, 44-65.
Supplementary Reading
- Molnar, C. (2020). Interpretable machine learning.
- Chapter 8.5 - Permutation Feature Importance
- Chapter 9.5 & 9.6 - Shapley values and SHAP
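As a companion to Molnar's Chapter 8.5, the sketch below computes permutation feature importance with scikit-learn's `permutation_importance`; the dataset and model are illustrative only.

```python
# Minimal permutation feature importance sketch with scikit-learn.
# Importance = drop in held-out score when a feature's values are shuffled.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, mean, std in zip(X_test.columns,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```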
2.3: Introduction to Causality
Assessed Reading
- Chesnaye, N. C., et al. (2022). An introduction to inverse probability of treatment weighting in observational research. Clinical Kidney Journal, 15(1), 14-20. (A short code sketch of IPTW follows this week's readings.)
Core Reading
- Huntington-Klein, N.C. (2022). The Effect: An Introduction to Research Design and Causality.
Supplementary Reading
- Cunningham, S. (2021). Causal Inference: The Mixtape.
- Chapter 9 - Difference-in-Differences.
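The sketch referenced above, in the spirit of Chesnaye et al. (2022): estimate propensity scores, form inverse probability of treatment weights, and compare weighted outcome means. The simulated data and true effect size are invented for illustration.

```python
# Minimal IPTW sketch: weight each subject by the inverse of their estimated
# probability of receiving the treatment they actually received.
# Simulated data only; true treatment effect is set to 2.0.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(60, 10, n)                                 # confounder
p_treat = 1 / (1 + np.exp(-(age - 60) / 10))                # older patients treated more
treated = rng.binomial(1, p_treat)
outcome = 2.0 * treated + 0.1 * age + rng.normal(0, 1, n)

# 1. Propensity scores from a model of treatment given the confounder.
ps = LogisticRegression().fit(age.reshape(-1, 1), treated).predict_proba(
    age.reshape(-1, 1))[:, 1]

# 2. Inverse probability of treatment weights.
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

# 3. Weighted difference in mean outcomes approximates the treatment effect.
ate = (np.average(outcome[treated == 1], weights=w[treated == 1])
       - np.average(outcome[treated == 0], weights=w[treated == 0]))
print(f"IPTW estimate of the average treatment effect: {ate:.2f}")
```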
2.4: Randomised Control Trials and A/B Testing
Assessed Reading
- Kohavi, R., Tang, D., & Xu, Y. (2020). Trustworthy online controlled experiments: a practical guide to A/B testing. Cambridge University Press. [Chapter 9 - Ethics of experimental studies in a business setting]
Core Reading
- Kendall, J. M. (2003). Designing a research project: randomised controlled trials and their principles. Emergency Medicine Journal, 20(2), 164–168.
- Kohavi, R., Tang, D., Xu, Y., Hemkens, L. G., & Ioannidis, J. (2020). Online randomized controlled experiments at scale: lessons and extensions to medicine. Trials, 21(1), 1-9.
Supplementary Reading
- Kirkwood, B. R., & Sterne, J. A. (2010). Essential medical statistics. John Wiley & Sons. [Part F - Chapter 34, particularly §34.2]
- Kohavi, R., Tang, D., & Xu, Y. (2020). Trustworthy online controlled experiments: a practical guide to A/B testing. Cambridge University Press. [Chapter 17 - Statistics of Online Controlled Experiments]
- Hahn, S. (2012). Understanding noninferiority trials. Korean Journal of Pediatrics, 55(11), 403–407.
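As a small companion to the A/B testing readings, this sketch runs a two-proportion z-test on made-up conversion counts; it assumes the `statsmodels` package and illustrates only the basic analysis, not the full experimental practice covered by Kohavi et al.

```python
# Minimal A/B test sketch: two-proportion z-test on conversion counts.
# Assumes statsmodels; the counts below are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 480]   # successes in control, treatment
visitors = [10000, 10000]  # sample size per arm

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```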
2.5: Communicating Uncertainty
Assessed Reading
- Van der Bles, A. M., et al. (2019). Communicating uncertainty about facts, numbers and science. Royal Society Open Science.
Core Reading
- Schünemann, H. J., et al. (2003). Letters, numbers, symbols and words: how to communicate grades of evidence and recommendations. Canadian Medical Association Journal.
Supplementary Reading
- Kearns, M. & Roth, A. (2020). The ethical algorithm: the science of socially aware algorithm design. Oxford University Press. [Chapter 4]
2.6: Assessment Week
Assessed Reading
- In this exercise you may write a rhetorical precis for any of the supplementary readings from weeks 1 - 5.
Part 3
3.1: Reproducibility and Robustness I
Assessed Reading
- Sculley, D. et al. (2015). Hidden Technical Debt in Machine Learning Systems. Advances in Neural Information Processing Systems, 28.
Supplementary Reading
- Video - Machine Learning, Technical Debt, and You.
3.2: Reproducibility and Robustness II
Assessed Reading
- Srivastava, N. et al. (2014). Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929–1958.
Supplementary Reading
- Hardt, M., Recht, B., & Singer, Y. (2016). Train faster, generalize better: Stability of stochastic gradient descent. Proceedings of Machine Learning Research.
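A minimal illustration of the technique in the Srivastava et al. (2014) reading, written here with PyTorch as an assumption (any framework with a dropout layer would do); the layer sizes and dropout rate are illustrative.

```python
# Minimal dropout sketch: a dropout layer in a small PyTorch model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zero 50% of activations during training
    nn.Linear(64, 2),
)

x = torch.randn(8, 20)
model.train()            # dropout active: units are dropped stochastically
print(model(x).shape)
model.eval()             # dropout disabled at inference: no units are dropped
print(model(x).shape)
```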
3.3: Homomorphic Encryption
Assessed Reading
- Chen et al. (2018). Logistic regression over encrypted data from fully homomorphic encryption. BMC Medical Genomics, 11:81.
Supplementary Reading
- Graepel, T., Lauter, K., & Naehrig, M. (2012). ML Confidential: Machine Learning on Encrypted Data. International Conference on Information Security and Cryptology, 15. [pdf]
- Iezzi, M. (2020). Practical Privacy-Preserving Data Science With Homomorphic Encryption: An Overview. arXiv preprint.
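For a first taste of computing on encrypted data, the toy sketch below uses the `phe` (python-paillier) package. Note the caveat: Paillier is only additively homomorphic, unlike the fully homomorphic schemes in the readings above, so this illustrates the general idea rather than the methods in those papers. The salary figures are invented.

```python
# Toy additively homomorphic example with the `phe` (python-paillier) package.
# Paillier supports addition of ciphertexts (and scalar multiplication) only.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

salaries = [52000, 61000, 58000]                 # invented sensitive values
encrypted = [public_key.encrypt(s) for s in salaries]

# An aggregator can sum the ciphertexts without ever seeing the raw salaries.
encrypted_total = sum(encrypted[1:], encrypted[0])
average = private_key.decrypt(encrypted_total) / len(salaries)
print(average)
```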
3.4: Federated Learning
Assessed Reading
- Li et al. (2020). Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine, 37(3), 50-60.
Core Reading
- McMahan et al. (2017). Communication-Efficient Learning of Deep Networks from Decentralized Data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1273-1282. [pdf]
Supplementary Reading
- IBM research blog. What is Federated Learning?
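A minimal sketch of the FedAvg aggregation step described in McMahan et al. (2017): the server averages client parameter vectors weighted by their local sample counts. The client parameters and sample counts below are invented, and local training is omitted.

```python
# Toy FedAvg aggregation step: weighted average of client parameters.
import numpy as np

# Hypothetical parameter vectors returned by three clients after local training.
client_weights = [np.array([0.9, 1.1]),
                  np.array([1.2, 0.8]),
                  np.array([1.0, 1.0])]
client_sizes = [100, 300, 600]   # local training examples per client

total = sum(client_sizes)
global_weights = sum((n / total) * w
                     for n, w in zip(client_sizes, client_weights))
print(global_weights)            # used as the next global model
```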
3.5: Adversarial Thinking
Assessed Reading
- Rigaki, M. & Garcia, S. (2020). A Survey of Privacy Attacks in Machine Learning. arXiv preprint.
3.6: Assessment Week
Assessed Reading
- In this exercise you may write a rhetorical precis for any of the supplementary readings from weeks 1 - 5.