Research Summary

My research interests and output are varied. As a broad binning, about half my work falls under statistical machine learning (ML) and the other half under technology policy.

Most of my recent work focuses either on applying ML to diverse problems in policy analysis or on examining pathologies in the use of ML for decision-making. The former includes identifying and developing ML tools better suited to policy or systems analyses (e.g. FCM for causal simulation of policy counterfactuals, as sketched below). The latter includes dissections of the fairness problems that arise when social institutions use ML to make decisions (see example report or my TEDx talk).
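A minimal sketch of the counterfactual-simulation idea, assuming FCM here means a fuzzy cognitive map; the toy concepts, edge weights, and sigmoid update below are illustrative rather than drawn from any specific published model. Concepts are nodes, signed edge weights encode causal influence, the activation state is iterated through a squashing function, and a counterfactual is run by clamping one concept and comparing equilibria:

```python
import numpy as np

# Toy fuzzy cognitive map: three concepts with signed causal edge weights.
# W[i, j] = influence of concept i on concept j (values are made up for illustration).
concepts = ["funding", "enforcement", "incident_rate"]
W = np.array([
    [0.0,  0.6, -0.3],   # funding boosts enforcement, reduces incidents
    [0.0,  0.0, -0.7],   # enforcement reduces incidents
    [0.4,  0.0,  0.0],   # incidents trigger more funding (feedback loop)
])

def step(state, W):
    """One FCM update: aggregate weighted causal inputs, squash to [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(state @ W)))

def run(state, W, clamp=None, iters=50):
    """Iterate the map; optionally clamp one concept's activation each step."""
    for _ in range(iters):
        state = step(state, W)
        if clamp is not None:
            idx, value = clamp
            state[idx] = value
    return state

baseline = run(np.array([0.5, 0.5, 0.5]), W)
counterfactual = run(np.array([0.5, 0.5, 0.5]), W, clamp=(0, 1.0))  # "what if funding stays high?"
print(dict(zip(concepts, baseline.round(3))))
print(dict(zip(concepts, counterfactual.round(3))))
```

Comparing the two equilibria gives a qualitative read on how the clamped intervention propagates through the causal feedback structure.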

I spent most of my doctoral and postdoctoral years on the theoretical foundations of statistical machine learning, specifically on establishing stochastic-resonance/noise-benefit phenomena in ML.

My dissertation work proved (summary chapter here) that controlled injection of noise can improve the convergence times of Expectation-Maximization (EM) algorithms. This is an example of a noise benefit, or stochastic resonance, in statistical signal processing. We call our noise-assisted improvement to the EM algorithm the Noisy Expectation-Maximization (NEM) algorithm. Many iterative statistical estimation and learning algorithms are instances of the EM algorithm, so our noise-induced convergence speed-up applies to them as well. The most notable of these is backpropagation training for neural networks (see the 2020 paper with B. Kosko and K. Audhkhasi).
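A minimal, illustrative sketch of the noise-injection idea for a 1-D two-component Gaussian mixture; the screening condition, decay schedule, and constants below are assumptions for illustration, not the exact published NEM algorithm. Candidate noise is kept only when it passes a NEM-style screening test, the surviving noise perturbs the data in the E-step, and the noise scale decays with iteration count so the procedure eventually reduces to ordinary EM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D data from a two-component Gaussian mixture.
y = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

def noisy_em_gmm(y, iters=60, noise_scale=0.5, decay=2.0):
    """EM for a 2-component GMM with NEM-style decaying noise in the E-step."""
    mu = np.array([-1.0, 1.0])                       # initial means
    sigma, pi = np.array([1.0, 1.0]), np.array([0.5, 0.5])
    for k in range(1, iters + 1):
        # Draw candidate noise; keep only samples passing a GMM-NEM-style
        # screen, n * (n - 2 * (mu_j - y)) <= 0 for every component j.
        n = rng.normal(0, noise_scale / k**decay, size=y.shape)
        ok = np.all(n[:, None] * (n[:, None] - 2 * (mu[None, :] - y[:, None])) <= 0, axis=1)
        y_noisy = y + np.where(ok, n, 0.0)

        # E-step on the noise-perturbed data.
        dens = pi * np.exp(-0.5 * ((y_noisy[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M-step uses the original (noiseless) data.
        Nk = resp.sum(axis=0)
        mu = (resp * y[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((resp * (y[:, None] - mu) ** 2).sum(axis=0) / Nk)
        pi = Nk / len(y)
    return mu, sigma, pi

print(noisy_em_gmm(y))
```

The point of the sketch is only to show where the noise enters the EM loop and how its scale is annealed; the formal convergence-speed guarantees come from the NEM theorems, not from this toy.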

I did some work on the effects of using approximate model functions (priors and likelihood functions) in Bayesian inference. I proved that Bayes' theorem produces posterior pdfs whose "approximation quality" matches that of the approximate model functions. This is the Bayesian Approximation Theorem (BAT). I also demonstrated a robust method for approximating arbitrary priors or likelihood functions of compact support, either from data or from expert contributions. The method can represent bounded closed-form model functions exactly and efficiently, so it subsumes many traditional applications of Bayesian inference.
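A small sketch of the approximate-model-function idea: a grid-based toy in which a histogram prior stands in for a data- or expert-derived approximation (it is not the approximation method described above). When the approximate prior is close to the exact prior over the compact support, the resulting posterior stays correspondingly close:

```python
import numpy as np
from scipy.stats import beta, binom

rng = np.random.default_rng(1)

# The parameter theta lives on a compact support [0, 1]; work on a grid.
theta = np.linspace(0.0, 1.0, 1001)
dtheta = theta[1] - theta[0]

# "True" closed-form prior: Beta(2, 5).
exact_prior = beta.pdf(theta, 2, 5)

# Approximate prior: a histogram density built from samples, standing in
# for a prior learned from data or elicited from experts.
samples = rng.beta(2, 5, size=5000)
hist, edges = np.histogram(samples, bins=50, range=(0, 1), density=True)
approx_prior = hist[np.clip(np.searchsorted(edges, theta) - 1, 0, len(hist) - 1)]

# Common likelihood: 7 successes in 10 Bernoulli(theta) trials.
likelihood = binom.pmf(7, 10, theta)

def posterior(prior):
    """Grid Bayes rule: multiply prior by likelihood, normalize over the support."""
    unnorm = prior * likelihood
    return unnorm / (unnorm.sum() * dtheta)

post_exact = posterior(exact_prior)
post_approx = posterior(approx_prior)

# A good prior approximation yields a correspondingly close posterior.
print("L1 gap between posteriors:", np.abs(post_exact - post_approx).sum() * dtheta)
```

The printed gap shrinks as the prior approximation improves, which is the qualitative content of the BAT in this toy setting.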

Check out my Google Scholar page for a more comprehensive sense of what I've been up to.