The thesis that agents should calibrate their beliefs in the face of higher-order evidence—i.e., should adjust their first-order beliefs in response to evidence suggesting that the reasoning underlying those beliefs is faulty—is sometimes thought to be in tension with Bayesian approaches to belief updating: in order to obey Bayesian norms, it’s claimed, agents must remain steadfast in the face of higher-order evidence. But I argue that this claim is incorrect. In particular, I motivate a minimal constraint on any reasonable treatment of the evolution of self-locating beliefs over time, and I show that calibrationism is compatible with any generalized Bayesian approach that respects this constraint. I then use this result to argue that remaining steadfast isn’t the response to higher-order evidence that maximizes expected accuracy.