I would have loved to hear the experts discuss more exactly how they compute uncertainties for their earthquake predictions, and how they account for the possibility that their models may be wrong. They mentioned at 33:19 that meteorologists often estimate uncertainty by running many simulations with slightly perturbed initial conditions and computing the fraction of runs in which the event occurs. A similar ensemble approach could probably be applied to earthquakes and would likewise spit out an uncertainty value. However, that number wouldn't be fully trustworthy on its own, because it ignores several other sources of uncertainty: how confident we are that the simulation software is correct (writing bug-free code is HARD!), how confident we are in the laws that govern earthquakes, and how confident we are that the data we collect is accurate (and accuracy, I'd imagine, is much harder to quantify than precision). I'd love to know how these researchers account for all these factors, and how (or whether) they weight them in their uncertainty calculation.
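To make the ensemble idea concrete, here's a minimal sketch in Python of what I have in mind. The `simulate` function is a made-up toy model (stress accumulating toward a rupture threshold), not anything the researchers actually use; the only point is the structure: perturb the uncertain initial condition, run the model many times, and report the fraction of runs in which the event occurs.

```python
import random

def simulate(initial_stress, threshold=1.0, steps=100, loading_rate=0.008):
    """Toy stand-in for an earthquake model: accumulate stress with noise
    and report whether it crosses a rupture threshold within the horizon.
    (Hypothetical; real forecasting models are far more elaborate.)"""
    stress = initial_stress
    for _ in range(steps):
        stress += loading_rate + random.gauss(0, 0.005)
        if stress >= threshold:
            return True  # "earthquake" occurred in this run
    return False

def ensemble_probability(best_guess_stress, n_runs=10_000, perturbation=0.05):
    """Ensemble estimate: perturb the uncertain initial condition,
    run the model many times, return the fraction of runs with an event."""
    hits = 0
    for _ in range(n_runs):
        perturbed = best_guess_stress + random.gauss(0, perturbation)
        if simulate(perturbed):
            hits += 1
    return hits / n_runs

if __name__ == "__main__":
    p = ensemble_probability(best_guess_stress=0.2)
    print(f"Estimated probability of an event within the horizon: {p:.3f}")
```

The sketch also shows the limitation I mean: the ensemble spread only captures uncertainty in the initial conditions, and says nothing about bugs in the model, errors in the underlying physics, or inaccurate measurements, which is exactly the gap I'm curious about.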
I think this is a super interesting topic to think about, especially considering that only recently have researchers moved from observing data during an earthquake to looking at what happens in the time before and after one, either to find indicators of when a quake could occur or to establish that no such precursor exists and that they need to be looking at other data once again. I think the relative certainty here is predicting where, within a general area, an earthquake could occur, while the real uncertainty is predicting when it will occur. I also wonder whether these uncertainties are due to known unknowns or to unknown unknowns in general.