Question: Given the inverted intensity from Tier V.B, when will the next emission event happen, and what’s the probability of an event during a specified window?
This is the operational layer — what an LDAR (Leak Detection and Repair) crew or a satellite-tasking dispatcher actually consumes. The full derivations of each metric live in methane_pod/notebooks/08_persistency; this page summarises the metrics and how they slot into the plumax API.
The four operational metrics
1. Expected wait time

How long after the query time $t_0$ until the next event? For an inhomogeneous Poisson process with intensity $\lambda(t)$,

$$\mathbb{E}[T_{\text{wait}} \mid t_0] = \int_0^\infty \exp\!\left(-\int_{t_0}^{t_0+u} \lambda(s)\,ds\right) du,$$

which for a diurnal source is vastly different at noon vs. midnight.

Operational use. Dispatch decisions: arrive during a high-λ window and the next event is imminent (worth waiting); arrive during a low-λ window and you’d waste hours. Drives MARS-style dispatch suppression during dormant cycles.
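As a minimal numerical sketch (plain NumPy, not the plumax API; `expected_wait` and its arguments are illustrative), the survival integral can be truncated at a horizon and evaluated by trapezoidal quadrature:

```python
import numpy as np

def expected_wait(lam, t0, horizon=200.0, n=4000):
    """E[T_wait | t0]: integral over u of the survival function
    exp(-cumulative hazard on [t0, t0+u]), truncated at `horizon`."""
    u = np.linspace(0.0, horizon, n)
    lam_vals = lam(t0 + u)
    # Running trapezoid rule for the cumulative hazard Λ(u)
    steps = (lam_vals[1:] + lam_vals[:-1]) / 2 * np.diff(u)
    cum_hazard = np.concatenate([[0.0], np.cumsum(steps)])
    survival = np.exp(-cum_hazard)
    # Trapezoid rule again for the outer integral of the survival curve
    return float(np.sum((survival[1:] + survival[:-1]) / 2 * np.diff(u)))

# Sanity check: constant λ recovers the exponential answer E[T_wait] = 1/λ
w = expected_wait(lambda t: np.full_like(t, 0.5), t0=0.0)  # ≈ 2.0 hours
```

The truncation horizon must be long enough that the survival curve has effectively reached zero; for a diurnal λ a few periods usually suffices.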
2. Probability of occurrence

What’s the chance of at least one event in $[t_1, t_2]$?

$$P\big(N(t_1, t_2) \ge 1\big) = 1 - \exp\!\left(-\int_{t_1}^{t_2} \lambda(s)\,ds\right)$$

Operational use. “Wrench-turning” probability. If a maintenance window is 4 hours, what’s the chance the leak shows itself during that window? Drives whether to schedule the visit.
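A corresponding sketch for the occurrence probability (again plain NumPy; the function name and parameter values are illustrative, not the plumax API):

```python
import numpy as np

def occurrence_prob(lam, t1, t2, n=2000):
    """P(at least one event in [t1, t2]) = 1 - exp(-integral of λ)."""
    s = np.linspace(t1, t2, n)
    vals = lam(s)
    integral = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(s))
    return float(1.0 - np.exp(-integral))

# Constant λ = 0.25 events/h over a 4 h maintenance window:
p_win = occurrence_prob(lambda t: np.full_like(t, 0.25), 8.0, 12.0)
# integral = 1.0, so p_win = 1 - e^{-1} ≈ 0.632
```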
3. Conditional intensity given prior detection

For a source with a known recent detection at $t_d$, what’s the posterior intensity going forward? For Poisson processes (no memory): unchanged. For Hawkes / self-exciting processes: bumped,

$$\lambda(t \mid t_d) = \mu(t) + \alpha\, e^{-\beta (t - t_d)}, \qquad t > t_d,$$

which captures the empirical observation that super-emitters “cluster”.

Operational use. Prioritisation: a source with a recent detection is more likely to repeat-emit in the next 24 h. Re-task a high-resolution satellite (GHGSat: GHGSat Inc., 2016; Carbon Mapper: Carbon Mapper, 2024) on top of a TROPOMI alert (Veefkind et al., 2012).
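The exponential-kernel bump can be sketched directly; this is a toy Hawkes conditional intensity with illustrative μ, α, β values, not the fitted equinox module:

```python
import numpy as np

def hawkes_intensity(t, mu, alpha, beta, detections):
    """λ(t | history) = μ + Σ_{t_d < t} α exp(-β (t - t_d)):
    constant baseline plus an exponentially decaying bump per detection."""
    t = np.asarray(t, dtype=float)
    bump = sum(alpha * np.exp(-beta * (t - td)) * (t > td) for td in detections)
    return mu + bump

# A detection 6 h ago roughly doubles the baseline rate here:
lam_now = hawkes_intensity(24.0, mu=0.1, alpha=0.2, beta=0.12, detections=[18.0])
```

The `(t > td)` mask keeps the kernel causal, so detections in the future of `t` contribute nothing.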
4. Cumulative event count and credible bounds

Expected number of events in $[t_1, t_2]$,

$$\mathbb{E}[N(t_1, t_2)] = \int_{t_1}^{t_2} \lambda(s)\,ds,$$

with a credible interval from the posterior on λ. (Homogeneous case: $\mathbb{E}[N] = \lambda\,(t_2 - t_1)$.)
Operational use. Annual reporting, regulatory compliance. “How many emission events should we expect this year at this facility class, with 95% credible interval?”
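Propagating a posterior on λ through the count metric is a one-liner in the homogeneous case. A sketch with a toy Gamma posterior standing in for the Tier V.B fit (all names and values here are illustrative):

```python
import numpy as np

def cumulative_count_ci(lam_samples, t1, t2, level=0.95):
    """Posterior samples of E[N(t1, t2)] = λ (t2 - t1) for a homogeneous
    process, summarised by mean and an equal-tailed credible interval."""
    counts = np.asarray(lam_samples) * (t2 - t1)
    lo, hi = np.quantile(counts, [(1 - level) / 2, (1 + level) / 2])
    return counts.mean(), (lo, hi)

rng = np.random.default_rng(0)
lam_post = rng.gamma(shape=20.0, scale=0.01, size=4000)  # toy posterior on λ [1/h]
mean_N, (lo, hi) = cumulative_count_ci(lam_post, 0.0, 8760.0)  # one year in hours
```

Because the metric is applied sample-by-sample, the interval reflects parameter uncertainty only; Poisson counting noise would widen it further if the question is about realised rather than expected counts.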
API shape
A thin wrapper around methane_pod.intensity:
```python
from plume_simulation.population.persistency import (
    expected_wait_time,
    occurrence_probability,
    cumulative_count,
    next_event_quantile,
)

# Inputs: posterior samples of intensity parameters (from the Tier V.B fit)
# Outputs: posterior samples of the operational metric
E_wait = expected_wait_time(intensity, t0=18.0, posterior_samples=mcmc.get_samples())
# → array of shape (n_samples,) in [hours]

P_occur = occurrence_probability(intensity, t1=8.0, t2=12.0,
                                 posterior_samples=mcmc.get_samples())
# → array of shape (n_samples,) in [0, 1]
```

The metric functions take an intensity callable (any of the 13 equinox modules from methane_pod.intensity), a query window, and a posterior sample of the intensity’s parameters. They return posterior samples of the metric — full UQ propagation, no point estimates.
Module layout
Table (1): Tier V.C module layout — concern, target module, status.
| Concern | Module | Status |
|---|---|---|
| Intensity functions | methane_pod.intensity | ✓ |
| Wait-time / occurrence / cumulative metrics | plume_simulation.population.persistency | ☐ |
| Posterior-aware metric wrappers | same module | ☐ |
| Operational dashboard / report templates | out of scope for plumax; lives in plumax-deploy (future) | — |
The integral over $\lambda(s)$ in the wait-time formula is closed-form for a few intensity choices (constant, exponential decay) and otherwise needs jax.scipy.integrate or a fixed quadrature. Worth wrapping once and reusing across metrics.
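That shared integral is the cumulative hazard Λ. A minimal reusable wrapper (NumPy trapezoid here as a stand-in for the JAX version; names are illustrative):

```python
import numpy as np

def cumulative_hazard(lam, t1, t2, n=2000):
    """Λ(t1, t2) = integral of λ(s) over [t1, t2] by trapezoidal quadrature:
    the one integral shared by the wait-time, occurrence, and count metrics."""
    s = np.linspace(t1, t2, n)
    v = lam(s)
    return float(np.sum((v[1:] + v[:-1]) / 2 * np.diff(s)))

# Diurnal intensity: the metrics all reuse the same Λ
lam = lambda t: 0.1 + 0.05 * np.sin(2 * np.pi * t / 24.0)
Lam = cumulative_hazard(lam, 0.0, 24.0)      # sine averages out: Λ ≈ 2.4
p_occur = 1.0 - np.exp(-Lam)                 # probability of ≥1 event in the day
```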
Validation strategy
- Homogeneous limit. For constant λ, all four metrics have closed-form formulas; the implementation should match to machine precision.
- MC self-consistency. Sample event times from a known $\lambda(t)$ via thinning, compute the empirical wait time / occurrence frequency, and compare to the closed-form metric. Tests both the metric implementation and the simulator.
- Posterior coverage. For a synthetic source with known $\lambda(t)$, the 95% credible interval on each metric should contain the truth ~95% of the time across replicates.
- Diurnal sanity. A solar-heated tank with peak λ at 14:00 should have $\mathbb{E}[T_{\text{wait}} \mid t_0 = \text{14:00}] < \mathbb{E}[T_{\text{wait}} \mid t_0 = \text{02:00}]$. Numerical sanity check, not a formal test, but catches sign errors.
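The MC self-consistency check can be sketched with Ogata thinning (an illustrative standalone script, not a test in the repo):

```python
import numpy as np

def thinning_sample(lam, lam_max, t_end, rng):
    """Ogata thinning: sample event times on [0, t_end] from intensity λ(t).
    Candidates arrive at rate lam_max and are kept with prob λ(t)/lam_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_end:
            return np.array(times)
        if rng.uniform() < lam(t) / lam_max:
            times.append(t)

rng = np.random.default_rng(1)
lam = lambda t: 0.1 + 0.05 * np.sin(2 * np.pi * t / 24.0)   # lam_max = 0.15
hits = np.mean([thinning_sample(lam, 0.15, 24.0, rng).size > 0
                for _ in range(2000)])
# For this λ, the day-long cumulative hazard is Λ = 2.4, so the
# closed-form occurrence probability is 1 - exp(-2.4) ≈ 0.909;
# the empirical frequency `hits` should agree within MC error.
```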
References
- GHGSat Inc. (2016). GHGSat WAF-P imaging spectrometer constellation. https://www.ghgsat.com/
- Carbon Mapper. (2024). Carbon Mapper: airborne and satellite imaging spectroscopy for greenhouse gas monitoring. https://carbonmapper.org/
- Veefkind, J. P., Aben, I., McMullan, K., Förster, H., de Vries, J., Otter, G., Claas, J., Eskes, H. J., de Haan, J. F., Kleipool, Q., & others. (2012). TROPOMI on the ESA Sentinel-5 Precursor: a GMES mission for global observations of the atmospheric composition for climate, air quality and ozone layer applications. Remote Sensing of Environment, 120, 70–83.