Continuing the discussion from this week's meeting. This is not restricted to COUNTER, since histograms can also be cumulative (which is another discussion -- there is a difference between the "type" of a metric (int, histogram, etc.) and its "kind" (gauge, cumulative)).
In this model a cumulative point is a tuple (t_start, t, v), where t is the time at which value v was sampled and t_start is the time at which the timeseries had value 0.
The motivation for the start timestamp is that it completely describes the cumulative point and does not assume anything about when the last point was collected. This guards against various inaccuracies caused by the monitoring backend losing points or the monitored source restarting (or clearing its metric state to save memory), and it allows metrics to be pushed to the monitoring backend.
For example, consider a source generating counter points at timestamps 0min, 10min, 20min with values 5, 10, 15. It then crashes, restarts at 57min, and reports a point with value 17 at 60min. We don't want the backend to incorrectly assume that the counter increased from 15 to 17 over the interval [20min, 60min]. With a perfect failure detector (which of course doesn't exist) a backend could narrow that interval to [50min, 60min], but that is still less accurate than the real t_start=57min, which is easy for the source to report.
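To make this concrete, here is a minimal sketch (not from any real library; all names are made up for illustration) of how a backend could use t_start to detect the restart in the example above: when t_start changes between consecutive points, the counter was reset, so the new value counts from the new t_start rather than from the previous value.

```python
from typing import NamedTuple

class CumulativePoint(NamedTuple):
    """A cumulative point (t_start, t, v): value v sampled at time t,
    where the timeseries had value 0 at t_start. Times in minutes."""
    t_start: float
    t: float
    v: float

def increase(prev: CumulativePoint, curr: CumulativePoint) -> float:
    """Increase contributed by curr relative to prev.

    A changed t_start means the source restarted (or cleared its
    state), so curr.v counts from zero at curr.t_start -- we must
    not compute curr.v - prev.v across the reset.
    """
    if curr.t_start != prev.t_start:
        return curr.v  # reset: counter restarted from 0 at curr.t_start
    return curr.v - prev.v

# Points from the example: source crashes after 20min, restarts at 57min.
points = [
    CumulativePoint(0, 0, 5),
    CumulativePoint(0, 10, 10),
    CumulativePoint(0, 20, 15),
    CumulativePoint(57, 60, 17),  # new t_start reveals the reset
]

# Total increase: 15 from the first incarnation + 17 from the second.
total = points[0].v + sum(increase(a, b) for a, b in zip(points, points[1:]))
print(total)  # 32, not the 17 a naive last-minus-first would give
```

Note that without t_start the last two points look like an increase of 2 over 40 minutes; with it, the backend correctly attributes 17 to the interval [57min, 60min].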