I recently tried to orchestrate Prometheus metrics in a Function-as-a-Service (FaaS) application, which turned out to be a bit of a harrowing experience. Here’s what I learned.
Prometheus in a FAAS World
In a “standard” architecture, you have a long-running service on some machine somewhere. That service exposes an HTTP(S) endpoint that Prometheus discovers (through some service discovery mechanism) and periodically sends GET requests to, parsing the metrics that your application generates. This “pull-based” model relies on several properties of the architecture - applications live long enough to be around when Prometheus decides to scrape them, and they maintain their own running counts in the meantime.
In a FaaS world, the long-running service doesn’t exist - functions serve one request and live only for the length of that request. They certainly don’t live long enough to wait for Prometheus to scrape them, and there’s no way (in any provider that I know of) to expose a separate metrics HTTP endpoint even if they did. So this presents a problem - we want to get metrics out of these applications for general alerting and monitoring, but they don’t fit into the paradigm that Prometheus expects.
Metric Types
To outline the use case, there are several “types” of metrics I want to track in my application:
- Counters - pretty simple, things like the number of requests we’ve served
- Gauges - numbers that can go up and down, like the amount of memory we used to service the request
- “Booleans” - encoded as a Gauge of value 1, things like version information
As a Prometheus scrape (in text exposition format), this looks like:
# HELP my_app_request_total The number of requests we've served
# TYPE my_app_request_total counter
my_app_request_total 1
# HELP my_app_memory_bytes The Number of bytes of memory we've used
# TYPE my_app_memory_bytes gauge
my_app_memory_bytes 31415.0
# HELP my_app_build_info Build and Version Information
# TYPE my_app_build_info gauge
my_app_build_info{app_version="1.1"} 1
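Generating this format by hand is simple enough. Here’s a minimal stdlib-only Python sketch (the helper name and the tuple layout are my own, not from any Prometheus client library):

```python
def render_metrics(metrics):
    """Render metrics into the Prometheus text exposition format.

    Each metric is a (name, help, type, labels, value) tuple, where
    labels is a (possibly empty) dict of label names to values."""
    lines = []
    for name, help_text, mtype, labels, value in metrics:
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        if labels:
            rendered = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
            lines.append(f"{name}{{{rendered}}} {value}")
        else:
            lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

exposition = render_metrics([
    ("my_app_request_total", "The number of requests we've served", "counter", {}, 1),
    ("my_app_memory_bytes", "The Number of bytes of memory we've used", "gauge", {}, 31415.0),
    ("my_app_build_info", "Build and Version Information", "gauge", {"app_version": "1.1"}, 1),
])
print(exposition)
```

In practice you’d use one of the official client libraries rather than formatting this yourself, but the format is simple enough that a function handles most of the work.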
Prometheus Push Gateway
The first solution we can look at is the Prometheus Push Gateway. This allows you to send your metrics in an HTTP POST to the Push Gateway, which outlives each function and is able to be scraped by Prometheus. So we can push our metrics and be done with it, right? Unfortunately not. The Push Gateway docs even call out this scenario explicitly as an example of when not to use the Push Gateway, but for completeness, the main reason we can’t use it here is that the Push Gateway doesn’t “aggregate”, so there’s no way to easily count things like the number of requests we’ve served (each function would report that it served one request, when ideally we’d sum these across functions).
If we were to push the above text exposition format twice, this is what the Push Gateway would produce:
# HELP my_app_build_info Build and Version Information
# TYPE my_app_build_info gauge
my_app_build_info{app_version="1.1",instance="",job="test"} 1
# HELP my_app_memory_bytes The Number of bytes of memory we've used
# TYPE my_app_memory_bytes gauge
my_app_memory_bytes{instance="",job="test"} 31415
# HELP my_app_request_total The number of requests we've served
# TYPE my_app_request_total counter
my_app_request_total{instance="",job="test"} 1
Which makes sense - the Push Gateway just returns the latest push. This works for some things, but ideally we’d have a running total of our counter, rather than only a 1. We could do something like creating a job per request and summing over them, but that results in a huge cardinality explosion and would be tremendously inefficient. Let’s look elsewhere.
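For reference, the push itself is just an HTTP request carrying the exposition text to /metrics/job/&lt;job&gt; on the gateway. A stdlib-only sketch (the gateway address is hypothetical, and I’ve split out request construction so the shape is visible without a live gateway; note that a PUT replaces everything previously pushed under that grouping key, while a POST - which the gateway also accepts - replaces only metrics with the same names):

```python
import urllib.request

def build_push_request(gateway, job, body):
    """Build the HTTP request that publishes `body` (text exposition
    format) to a Push Gateway under the grouping key {job="<job>"}."""
    return urllib.request.Request(
        f"{gateway}/metrics/job/{job}",
        data=body.encode("utf-8"),
        headers={"Content-Type": "text/plain; version=0.0.4"},
        method="PUT",
    )

body = "# TYPE my_app_request_total counter\nmy_app_request_total 1\n"
req = build_push_request("http://pushgateway:9091", "test", body)
# urllib.request.urlopen(req)  # actually send it - needs a running gateway
```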
WeaveWorks Aggregation Gateway
The WeaveWorks Aggregation Gateway, despite looking abandoned, is the officially recommended way to achieve the aggregation we want. The problem with it, however, is that it aggregates too much. Let’s do a couple of pushes again and see what we get out:
# HELP my_app_build_info Build and Version Information
# TYPE my_app_build_info gauge
my_app_build_info{app_version="1.1"} 2
# HELP my_app_memory_bytes The Number of bytes of memory we've used
# TYPE my_app_memory_bytes gauge
my_app_memory_bytes 62830
# HELP my_app_request_total The number of requests we've served
# TYPE my_app_request_total counter
my_app_request_total 2
We now aggregate the request total, but we also aggregate everything else! We break our boolean idea by summing it up to a 2, and we also sum our memory (which might arguably be more correct?). We also have no way of deleting series, so if we push a new version, we get:
# HELP my_app_build_info Build and Version Information
# TYPE my_app_build_info gauge
my_app_build_info{app_version="1.1"} 2
my_app_build_info{app_version="1.2"} 1
Two versions! Ideally we would replace the old one with the new.
Open Metrics
It seems the root cause of our problems is that we’re trying to squeeze a whole bunch of different aggregation semantics into a format that gives the gateway only a small amount of metadata to make aggregation decisions with. Thankfully, there might be a saviour in the future in the form of the OpenMetrics standard. The OpenMetrics standard provides a much larger set of metric types, namely the Info type (which we could use for our version info above), and StateSet for other kinds of boolean values. This still doesn’t help us with the memory case - there are a number of different ways for a gauge to be aggregated (max, min, median, or just adding them all as separate timeseries) and there’s no real way to determine which is valid for a given gauge. Still, I hope that we will see aggregation gateways that are OpenMetrics compliant in the near future.
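For illustration, here’s roughly what our version info and a boolean feature flag could look like as OpenMetrics Info and StateSet types (a sketch based on my reading of the spec - the feature flag metric is made up; note the Info family is named `build`, with the `_info` suffix added to the sample, and StateSet samples use a label named after the metric family):

```
# TYPE build info
build_info{app_version="1.2"} 1
# TYPE feature_flags stateset
feature_flags{feature_flags="new_ui"} 1
# EOF
```

With these types, a gateway would finally have enough metadata to know that an info series should be replaced rather than summed.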
So What Can We Do?
It seems that there isn’t a solution out there that handles all of the different types of metrics and aggregations that we want. I’d be really interested in what people are doing for this, because at the moment I only see a couple of solutions:
- Use an existing gateway and limit the metrics you produce to only those that behave well with your choice
- Don’t use Prometheus, and rely on monitoring from another system (potentially supplied by your cloud provider)
Neither of those seem particularly great. Maybe there’s another way? Please let me know, but for now I guess we’ll wait until proper OpenMetrics support comes into the mainstream. Or maybe I’ll write my own :)
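If I did write my own, the core would just be a type-aware merge. A minimal Python sketch (the type names and the per-type choices here are my own assumptions, not any existing gateway’s behaviour):

```python
def merge_push(store, name, mtype, labels, value):
    """Merge one pushed sample into an aggregated store, using the
    metric type to pick the aggregation:
      - counters are summed across pushes,
      - gauges keep the latest pushed value (one of several defensible
        choices, as discussed above),
      - "info"-style metrics replace any previous series for the same
        metric name, so only the newest version survives.
    `store` maps (name, frozenset of label items) -> value."""
    key = (name, frozenset(labels.items()))
    if mtype == "counter":
        store[key] = store.get(key, 0) + value
    elif mtype == "info":
        # drop older series for this metric name before storing the new one
        for old_key in [k for k in store if k[0] == name]:
            del store[old_key]
        store[key] = value
    else:  # gauge: last write wins
        store[key] = value
    return store

store = {}
merge_push(store, "my_app_request_total", "counter", {}, 1)
merge_push(store, "my_app_request_total", "counter", {}, 1)
merge_push(store, "my_app_build_info", "info", {"app_version": "1.1"}, 1)
merge_push(store, "my_app_build_info", "info", {"app_version": "1.2"}, 1)
```

After these four pushes, the counter reads 2 and only the `app_version="1.2"` info series remains - which is exactly the behaviour neither gateway above gives us.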
I'm on Twitter: @sinkingpoint and BlueSky: @colindou.ch. Come yell at me!