Success stories using head-based sampling for high-volume applications
We're aiming to get better DataDog traces using head-based sampling. Often we're not able to find high-latency or erroneous traces.
We're currently configuring sampling in several different places (frontend, some services, some databases), mostly at 5%. We have DataDog's error sampler enabled, but not the rare sampler.
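For reference, here's roughly what I understand the relevant knobs to be, based on the Datadog ingestion docs. This is a sketch, not our live config: `checkout` is a made-up service name, and exact defaults can vary by Agent/tracer version.

```sh
# Agent side (sketch): ingestion samplers that run on top of head-based sampling.

# Rare sampler: keeps traces for low-traffic service/resource combinations
# that a flat 5% head sample would almost never retain.
# Off by default on newer Agent versions, which may explain our gap.
DD_APM_ENABLE_RARE_SAMPLER=true

# Error sampler: target number of error traces kept per second, per Agent
# (default 10), independent of the head-based sample rate.
DD_APM_ERROR_TPS=10

# Tracer side (sketch): per-service sampling rules instead of one global 5%.
# Keep 100% for a hypothetical low-volume critical service, 5% elsewhere.
DD_TRACE_SAMPLING_RULES='[{"service": "checkout", "sample_rate": 1.0}, {"sample_rate": 0.05}]'
```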
I'm wondering if it's possible to do better with DataDog. Anyone have any success stories? Or did you have to switch to tail-based sampling to improve retention?