Another point from the trenches:
I've had great success with metrics over logging in many situations.
Logs are really nice for knowing what went wrong in a system. And if the logs have some structure (JSON, S-expressions, Erlang terms, ...), you can also dump all the context relevant to the entry. Modern stacks such as Elasticsearch/Kibana can ingest that structure and build a search index over it, which is very useful when you are trying to figure out what went wrong. In a distributed system, carry a unique request id in the structured log so you can easily join entries from multiple subsystems in the centralized logging platform.
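To make that concrete, here is a minimal sketch using OTP's logger with a structured report (it assumes a handler configured with a JSON formatter; the event and field names are made up for illustration):

    %% Assumes a logger handler with a JSON formatter is configured;
    %% the event and field names are illustrative.
    handle_order(ReqId, OrderId) ->
        logger:info(#{event => order_accepted,
                      request_id => ReqId,   % join key across subsystems
                      order_id => OrderId}),
        ok.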
Likewise, whenever a given event occurs, you bump a counter or record an entry in a histogram (HdrHistogram comes to mind). This lets you export the counts to a foreign system for plotting how the system fares internally. It is much cheaper than rendering a log line, and it is almost as informative when you don't need the log line itself, only the fact that it occurred. Live timing histograms also have the advantage that problems tend to show up in the small before they become a catastrophe, so you can alter the operating point of the system dynamically long before the catastrophe occurs in the real world.
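As a sketch of the counter side, OTP's counters module (OTP 21.2+) gives you cheap atomic bumps. The module name and event table below are my own invention, and a scraper process would read the values periodically for export:

    -module(metrics).
    -export([init/0, bump/1, read/1]).

    %% Fixed table of event -> counter index; illustrative names.
    -define(EVENTS, #{request => 1, timeout => 2, error => 3}).

    init() ->
        C = counters:new(map_size(?EVENTS), [write_concurrency]),
        persistent_term:put(?MODULE, C).

    bump(Event) ->
        counters:add(persistent_term:get(?MODULE), maps:get(Event, ?EVENTS), 1).

    read(Event) ->
        counters:get(persistent_term:get(?MODULE), maps:get(Event, ?EVENTS)).

A timing histogram (say, via an HdrHistogram binding) would be fed the same way, recording durations rather than bumping counts.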
I almost never have debug-style logging lines in my code anymore. I much prefer adding live tracing to the system instead (Erlang has tracing facilities built in, DTrace is useful on Illumos and FreeBSD, etc.). Granted, you cannot apply tracing retroactively to an error that has already happened, but on the other hand you can tailor it to the exact situation at hand. It also removes the need to recompile and redeploy with debug logging enabled (in which case you cannot handle the error after the fact either).
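For example, a throwaway dbg session in the Erlang shell looks roughly like this (the module and function names are made up; the x alias traces return values and exceptions):

    1> dbg:tracer().              %% start a trace message collector
    2> dbg:p(all, c).             %% enable call tracing on all processes
    3> dbg:tpl(my_server, handle_call, x).  %% trace this function, incl. returns
    ...                           %% watch the live call flow, then:
    4> dbg:stop().                %% switch tracing off again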