diff --git a/.github/workflows/deploy-docs_ghpage.yml b/.github/workflows/deploy-docs_ghpage.yml
new file mode 100644
index 0000000..414e551
--- /dev/null
+++ b/.github/workflows/deploy-docs_ghpage.yml
@@ -0,0 +1,32 @@
+name: Deploy Docs to Pages
+
+permissions:
+ contents: write
+
+on:
+ workflow_dispatch:
+ push:
+ branches: [ "emmuhamm/deploy-docs" ] # or "develop"
+ # optional schedule:
+ # schedule:
+ # - cron: "0 0 * * 6"
+
+jobs:
+ docs:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+
+ - uses: actions/setup-python@v5
+ with:
+ python-version: "3.11"
+
+ - name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install -r docs_dev/requirements.txt
+ # if you want mkdocstrings to import your package:
+ pip install .
+
+ - name: Deploy
+ run: mkdocs gh-deploy --force
\ No newline at end of file
diff --git a/.gitignore b/.gitignore
index 8e01ed5..f2660af 100644
--- a/.gitignore
+++ b/.gitignore
@@ -28,6 +28,8 @@ share/python-wheels/
MANIFEST
.vscode
+docs/generated
+docs_dev/APIref
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
diff --git a/docs/README.md b/README.md
similarity index 79%
rename from docs/README.md
rename to README.md
index 1ed671b..366de2c 100644
--- a/docs/README.md
+++ b/README.md
@@ -7,7 +7,7 @@ Set of importable tools used to simplify DAQ development in python.
## Scope
This provides a set of tools that are used in python applications, along with their unit tests. Currently, the following tools are defined
- - logging - [code](https://github.com/DUNE-DAQ/daqpytools/tree/develop/src/daqpytools/logging), [wiki](https://github.com/DUNE-DAQ/daqpytools/wiki/Logging)
+ - logging - [code](https://github.com/DUNE-DAQ/daqpytools/tree/develop/src/daqpytools/logging)
## Intended use case
This repo will serve as the intended source of distribution for standard tooling. Any python tool that is used by multiple repositories should be defined here.
@@ -18,4 +18,7 @@ For general users, no setup is required - when developing your python applicatio
from daqpytools.logging.logger import get_daq_logger
log = get_daq_logger(...)
```
-For developers, see the developer wiki.
+
+For users, see [the user wiki](https://dune-daq-sw.readthedocs.io/en/latest/packages/daqpytools/).
+
+For developers, view [the developer wiki](https://dune-daq.github.io/daqpytools/).
+
+Note that the GitHub wiki is outdated and unmaintained.
diff --git a/docs/diagrams/LogHandlerConf_Activity.drawio b/docs/diagrams/LogHandlerConf_Activity.drawio
new file mode 100644
index 0000000..a368074
--- /dev/null
+++ b/docs/diagrams/LogHandlerConf_Activity.drawio
@@ -0,0 +1,239 @@
+<!-- LogHandlerConf_Activity.drawio: diagram XML elided -->
diff --git a/docs/diagrams/LogRecord_Activity.drawio b/docs/diagrams/LogRecord_Activity.drawio
new file mode 100644
index 0000000..b49d585
--- /dev/null
+++ b/docs/diagrams/LogRecord_Activity.drawio
@@ -0,0 +1,157 @@
+<!-- LogRecord_Activity.drawio: diagram XML elided -->
diff --git a/docs/diagrams/Logger_class.drawio b/docs/diagrams/Logger_class.drawio
new file mode 100644
index 0000000..dc051f5
--- /dev/null
+++ b/docs/diagrams/Logger_class.drawio
@@ -0,0 +1,274 @@
+<!-- Logger_class.drawio: diagram XML elided -->
diff --git a/docs/diagrams/Loggers.drawio b/docs/diagrams/Loggers.drawio
new file mode 100644
index 0000000..0401f5b
--- /dev/null
+++ b/docs/diagrams/Loggers.drawio
@@ -0,0 +1,497 @@
+<!-- Loggers.drawio: diagram XML elided -->
diff --git a/docs/explanation.md b/docs/explanation.md
new file mode 100644
index 0000000..80e10e7
--- /dev/null
+++ b/docs/explanation.md
@@ -0,0 +1,117 @@
+# Concepts: How logging works in DUNE-DAQ
+
+This page explains the underlying concepts behind Python logging and how daqpytools builds on them. Reading this will help you understand *why* the how-to guides are structured the way they are.
+
+For hands-on instructions, see the how-to guides. For the API reference, see the [reference](https://dune-daq.github.io/daqpytools/APIref).
+
+---
+
+## Python logging fundamentals
+
+The bulk of the logging functionality in drunc and other Python applications is built on the [Python logging framework](https://docs.python.org/3/library/logging.html), whose purpose its documentation describes as follows:
+
+> This module defines functions and classes which implement a flexible event logging system for applications and libraries.
+
+It is worth reading to understand how logging works in Python; the salient points are covered below.
+
+In general, the built-in logging module allows producing severity-classified diagnostic events, which can be filtered, formatted, or routed as necessary. These logs automatically contain useful information including timestamp, module, and message context.
+
+The core object in Python logging is the logger. A logging instance, `log`, can be initialized as follows. The phrase "Hello, World!" is bundled with other useful metadata, including severity level, to form a `LogRecord`, which is then emitted as required.
+
+```python
+import logging
+log = logging.getLogger("Demo")
+log.warning("Hello, World!")
+
+# Output: Hello, World!
+```
+
+### Severity levels
+
+Every record has an attached severity level, which can be used to flag how important a log record is. By default, Python has 5 main levels and one 'notset' level as shown in the image below:
+
+
+
+More levels can be defined as required; see Python's logging manual for details.
+
+Each logging instance can have an attached severity level. If it has one, then only records that have the same severity level or higher will be transmitted.
+
+```python
+import logging
+log = logging.getLogger("Demo")
+log.setLevel(logging.WARNING)  # getLogger takes no level argument
+
+log.info("This will not print")
+log.warning("This will print")
+
+# Output: This will print
+```
+
+### Handlers
+
+Handlers are a key concept in Python logging, since they control how records are processed and formatted. DAQ uses several standard handlers as well as custom handlers.
+
+The image below shows a file handler, a stream handler, and a webhook handler. Each record is processed and formatted by each handler and then transmitted through that destination.
+
+
+
+Importantly, each handler can have its own associated severity level. In the example above, the WebHookHandler could be configured to transmit only records at WARNING level or higher.
+
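+The per-handler severity idea can be sketched with the standard library alone (the handler choices and names here are illustrative, not the daqpytools builders):
+
+```python
+import logging
+import sys
+
+log = logging.getLogger("handler_demo")
+log.setLevel(logging.DEBUG)
+
+# Everything goes to the file...
+file_handler = logging.FileHandler("demo.log")
+file_handler.setLevel(logging.DEBUG)
+
+# ...but the terminal only shows warnings and above.
+stream_handler = logging.StreamHandler(sys.stderr)
+stream_handler.setLevel(logging.WARNING)
+
+log.addHandler(file_handler)
+log.addHandler(stream_handler)
+
+log.debug("File only")
+log.warning("File and terminal")
+```
+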
+### Filters
+
+Filters are an important add-on for loggers, and their primary purpose is to decide whether a record should be transmitted. Filters can be attached to both a logger instance and its handlers.
+
+When a log record arrives, it is first processed by filters attached to the logger. If it passes, the record is then passed to each handler and processed again by that handler's filters. A record is emitted only if those checks pass.
+
+
+
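+The two attachment points described above can be demonstrated with a plain standard-library sketch (the filter itself is a made-up example):
+
+```python
+import logging
+
+class NoSecretsFilter(logging.Filter):
+    """Drop any record whose message mentions 'secret'."""
+    def filter(self, record: logging.LogRecord) -> bool:
+        return "secret" not in record.getMessage()
+
+log = logging.getLogger("filter_demo")
+handler = logging.StreamHandler()
+
+# Filters can sit on the logger (checked first)...
+log.addFilter(NoSecretsFilter())
+# ...or on an individual handler (checked again per handler).
+handler.addFilter(NoSecretsFilter())
+
+log.addHandler(handler)
+log.warning("this secret never reaches the terminal")
+log.warning("this one does")
+```
+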
+### Inheritance
+
+Another key part of Python logging is inheritance. Loggers are organized hierarchically, so you can initialize descendant loggers by chaining names with periods, such as "root.parent.child".
+
+By default, loggers inherit certain properties from the parent:
+- severity level of the logger
+- handlers (and all attached properties, including severity level and filters on handlers)
+
+
+
+Note one exception: they _do not_ inherit filters attached directly to the parent logger itself.
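+
+The dotted-name behaviour can be seen with the standard library directly:
+
+```python
+import logging
+
+parent = logging.getLogger("root.parent")
+parent.setLevel(logging.INFO)
+parent.addHandler(logging.StreamHandler())
+
+# The child defines no handlers of its own: its records
+# propagate up and are emitted by the parent's StreamHandler.
+child = logging.getLogger("root.parent.child")
+child.info("emitted through the parent's handler")
+```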
+
+A useful diagram is the [logging flow in the official Python 3 docs](https://docs.python.org/3/howto/logging.html#logging-flow).
+
+---
+
+## How daqpytools extends Python logging
+
+The [daqpytools](https://github.com/DUNE-DAQ/daqpytools) package contains several quality-of-life improvements for DAQ Python tooling, including logging utilities.
+
+These include:
+- standardised ways of initialising top-level 'root' loggers
+- constructors for default logging instances
+- many bespoke handlers
+- filters relevant to the DAQ
+- handler configurations
+
+The core philosophy of the logging framework in daqpytools is that each logger should have at most _one_ handler of any given type. A single logger can have both a Rich and a Stream handler, but never _two_ Rich handlers; this prevents messages being duplicated.
+
+---
+
+## Understanding handler streams and routing
+
+Within the DUNE DAQ ecosystem, there are several handler configurations that work together. The native implementation supports three logical **streams**:
+
+- **Base** stream: Standard logging (Rich, File, Stream handlers)
+- **OpMon** stream: Monitoring-related output
+- **ERS** stream: error reporting system routing (severity-driven handler selection)
+
+You can think of streams as different "channels" where each has its own set of handlers. The key insight: **ERS Kafka handlers and Throttle filters need to be explicitly activated via `HandlerType` tokens** because they're typically only used when specifically configured.
+
+
+
+This is why routing via `extra={"handlers": [...]}` matters — it tells the logger which stream/handlers to use for each record.
+
+---
+
+## Further reading
+
+- For how this routing model is implemented under the hood, see the [developer explanation](https://dune-daq.github.io/daqpytools/explanation).
+- For how to configure ERS and advanced routing in practice, see [Configuring ERS](./how-to/configure-ers.md) and [Routing messages to specific handlers](./how-to/route-messages.md).
diff --git a/docs/how-to/add-handlers-at-runtime.md b/docs/how-to/add-handlers-at-runtime.md
new file mode 100644
index 0000000..7a10a1b
--- /dev/null
+++ b/docs/how-to/add-handlers-at-runtime.md
@@ -0,0 +1,100 @@
+# How to add handlers at runtime
+
+You can configure handlers in two phases:
+
+1. Build a logger first with `get_daq_logger(...)`.
+2. Add more handlers/filters later, based on runtime context.
+
+This is useful in long-running services where extra outputs (for example ERS Kafka) should only be attached after additional configuration becomes available.
+
+---
+
+## Add one handler at a time with `add_handler`
+
+Use `add_handler` if you want to attach a single handler type to an existing logger.
+
+```python
+from daqpytools.logging import HandlerType, add_handler, get_daq_logger
+
+log = get_daq_logger(
+ logger_name="existing_logger",
+ rich_handler=True,
+ stream_handlers=False,
+)
+
+# Add stdout stream handler later
+add_handler(log, HandlerType.Lstdout, use_parent_handlers=True)
+
+log.info("Now routes to rich + stdout by default")
+```
+
+---
+
+## Suppress by default with `fallback_handler={HandlerType.Unknown}`
+
+You can make newly-added handlers opt-in only by setting fallback handlers to `HandlerType.Unknown`. This means records without explicit `extra["handlers"]` will not be emitted by those handlers.
+
+```python
+from daqpytools.logging import HandlerType, add_handler, get_daq_logger
+
+log = get_daq_logger("fallback_demo", rich_handler=True, stream_handlers=False)
+
+# Add stderr handler, but suppress it by default
+add_handler(
+ log,
+ HandlerType.Lstderr,
+ use_parent_handlers=True,
+ fallback_handler={HandlerType.Unknown},
+)
+
+log.critical("Only rich by default")
+
+# Explicitly target stderr when needed
+log.critical(
+ "Rich + stderr when explicitly requested",
+ extra={"handlers": [HandlerType.Rich, HandlerType.Lstderr]},
+)
+```
+
+---
+
+## Passing arguments to handlers and filters via `**kwargs`
+
+Advanced setup functions accept extra keyword arguments and forward them to the relevant handler/filter factories.
+
+- `get_daq_logger(..., **extras)` forwards extras to handler/filter construction.
+- `add_handler(..., **extras)` forwards extras to that handler factory.
+
+Common examples:
+
+- file handler: `path="mylog.log"`
+- ERS Kafka handler: `ers_kafka_session=...`
+- throttle filter: `initial_threshold=...`, `time_limit=...`
+- rich handler: `width=...`
+
+Example with explicit extras:
+
+```python
+from daqpytools.logging import HandlerType, add_handler, get_daq_logger
+
+log = get_daq_logger("extras_demo", rich_handler=False)
+
+# Pass file-specific kwargs to file handler
+add_handler(
+ log,
+ HandlerType.File,
+ use_parent_handlers=True,
+ path="extras_demo.log",
+)
+
+# Pass rich-specific kwargs to rich handler
+add_handler(
+ log,
+ HandlerType.Rich,
+ use_parent_handlers=True,
+ width=120,
+)
+
+# ERS-specific kwargs are supplied through setup/get APIs
+# setup_daq_ers_logger(log, ers_kafka_session="session_tester")
+```
diff --git a/docs/how-to/best-practices.md b/docs/how-to/best-practices.md
new file mode 100644
index 0000000..0904bb9
--- /dev/null
+++ b/docs/how-to/best-practices.md
@@ -0,0 +1,53 @@
+# Logging best practices
+
+Before reading this, you really should have read the [Concepts](../explanation.md) page as that contains the basic context behind _why_ these best practices are recommended, as well as how to best utilise them.
+
+---
+
+The docs so far give a nice overview of how the logging tools work, but now you need to go ahead and use them! The tools are designed to be as customisable as possible, with all the advanced features available for use. However, to standardise logging deployment (and to make life easier), there are a couple of useful tips and standards to follow.
+
+## Use of the root logger
+
+**Always** set up a named pseudo-root logger in your application as close to initialisation of your application as possible. Use the daqpytools implementation `setup_root_logger` to do so.
+
+For context, in the native Python logging framework the highest possible logger is the (usually unnamed) root logger. For example, calling `logging.getLogger("top")` yields a logger named `"top"` that sits directly under the root logger, while calling `logging.getLogger()` returns the root logger itself.
+
+As the root logger is the highest logger which every logger inherits from, modifying this logger will have a _global_ effect on all your loggers, which is almost always undesirable.
+
+To keep things safe and compartmentalised, a pseudo-root logger should be defined very early on, and should contain no handlers. This compartmentalises publishing and makes logs clearer through more traceable names.
+
+In a similar vein, _never_ use `logging.basicConfig`. This tool modifies the root logger and will very easily interfere with other apps. It's always safer to define the pseudo-root logger with `setup_root_logger` to keep things compartmentalised.
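+
+In plain standard-library terms the pattern looks roughly like this (`myapp` is a placeholder name; in practice you would call the daqpytools `setup_root_logger` helper instead):
+
+```python
+import logging
+
+# Pseudo-root: named, handler-free, defined once at application start-up.
+app_root = logging.getLogger("myapp")
+app_root.setLevel(logging.INFO)
+
+# All other loggers hang off it, isolated from the real root logger.
+child = logging.getLogger("myapp.subsystem")
+```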
+
+## Inheritance design
+
+Following on from setting up an empty root logger, the image below shows a good use of inheritance.
+
+
+
+In this case, `drunc` serves as the pseudo-root logger in which no handlers are defined. All further loggers are inherited from this clean slate.
+
+In each individual app, the handlers are defined there. For example, the unified_shell scripts use the `drunc.unified_shell` logger in which we require a RichHandler.
+
+The power of inheritance is seen in the process manager example. Here, `drunc.process_manager` is defined with both the Rich and File handlers. Subsequent child loggers, such as the `.utils` logger, do _not_ need to define which handlers they want to use since, through inheritance, they immediately obtain the handlers of their parent.
+
+As shown here, all loggers are initialised via `{pseudo_root_logger}.{parent}.{child}` names. Consider writing helper functions to facilitate this, or use Python's `__name__`.
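+
+For example, inside a module file the dotted name can be derived automatically:
+
+```python
+import logging
+
+# In drunc/process_manager/utils.py, __name__ resolves to
+# "drunc.process_manager.utils", giving the traceable dotted name for free.
+log = logging.getLogger(__name__)
+```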
+
+## Where to define loggers
+
+_Ideally_, loggers should only be defined once. While they _are_ singleton objects and there are simple ways to retrieve an already-defined logger, prefer using inheritance to create 'new' loggers to keep things traceable.
+
+A good place to define parent-level loggers with handlers (cf. `drunc.process_manager`) is the module's `__init__` file. Subsequent new loggers can be defined in the various files of that Python module. For example, in `process_manager/utils.py`, a new logger called `drunc.process_manager.utils` can be defined and used for the duration of that file, where it automatically inherits the handlers defined on the parent-level logger.
+
+## Calling and configuring loggers
+
+Use `get_daq_logger` to initialise each logger exactly once.
+
+Following the previous tip, if you feel the need to get an already-initialised logger with `get_daq_logger`, consider making a child.
+
+All handlers you expect to use by default should be initialised with `get_daq_logger`.
+
+## ERS implementation
+
+To install ERS handlers on your logger, use `setup_daq_ers_logger`. You will then need to use a `LogHandlerConf` instance to activate them; this should be defined somewhere close to where `setup_daq_ers_logger` was called and should be callable at the point of use of the logger.
+
+Remember that ERS environment variables need to exist at the point of ERS logger initialisation.
diff --git a/docs/how-to/configure-ers.md b/docs/how-to/configure-ers.md
new file mode 100644
index 0000000..b5333a7
--- /dev/null
+++ b/docs/how-to/configure-ers.md
@@ -0,0 +1,46 @@
+# How to configure ERS handlers
+
+This page covers how to attach and use ERS (error reporting system) handlers on a logger.
+
+For background on ERS streams and routing, see [Concepts](../explanation.md). For the `LogHandlerConf` routing API, see [Routing messages to specific handlers](./route-messages.md).
+
+---
+
+## Configuring ERS handlers on an existing logger
+
+Use `setup_daq_ers_logger` to attach ERS-derived handlers to an existing logger based on environment configuration.
+
+```python
+from daqpytools.logging import LogHandlerConf, get_daq_logger, setup_daq_ers_logger
+
+log = get_daq_logger(
+ logger_name="ers_logger",
+ rich_handler=True,
+ stream_handlers=False,
+)
+
+# Attach ERS-derived handlers (lstdout, protobufstream) to this logger
+setup_daq_ers_logger(log, ers_kafka_session="session_temp")
+
+# Now use LogHandlerConf routing to target ERS handlers
+ers_conf = LogHandlerConf(init_ers=True)
+log.info("ERS Info routing", extra=ers_conf.ERS)
+log.warning("ERS Warning routing", extra=ers_conf.ERS)
+log.error("ERS Error routing", extra=ers_conf.ERS)
+```
+
+---
+
+## ERS environment variables
+
+ERS handlers are configured via environment variables. These are parsed automatically by daqpytools:
+
+```
+DUNEDAQ_ERS_ERROR="throttle,lstdout,protobufstream(monkafka.cern.ch:30092)"
+DUNEDAQ_ERS_WARNING="..."
+# etc.
+```
+
+These are parsed into `ERSPyLogHandlerConf` objects that hold the handler list and optional protobuf endpoint for each severity.
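+
+The parsing step can be pictured roughly as follows (an illustrative sketch, not the actual `ERSPyLogHandlerConf` implementation):
+
+```python
+import re
+
+def parse_ers_var(value: str) -> tuple[list[str], str | None]:
+    """Split e.g. 'throttle,lstdout,protobufstream(host:port)' into
+    a handler list and an optional protobuf endpoint."""
+    handlers, endpoint = [], None
+    for token in value.split(","):
+        match = re.fullmatch(r"protobufstream\((.+)\)", token.strip())
+        if match:
+            handlers.append("protobufstream")
+            endpoint = match.group(1)
+        else:
+            handlers.append(token.strip())
+    return handlers, endpoint
+
+# parse_ers_var("throttle,lstdout,protobufstream(monkafka.cern.ch:30092)")
+# -> (["throttle", "lstdout", "protobufstream"], "monkafka.cern.ch:30092")
+```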
+
+Remember that ERS environment variables need to exist at the point of ERS logger initialisation. See [best practices](./best-practices.md) for guidance on when to call `setup_daq_ers_logger`.
diff --git a/docs/how-to/route-messages.md b/docs/how-to/route-messages.md
new file mode 100644
index 0000000..95f9990
--- /dev/null
+++ b/docs/how-to/route-messages.md
@@ -0,0 +1,69 @@
+# How to route messages to specific handlers
+
+This page explains how to direct individual log records to specific handlers using `HandlerType` tokens and `LogHandlerConf`.
+
+For background on *why* routing works this way, see [Concepts](../explanation.md).
+
+---
+
+## Choosing handlers with HandlerTypes
+
+You can route individual records to specific handlers by using `extra={"handlers": [...]}`:
+
+```python
+from daqpytools.logging import HandlerType, get_daq_logger
+
+log = get_daq_logger("example", rich_handler=True, file_handler_path="logging.log")
+
+log.info("This will only go to Rich", extra={"handlers": [HandlerType.Rich]})
+log.info("This will only go to File", extra={"handlers": [HandlerType.File]})
+log.info("You can even send to both", extra={"handlers": [HandlerType.Rich, HandlerType.File]})
+```
+
+Note: Asking for a handler type that isn't attached is a no-op. Using `HandlerType.Stream` in the example above (when only Rich and File are attached) will be silently ignored.
+
+---
+
+## Using LogHandlerConf for structured routing
+
+`LogHandlerConf` is a configuration dataclass that encapsulates the handler setup for different streams. It handles ERS environment variable parsing and creates routing metadata bundles that you attach to records via `extra`.
+
+### Understanding LogHandlerConf
+
+The ERS configuration is defined in OKS and automatically parsed by daqpytools. A key feature: **handlers are severity-level dependent**. ERS Fatal and ERS Info may have different handler requirements.
+
+`LogHandlerConf` manages this complexity:
+
+```python
+from daqpytools.logging import LogHandlerConf, get_daq_logger
+
+main_logger = get_daq_logger("handlerconf_demo", rich_handler=True)  # logger used below
+handlerconf = LogHandlerConf()
+
+# Access router metadata for each stream
+main_logger.warning("Handlerconf Base", extra=handlerconf.Base)
+main_logger.warning("Handlerconf Opmon", extra=handlerconf.Opmon)
+```
+
+This passes the appropriate handler routing metadata so the record is emitted to the right handlers for that stream.
+
+### Initialize ERS streams lazily
+
+By default, `LogHandlerConf` does not initialize the ERS stream because it requires ERS environment variables. You can initialize it later when ERS becomes available:
+
+```python
+# By default init_ers is false
+LHC = LogHandlerConf() # ERS not initialized yet
+
+print(LHC.Base) # Success
+print(LHC.ERS) # Throws: ERS stream not initialised
+
+# Later, when ERS envs are set
+LHC.init_ers_stream()
+print(LHC.ERS) # Success
+```
+
+Or initialize upfront if ERS vars are already defined:
+
+```python
+LHC_with_ers = LogHandlerConf(init_ers=True)
+```
diff --git a/docs/how-to/use-handlers.md b/docs/how-to/use-handlers.md
new file mode 100644
index 0000000..3f551e8
--- /dev/null
+++ b/docs/how-to/use-handlers.md
@@ -0,0 +1,124 @@
+# How to use handlers and filters
+
+This page walks through each available handler and filter in daqpytools with short examples.
+
+Remember that by default, any messages received by the logger will be transmitted to _all_ available handlers that are attached to the logger.
+
+**For now, please view both `get_daq_logger` and the relevant builders in `handlers.py` and `filters.py` to see what options exist on how to initialise them.**
+
+**In the future, this will be automatically generated from the docstrings.**
+
+For the full API reference (kwargs, types, defaults), see the [auto-generated reference](https://dune-daq.github.io/daqpytools/APIref).
+
+---
+
+## Rich handler
+
+The Rich handler should be the 'default' handler for any messages that should be transmitted to the terminal. This handler has great support for colors, and delivers a complete, easy-to-view message to the terminal that is simple to trace back to its source.
+
+
+
+## File handler
+
+As the name suggests, the file handler is used to transmit messages directly to a log file. Unlike the stream and rich handlers, instead of setting a boolean in the constructor, the user must supply the _filename_ of the target file for the messages to go into.
+
+
+
+## Stream handlers
+
+Stream handlers are used to transmit messages directly to the terminal without any color formatting. This is of great use for the logs of the controllers in drunc, which have their own method of capturing logs by capturing the terminal output and piping it to the relevant log file.
+
+Note that stream handling consists of two handlers, one writing to `stdout` and one to `stderr`. The `stderr` stream emits only for records at `ERROR` or above.
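+
+The split can be reproduced with the standard library (a sketch of the behaviour, not the daqpytools builder itself):
+
+```python
+import logging
+import sys
+
+log = logging.getLogger("stream_demo")
+log.setLevel(logging.DEBUG)
+
+out = logging.StreamHandler(sys.stdout)
+err = logging.StreamHandler(sys.stderr)
+err.setLevel(logging.ERROR)  # stderr only sees ERROR and above
+
+# One common choice: keep ERROR and above out of stdout to avoid duplicates.
+out.addFilter(lambda record: record.levelno < logging.ERROR)
+
+log.addHandler(out)
+log.addHandler(err)
+
+log.info("stdout only")
+log.error("stderr only")
+```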
+
+
+
+## ERS Kafka handler
+
+The ERS Kafka handler is used to transmit ERS messages via Kafka, which is incredibly useful for showing messages on the dashboards as they happen.
+
+This handler is not included in the default emit set. Extra configuration is required; for example:
+
+```python
+import logging
+
+from daqpytools.logging import HandlerType, get_daq_logger
+
+main_logger: logging.Logger = get_daq_logger(
+ logger_name="daqpytools_logging_demonstrator",
+ ers_kafka_session="session_tester"
+)
+
+main_logger.error(
+ "ERS Message",
+ extra={"handlers": [HandlerType.Protobufstream]}
+)
+```
+
+See [Configuring ERS](./configure-ers.md) for more details.
+
+
+
+**Notes**
+
+At the moment, messages are sent with the following defaults:
+```
+session_name: session_tester
+topic: ers_stream
+address: monkafka.cern.ch:30092
+```
+
+## Throttle filter
+
+There are times when an application emits a huge number of logs of a single message in a very short time, which can overwhelm downstream systems. When such an event occurs, it is wise to throttle the output.
+
+The throttle filter replicates the same logic that exists in the ERS C++ implementation, which dynamically limits how many messages get transmitted. The filter is by default attached to the _logger_ instance, with no support for this filter being attached to a specific handler just yet.
+
+Initializing the filter takes two arguments:
+ - `initial_threshold`: number of initial occurrences to let through immediately
+ - `time_limit`: time window in seconds for resetting state
+
+The basic logic is as follows.
+
+1. The first N messages will instantly get transmitted, up to `initial_threshold`
+2. The next 10 messages will be suppressed, with the next single message reported at the end
+3. The next 100 messages will be suppressed, with the next single message reported at the end
+4. This continues, with the threshold increasing by 10x each time
+5. After `time_limit` seconds after the last message, the filter gets reset, allowing messages to be sent once more
+
+For the throttle filter, a 'log record' is **uniquely** defined by the record's pathname and line number. Therefore, 50 records that contain the same 'message' but are emitted from different line numbers in the script will not be erroneously filtered together.
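+
+The core of that logic can be sketched as a standard `logging.Filter` (illustrative only; the real daqpytools filter mirrors the ERS C++ behaviour more closely):
+
+```python
+import logging
+import time
+
+class SimpleThrottle(logging.Filter):
+    """Pass the first N records per call site, then report only at
+    10x-growing intervals until quiet for `time_limit` seconds."""
+
+    def __init__(self, initial_threshold: int = 30, time_limit: float = 30.0):
+        super().__init__()
+        self.initial_threshold = initial_threshold
+        self.time_limit = time_limit
+        # (pathname, lineno) -> (count, interval, last_seen)
+        self.state: dict = {}
+
+    def filter(self, record: logging.LogRecord) -> bool:
+        key = (record.pathname, record.lineno)  # a record's identity
+        now = time.monotonic()
+        count, interval, last_seen = self.state.get(key, (0, 10, now))
+        if now - last_seen > self.time_limit:
+            count, interval = 0, 10  # quiet period elapsed: start over
+        count += 1
+        if count <= self.initial_threshold:
+            allowed = True  # initial burst passes untouched
+        elif (count - self.initial_threshold) % interval == 0:
+            allowed = True  # one report per suppressed batch
+            interval *= 10  # next batch is 10x longer
+        else:
+            allowed = False
+        self.state[key] = (count, interval, now)
+        return allowed
+```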
+
+An example is as follows:
+
+```python
+import logging
+import time
+
+from daqpytools.logging import HandlerType, get_daq_logger
+
+main_logger: logging.Logger = get_daq_logger(
+ logger_name="daqpytools_logging_demonstrator",
+ stream_handlers=True,
+ throttle=True
+)
+
+def emit_msg(i: int) -> None:
+    main_logger.info(
+        f"Throttle test {i}",
+        extra={"handlers": [HandlerType.Rich, HandlerType.Throttle]},
+    )
+
+for i in range(50):
+    emit_msg(i)
+main_logger.warning("Sleeping for 30 seconds")
+time.sleep(30)
+for i in range(1000):
+    emit_msg(i)
+```
+
+This behaves as expected:
+
+
+
+**Note**
+By default, throttle filters obtained via `get_daq_logger` are initialized with an `initial_threshold` of 30 and a `time_limit` of 30 seconds.
+
+**Note**
+Similarly to the ERS Kafka handler, this filter is not enabled by default, so routing via `HandlerType` tokens is required. See [Routing messages to specific handlers](./route-messages.md) for more info.
diff --git a/docs/img/Example_usecase.png b/docs/img/Example_usecase.png
new file mode 100644
index 0000000..97c65d1
Binary files /dev/null and b/docs/img/Example_usecase.png differ
diff --git a/docs/img/demo_ers.png b/docs/img/demo_ers.png
new file mode 100644
index 0000000..e93a51c
Binary files /dev/null and b/docs/img/demo_ers.png differ
diff --git a/docs/img/demo_file.png b/docs/img/demo_file.png
new file mode 100644
index 0000000..42c7eb7
Binary files /dev/null and b/docs/img/demo_file.png differ
diff --git a/docs/img/demo_rich.png b/docs/img/demo_rich.png
new file mode 100644
index 0000000..eed45a3
Binary files /dev/null and b/docs/img/demo_rich.png differ
diff --git a/docs/img/demo_streams.png b/docs/img/demo_streams.png
new file mode 100644
index 0000000..ceeb8d3
Binary files /dev/null and b/docs/img/demo_streams.png differ
diff --git a/docs/img/demo_throttle.png b/docs/img/demo_throttle.png
new file mode 100644
index 0000000..803422c
Binary files /dev/null and b/docs/img/demo_throttle.png differ
diff --git a/docs/img/filters.png b/docs/img/filters.png
new file mode 100644
index 0000000..986c93c
Binary files /dev/null and b/docs/img/filters.png differ
diff --git a/docs/img/handlers.png b/docs/img/handlers.png
new file mode 100644
index 0000000..fd23592
Binary files /dev/null and b/docs/img/handlers.png differ
diff --git a/docs/img/inheritance.png b/docs/img/inheritance.png
new file mode 100644
index 0000000..7f93c42
Binary files /dev/null and b/docs/img/inheritance.png differ
diff --git a/docs/img/loglevels.png b/docs/img/loglevels.png
new file mode 100644
index 0000000..75aa5d8
Binary files /dev/null and b/docs/img/loglevels.png differ
diff --git a/docs/img/streams.png b/docs/img/streams.png
new file mode 100644
index 0000000..7b4cfc1
Binary files /dev/null and b/docs/img/streams.png differ
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 0000000..52071cf
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,25 @@
+# Logging in DUNE-DAQ — Documentation
+
+Welcome, fellow beavers, to the logging documentation for daqpytools.
+
+This documentation is split into two sections depending on your role:
+
+- **User docs** — for anyone writing Python applications that use logging
+- **Developer docs** — for anyone extending the logging system itself (new handlers, new filters). This is found in [a separate MkDocs website](https://dune-daq.github.io/daqpytools/).
+
+---
+
+## User documentation
+
+| Page | What it's for |
+|---|---|
+| [Tutorial](./tutorial.md) | Get a working logger running from scratch |
+| [Concepts & explanation](./explanation.md) | Understand how Python logging and daqpytools work |
+| [How to use handlers and filters](./how-to/use-handlers.md) | Descriptions and examples for each handler and filter |
+| [How to route messages](./how-to/route-messages.md) | Direct records to specific handlers using HandlerType and LogHandlerConf |
+| [How to add handlers at runtime](./how-to/add-handlers-at-runtime.md) | Attach handlers after logger creation; pass kwargs |
+| [How to configure ERS](./how-to/configure-ers.md) | Attach and use ERS handlers |
+| [Best practices](./how-to/best-practices.md) | Recommended patterns for structuring logging in your application |
+| [Troubleshooting](./reference/troubleshooting.md) | Common symptoms, causes, and fixes |
+| [API reference](https://dune-daq.github.io/daqpytools/APIref) | Auto-generated kwargs, types, and defaults for all public APIs (redirects to the MkDocs website) |
+
diff --git a/docs/reference/troubleshooting.md b/docs/reference/troubleshooting.md
new file mode 100644
index 0000000..b1615ec
--- /dev/null
+++ b/docs/reference/troubleshooting.md
@@ -0,0 +1,16 @@
+# Troubleshooting reference
+
+This page lists common symptoms, their likely causes, and fixes.
+
+For a systematic debugging workflow when extending the system, see the [developer debugging checklist](https://dune-daq.github.io/daqpytools/how-to/debug-routing).
+
+---
+
+| Symptom | Likely cause | What to check | Fix |
+|---|---|---|---|
+| `Logger ... already exists with different handler configuration` | Same logger name reused with different constructor flags | Logger name and previous initialization path | Reuse same config for that name, or choose a new logger name |
+| `Root logger ... already has handlers configured` | `setup_root_logger` called after handlers were already attached | Existing handlers on the named root logger | Use a fresh logger name or clear handlers before setup |
+| Throttle filter appears to do nothing | `HandlerType.Throttle` is not in resolved allowed handlers for that record | `extra={"handlers": ...}` and stream metadata | Add `HandlerType.Throttle` to routing metadata for the messages you want throttled |
+| ERS stream access raises `ERS stream not initialised` | `LogHandlerConf` created with `init_ers=False` and ERS not initialized yet | Whether `init_ers_stream()` was called | Call `init_ers_stream()` after ERS env vars are present |
+| ERS setup fails due to missing env | One or more required `DUNEDAQ_ERS_*` variables are unset/empty | Environment before ERS init/setup | Export required ERS variables before initializing ERS |
+| ERS setup fails with multiple protobuf endpoints | ERS env vars define different `protobufstream(url:port)` targets by severity | Parsed ERS vars across severities | Use one shared endpoint in Python setup path |
+| Message not emitted to expected handler | Requested handler not attached or filtered out by routing | Attached handlers and `extra["handlers"]` values | Attach the handler at logger setup and include the right `HandlerType` on the record |
diff --git a/docs/tutorial.md b/docs/tutorial.md
new file mode 100644
index 0000000..2576cc6
--- /dev/null
+++ b/docs/tutorial.md
@@ -0,0 +1,63 @@
+# Getting Started with Logging in DUNE-DAQ
+
+By the end of this tutorial you will have a working logger printing colour-formatted output to your terminal. It should take about five minutes. No prior knowledge of Python logging is assumed — if you want to understand the concepts behind what you're doing, read [Concepts & explanation](./explanation.md) afterwards.
+
+## Prerequisites
+
+Before starting, make sure you have:
+
+- The DUNE DAQ environment loaded (e.g. `dbt-setup-env` or equivalent has been run in your shell)
+- `daqpytools` installed in your environment
+
+## Step 1: Initialize a logger
+
+Initializing a logger instance is simple:
+
+```python
+from daqpytools.logging import get_daq_logger
+test_logger = get_daq_logger(
+ logger_name = "test_logger", # Set as your logger name. Preferably relevant to the module / file you are in
+ log_level = "INFO", # Default level you will transmit at or above. In this case, DEBUG messages will not be transmitted
+ use_parent_handlers = True, # Just keep this True
+
+ ## the rest are whatever handlers you want to attach. Read on for what exists. Rich is your standard TTY logger so the vast majority will be using this
+ rich_handler = True,
+ stream_handlers = False # you don't need this; it's False by default
+)
+```
+
+For now, **see the docstring of `get_daq_logger` for the full list of available handlers and initialisation options.**
+
+This gives you a named logger with a single Rich handler attached, emitting at `INFO` level and above. Loggers in daqpytools are singletons — calling `get_daq_logger` with the same name twice will return the same instance, so it's safe to call this once at module level and reuse it throughout your code.
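The singleton behaviour rests on the standard-library logger registry, where each name maps to a single instance (assuming here that `get_daq_logger` wraps `logging.getLogger` internally). A quick way to convince yourself:

```python
import logging

# The stdlib registry returns the same object for the same name,
# which is what makes repeated lookups with one name safe.
a = logging.getLogger("test_logger")
b = logging.getLogger("test_logger")
assert a is b  # same object, so module-level reuse is safe
```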
+
+## Step 2: Emit your first messages
+
+```python
+test_logger.info("Hello, world!")
+
+test_logger.info(
+ "[dim cyan]Look[/dim cyan] "
+ "[bold green]at[/bold green] "
+ "[bold yellow]all[/bold yellow] "
+ "[bold red]the[/bold red] "
+ "[bold white on red]colours![/bold white on red] "
+)
+```
+
+You should see colour-formatted output in your terminal, something like this:
+
+
+
+The Rich handler supports the full [Rich markup syntax](https://rich.readthedocs.io/en/stable/markup.html) inline in your log messages.
+
+## Next steps
+
+- To explore the full range of available handlers and filters interactively, run the logging demonstrator with the DUNE environments loaded:
+ ```
+ daqpytools-logging-demonstrator
+ ```
+  View the help string to learn more, and read the script itself in the repository to see how it is implemented.
+- To understand *why* logging works the way it does, read the [Concepts & explanation](./explanation.md).
+- To learn how to use specific handlers and filters, see the [How-to guides](./how-to/).
+- For a full API reference, see the [API Ref](https://dune-daq.github.io/daqpytools/APIref).
+- If you are introducing logging to your Python repo, or upgrading an existing implementation, **please** read the [Logging best practices](./how-to/best-practices.md).
diff --git a/docs_anchor/.gitignore b/docs_anchor/.gitignore
new file mode 100644
index 0000000..86d0cb2
--- /dev/null
+++ b/docs_anchor/.gitignore
@@ -0,0 +1,4 @@
+# Ignore everything in this directory
+*
+# Except this file
+!.gitignore
\ No newline at end of file
diff --git a/docs_dev/explanation.md b/docs_dev/explanation.md
new file mode 100644
index 0000000..9c450f6
--- /dev/null
+++ b/docs_dev/explanation.md
@@ -0,0 +1,226 @@
+# Concepts: How the logging system works internally
+
+This page is for developers who want to understand the internals of daqpytools logging — for example, to add a new handler or debug a routing issue.
+
+For user-facing concepts (Python logging fundamentals, streams), see the [user explanation](https://dune-daq-sw.readthedocs.io/en/latest/packages/daqpytools/explanation).
+For implementation recipes, see the how-to guides.
+
+---
+
+## The core idea
+
+In the context of controlling which records to transmit, the logging system's job is to answer one question for every log record:
+
+> **Should this specific handler transmit this specific record right now?**
+
+Everything (registries, strategies, filters, specs) exists to answer that question consistently and without hardcoding destination logic into handler classes.
+
+### General framework
+
+The overall framework is as follows:
+
+- **Handlers only know *how* to emit** (file, terminal, Kafka). They don't know *if* they should.
+- **Records carry metadata** about where they want to go (in `extra["handlers"]`).
+- **Filters (with their strategy) decide eligibility** using that metadata + fallback rules.
+
+This creates a few nice properties:
+
+1. You can change routing per-message without touching config or logger setup.
+2. Global defaults stay consistent even when metadata isn't present.
+3. New handlers/filters can be added without rewriting decision logic in existing code.
+
+### A model for how handlers and messages interact
+
+Think of it as two sets that need to overlap:
+
+- **Handler capability set**: "I'm a RichHandler, so I can handle RichHandler messages" (represented as `HandlerType` values)
+- **Record request set**: "This record wants to go to [Rich, File, Throttle]" (resolved from metadata or defaults)
+
+A handler emits if these overlap:
+
+```
+emit if (handler_ids ∩ allowed_handlers) is non-empty
+```
+
+This is the general model; the rest of the code exists to ensure it holds consistently at runtime.
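The decision can be sketched in a few lines of Python (a simplified stand-in, not the real implementation):

```python
from enum import Enum

class HandlerType(Enum):
    # Simplified stand-in for the real enum in handlerconf.py
    Rich = "rich"
    File = "file"
    Throttle = "throttle"

def should_emit(handler_ids: set[HandlerType], allowed_handlers: set[HandlerType]) -> bool:
    # Emit iff the handler's capability set overlaps the record's request set
    return bool(handler_ids & allowed_handlers)

# A Rich handler sees a record that wants [Rich, File]: overlap, so emit
print(should_emit({HandlerType.Rich}, {HandlerType.Rich, HandlerType.File}))  # True
# The same handler sees a file-only record: no overlap, so drop
print(should_emit({HandlerType.Rich}, {HandlerType.File}))  # False
```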
+
+---
+
+## Core components
+
+This section defines the core components without diving into their interactions yet. For how they interact at runtime, see the [architecture reference](./reference/architecture.md).
+
+### HandlerTypes
+
+`HandlerType` is an enum defined in `handlerconf.py`. It represents anything that can be attached to a logger at the top level. This includes:
+
+- **Output handlers**: `Rich`, `File`, `Lstdout`, `Lstderr`, `Protobufstream`, `Stream` (which is a composite of stdout/stderr)
+- **Logger-level filters**: `Throttle` (logger-attached throttling filter)
+
+Important: `HandleIDFilter` is **not** a `HandlerType`. It's an internal filter attached to each handler to enforce routing decisions.
+
+Every `HandlerType` is a contract. When you use it:
+
+- It's defined as an enum value in `handlerconf.py`
+- It has a corresponding `HandlerSpec` or `FilterSpec` in a registry
+- Records can request it via `extra={"handlers": [HandlerType.Rich, ...]}`
+- Handlers are identified by their `HandlerType` when filtering decides whether to emit
+
+When adding a new handler, pick a `HandlerType` first. Everything else flows from that token.
+
+### StreamType
+
+`StreamType` is another enum in `handlerconf.py`. It marks which logical stream a record belongs to:
+
+- `BASE` (normal/default routing)
+- `OPMON` (monitoring/opmon-related output)
+- `ERS` (Error Reporting System routing)
+
+By default, records route according to `extra["handlers"]` or fallback. But if a record is marked `extra={"stream": StreamType.ERS}`, then `StreamAwareAllowedHandlersStrategy` dispatches to ERS-specific routing logic instead.
+
+This is extensible: you can add new `StreamType` values and teach the strategy dispatcher how to handle them.
+
+### Specs
+
+Defined in `specs.py`, there are two types:
+
+**`HandlerSpec`** describes how to build a handler:
+- `alias`: The `HandlerType` key
+- `handler_class`: The runtime handler class (used to detect existing instances)
+- `factory`: A callable that builds the handler from configuration
+- `fallback_types`: Which `HandlerType` values this handler represents for routing purposes
+- `target_stream`: Optional (for stream-specific handlers like stdout vs stderr)
+
+**`FilterSpec`** describes how to build a logger-level filter:
+- `alias`: The activation `HandlerType` token
+- `filter_class`: The runtime filter class
+- `factory`: A callable that builds the filter
+- `fallback_types`: Default handler types for the filter
+
+Specs are the "source of truth" for what a handler or filter is. When setup code needs to build something, it looks up the spec in a registry.
+
+
+
+### HandleIDFilter
+
+
+
+`HandleIDFilter` is the core enforcement mechanism. Each handler gets one attached to it.
+
+Its job: "Should **this specific handler** emit this record?"
+
+How it works:
+
+1. It knows which handler it's attached to via `handler_ids` (a set of `HandlerType` values)
+2. For each record, it calls the routing strategy to get the `allowed_handlers` set (resolved from `extra["handlers"]` or fallback)
+3. It emits if `handler_ids ∩ allowed_handlers` is non-empty
+
+This implements the set intersection logic from the core idea section. It's the enforcement point where the handler capability set meets the record request set.
+
+### Routing strategies
+
+Defined in `routing.py`, strategies answer: "What `HandlerType` values are allowed for this record?"
+
+**`AllowedHandlersStrategy`** is the abstract base. Implementations:
+
+1. **`DefaultAllowedHandlerStrategy`**:
+ - Uses `record.handlers` if present (explicit routing metadata)
+ - Falls back to `fallback_handlers` set if `record.handlers` is absent or None
+
+2. **`ERSAllowedHandlersStrategy`**:
+ - Reads `record.ers_handlers` dict and `record.levelno` (Python log level)
+ - Maps the level to an ERS severity variable using `level_to_ers_var`
+ - Returns the handler set for that severity
+
+3. **`StreamAwareAllowedHandlersStrategy`**:
+ - Looks at `record.stream`
+ - If `stream == StreamType.ERS`, uses `ERSAllowedHandlersStrategy`
+ - Otherwise uses `DefaultAllowedHandlerStrategy`
+ - This is the primary strategy used by default
+
+The key insight: strategies are pluggable. Different record types can use different resolution logic without changing filter code.
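A minimal sketch of the default resolution and the stream-aware dispatch (class names mirror the real ones, but the bodies here are illustrative simplifications):

```python
import logging

class DefaultStrategySketch:
    """Illustrative simplification of DefaultAllowedHandlerStrategy."""
    def __init__(self, fallback_handlers: set):
        self.fallback_handlers = fallback_handlers

    def get_allowed(self, record: logging.LogRecord) -> set:
        # Explicit per-record metadata wins; otherwise use the fallback policy
        explicit = getattr(record, "handlers", None)
        return set(explicit) if explicit is not None else set(self.fallback_handlers)

class StreamAwareStrategySketch:
    """Illustrative simplification of StreamAwareAllowedHandlersStrategy."""
    def __init__(self, default_strategy, ers_strategy):
        self.default_strategy = default_strategy
        self.ers_strategy = ers_strategy

    def get_allowed(self, record: logging.LogRecord) -> set:
        # Dispatch on the record's stream marker
        if getattr(record, "stream", None) == "ers":  # StreamType.ERS in the real code
            return self.ers_strategy.get_allowed(record)
        return self.default_strategy.get_allowed(record)
```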
+
+### Fallback handlers
+
+**This is the number-one misunderstanding.** Fallback is not a "if all else fails" mechanism. It's the **default routing policy**.
+
+Each handler gets a `fallback_types` set from its `HandlerSpec` (what it defaults to emitting). When you attach a handler, you can override this with the `fallback_handler` parameter. Here's how it works in practice:
+
+```python
+# Start with a clean logger (no handlers)
+from daqpytools.logging import add_handler, HandlerType, get_daq_logger
+log = get_daq_logger(
+ "myapp",
+ log_level="INFO",
+ rich_handler=False, # deliberately don't add handlers, we'll do it manually
+ stream_handlers=False,
+)
+
+# Attach Rich handler with its spec's default fallback (Rich)
+add_handler(log, HandlerType.Rich, use_parent_handlers=True)
+# Rich handler's fallback now = [HandlerType.Rich] (from spec)
+
+# Record 1: no explicit handlers → uses fallback
+log.info("Rich only") # Emits because Rich is in Rich's fallback
+
+# Attach Lstderr with fallback = Unknown (won't emit by default)
+add_handler(
+ log,
+ HandlerType.Lstderr,
+ use_parent_handlers=True,
+ fallback_handler={HandlerType.Unknown} # Override! Now Lstderr won't emit unless explicitly requested
+)
+
+# Record 2: standard message → only Rich emits
+log.critical("Still just rich") # Lstderr drops it (Unknown not in allowed set)
+
+# Record 3: explicit request → both emit
+log.critical("Both now", extra={"handlers": [HandlerType.Rich, HandlerType.Stream]})
+```
+
+The key insight: each handler has its own fallback set, set when the handler is attached. Records check against that fallback (via `HandleIDFilter`) unless `extra["handlers"]` overrides it.
+
+This feature is particularly useful for suppressing ERS-related handlers when they are not explicitly requested.
+
+If routing isn't what you expect, debug:
+
+1. Does the record have explicit `extra["handlers"]`?
+2. If not, what's the fallback set?
+
+### Handler and Filter Registries
+
+The registries live in `handlers.py` and `filters.py`:
+
+- `HANDLER_SPEC_REGISTRY`: Dictionary mapping `HandlerType` → `HandlerSpec`
+- `FILTER_SPEC_REGISTRY`: Dictionary mapping `HandlerType` → `FilterSpec`
+
+These are the "catalog" of all available handler and filter types. When `add_handler(log, HandlerType.Rich)` is called, setup code:
+
+1. Looks up `HandlerType.Rich` in `HANDLER_SPEC_REGISTRY` (`FILTER_SPEC_REGISTRY` for filters)
+2. Calls the factory function to build the handler
+3. Attaches the `HandleIDFilter` with the handler's `fallback_types`
+4. Installs it on the logger
+
+Registries prevent duplicate handlers and centralize construction logic.
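As an illustration of that lookup, build, and attach flow, here is a toy version with invented names (`MiniHandlerSpec`, `MINI_REGISTRY`, `mini_add_handler` are all hypothetical, not the real API):

```python
import logging
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class MiniHandlerSpec:
    # Toy stand-in for HandlerSpec; only the fields used below
    factory: Callable[..., logging.Handler]
    fallback_types: tuple

MINI_REGISTRY = {
    "stream": MiniHandlerSpec(
        factory=lambda **kw: logging.StreamHandler(),
        fallback_types=("stream",),
    ),
}

def mini_add_handler(logger: logging.Logger, key: str, **kwargs) -> None:
    spec = MINI_REGISTRY[key]          # 1. look up the spec
    handler = spec.factory(**kwargs)   # 2. build the handler via its factory
    # 3. the real code attaches a HandleIDFilter built from spec.fallback_types here
    logger.addHandler(handler)         # 4. install on the logger

demo = logging.getLogger("registry_demo")
mini_add_handler(demo, "stream")
```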
+
+### LogHandlerConf
+
+Defined in `handlerconf.py`, this dataclass holds the various streams and their handler configurations.
+
+Key attributes:
+
+- `BASE_CONFIG`: Default handlers for normal (non-ERS, non-OPMON) logging
+- `OPMON_CONFIG`: Handlers for OPMON-related output
+- `ERS`: ERS severity-specific configurations (loaded from environment variables)
+
+`LogHandlerConf` also parses ERS environment variables:
+
+```bash
+DUNEDAQ_ERS_ERROR="throttle,lstdout,protobufstream(monkafka.cern.ch:30092)"
+DUNEDAQ_ERS_WARNING="..."
+# etc.
+```
+
+These are parsed into `ERSPyLogHandlerConf` objects that hold the handler list and optional protobuf endpoint for each severity.
+
+
diff --git a/docs_dev/how-to/add-a-filter.md b/docs_dev/how-to/add-a-filter.md
new file mode 100644
index 0000000..82c0724
--- /dev/null
+++ b/docs_dev/how-to/add-a-filter.md
@@ -0,0 +1,144 @@
+# How to add a new logger-level filter
+
+Logger-level filters are attached to the logger itself (not individual handlers) and run before any handler sees the record.
+
+**Important:** Logger-level filters are **only active when explicitly activated**. A filter only applies its logic if its `HandlerType` is present in the record's allowed handlers set. This prevents filters from unexpectedly activating on all records.
+
+For adding handlers instead, see [How to add a new handler](./add-a-handler.md).
+For how filters interact with the rest of the system, see the [developer explanation](../explanation.md).
+
+---
+
+Let's add a custom filter that only processes records containing specific metadata.
+
+## Implementation
+
+In `filters.py`:
+
+```python
+from filters import BaseHandlerFilter
+from routing import AllowedHandlersStrategy, DefaultAllowedHandlerStrategy
+from handlerconf import HandlerType
+import logging
+
+class MetadataAwareFilter(BaseHandlerFilter):
+ """Only pass records that match specific metadata patterns."""
+
+ def __init__(
+ self,
+ required_keyword: str,
+ fallback_handlers: set[HandlerType],
+ allowed_handlers_strategy: AllowedHandlersStrategy
+ ):
+ super().__init__(fallback_handlers, allowed_handlers_strategy)
+ self.required_keyword = required_keyword
+
+ def filter(self, record: logging.LogRecord) -> bool:
+ """Return False to suppress records missing the keyword in extra."""
+ # Check if we want to apply the filter
+ if not (allowed := self.get_allowed(record)):
+ return False
+ if HandlerType.MetadataAware not in allowed:
+ return True # Filter not active, pass record through
+
+ # Filter is active - apply the metadata check
+ if hasattr(record, self.required_keyword):
+ return True # Pass it through
+ return False # Suppress it
+
+
+def build_metadata_aware_filter(
+ required_keyword: str = "metadata",
+ fallback_handlers: set[HandlerType] | None = None,
+ **extras
+) -> MetadataAwareFilter:
+ """Build a metadata-aware filter.
+
+ Args:
+ required_keyword: Name of the extra field to check for (e.g., "session_id")
+ fallback_handlers: Default allowed handler set
+ **extras: ignored
+
+ Returns:
+ Configured MetadataAwareFilter instance
+ """
+ if fallback_handlers is None:
+ fallback_handlers = set()
+
+ strategy = DefaultAllowedHandlerStrategy(fallback_handlers)
+ return MetadataAwareFilter(required_keyword, fallback_handlers, strategy)
+```
+
+Look at `ThrottleFilter` in `filters.py` to see how this pattern is implemented in practice.
+
+> **Future improvement:** This activation pattern should be standardized to be more universal. For now, each new filter follows the same two-step check: resolve the allowed set, then verify the filter's own type is present before applying logic.
+
+Then define and register it (in `handlerconf.py` and `filters.py`):
+
+```python
+# In handlerconf.py, add to HandlerType enum:
+class HandlerType(Enum):
+ # ... existing ...
+ MetadataAware = "metadata_aware"
+
+# In filters.py, add to registry:
+FILTER_SPEC_REGISTRY[HandlerType.MetadataAware] = FilterSpec(
+ alias=HandlerType.MetadataAware,
+ filter_class=MetadataAwareFilter,
+ factory=build_metadata_aware_filter,
+ fallback_types=(HandlerType.MetadataAware,),
+)
+```
+
+## Using your new filter
+
+```python
+from daqpytools.logging import HandlerType, get_daq_logger, add_handler
+
+log = get_daq_logger(
+ "myapp",
+ rich_handler=True,
+)
+
+# Add the filter (it won't activate yet)
+add_handler(
+ log,
+ HandlerType.MetadataAware,
+ required_keyword="session_id",
+)
+
+# This record passes (filter not active, no explicit MetadataAware requested)
+log.info("Missing metadata")
+
+# This record is filtered (filter is now active via extra, and session_id is missing)
+log.info("No session", extra={"handlers": [HandlerType.MetadataAware]})
+
+# This record passes (filter active but session_id is present)
+log.info("Has metadata", extra={"handlers": [HandlerType.MetadataAware], "session_id": "abc123"})
+```
+
+## Key points about logger-level filters
+
+- They run **before** handlers, so if they reject a record, no handler sees it
+- They **only activate when explicitly requested** via their `HandlerType` in `extra={"handlers": [...]}`
+- They inherit from `BaseHandlerFilter` to participate in routing logic
+- They DON'T need a `HandleIDFilter` (that's for handlers)
+- They're global to the logger, not per-handler
+
+---
+
+## Common patterns
+
+### Filter that uses record metadata
+
+```python
+class SkipLoggingFilter(BaseHandlerFilter):
+ def filter(self, record):
+ # Filters can look at extra metadata
+ if "skip_logging" in record.__dict__:
+ return False
+ return True
+
+# Usage:
+log.info("skip me", extra={"skip_logging": True})
+```
diff --git a/docs_dev/how-to/add-a-handler.md b/docs_dev/how-to/add-a-handler.md
new file mode 100644
index 0000000..aae7d40
--- /dev/null
+++ b/docs_dev/how-to/add-a-handler.md
@@ -0,0 +1,232 @@
+# How to add a new handler
+
+This page walks through adding a new handler to daqpytools.logging from scratch.
+
+**Important:** You'll be editing files in the daqpytools repository itself. The main files you'll work with are:
+- `src/daqpytools/logging/handlerconf.py` — Handler/filter type definitions
+- `src/daqpytools/logging/handlers.py` — Handler implementations and registry
+- `src/daqpytools/logging/filters.py` — Filter implementations and registry
+- `src/daqpytools/logging/logger.py` — Logger setup (optional for Step 6)
+
+For how the system actually works, start with the [developer explanation](../explanation.md).
+For adding filters instead, see [How to add a new filter](./add-a-filter.md).
+
+---
+
+Let's say you want to add a handler that formats and displays log records in the terminal, similar to `FormattedRichHandler` but with a custom initialization name instead of a timezone.
+
+## Step 1: Add a `HandlerType` enum value
+
+In `handlerconf.py`, add your handler to the `HandlerType` enum:
+
+```python
+class HandlerType(Enum):
+ # ... existing handlers ...
+ CustomTerminal = "custom_terminal"
+```
+
+Guidelines:
+
+- Use lowercase, snake_case for the string value (matches ERS token parsing)
+- Choose a name that clearly describes what it does
+
+## Step 2: Implement the handler class and factory function
+
+In `handlers.py`, create your handler:
+
+```python
+import logging
+import sys
+
+class CustomTerminalHandler(logging.StreamHandler):
+ """Emits formatted log records to terminal with a custom name."""
+
+ def __init__(self, name: str):
+ super().__init__(sys.stdout)
+ self.name_label = name
+ # Use a simple formatter that includes the custom name
+ fmt = f"[{name}] %(levelname)-8s | %(message)s"
+ self.setFormatter(logging.Formatter(fmt))
+
+ def emit(self, record: logging.LogRecord):
+ """Emit the record to stdout."""
+ try:
+ msg = self.format(record)
+ self.stream.write(msg + self.terminator)
+ self.flush()
+ except Exception:
+ self.handleError(record)
+
+
+def build_custom_terminal_handler(name: str, **extras) -> CustomTerminalHandler:
+ """Build a custom terminal handler.
+
+ Args:
+ name: Name label to display (e.g., 'APP', 'SERVICE'). Required.
+ **extras: ignored, for compatibility with setup functions
+
+ Returns:
+ Configured CustomTerminalHandler instance
+ """
+ return CustomTerminalHandler(name)
+```
+
+Key points:
+
+- The factory accepts `**extras` for compatibility with logger setup functions
+- The handler only knows *how* to emit (format and write to stdout)
+- Fail clearly if required args are missing — don't silently use defaults
+
+## Step 3: Define a `HandlerSpec`
+
+Create a spec that describes your handler metadata:
+
+```python
+from specs import HandlerSpec
+
+custom_terminal_spec = HandlerSpec(
+ alias=HandlerType.CustomTerminal,
+ handler_class=CustomTerminalHandler,
+ factory=build_custom_terminal_handler,
+ fallback_types=(HandlerType.CustomTerminal,),
+)
+```
+
+`fallback_types` is the set of routing tokens this handler responds to. When the routing strategy resolves the allowed handlers, any token in `fallback_types` will trigger this handler to emit.
+
+Best practice: `fallback_types` should include every `HandlerType` token that should route to this handler. Typically this is just the `alias` itself (e.g., `(HandlerType.CustomTerminal,)`), but if your handler serves multiple roles (like `Stream` handling both `Lstdout` and `Lstderr`), include all applicable tokens in the tuple.
+
+## Step 4: Register the spec
+
+Add it to the registry:
+
+```python
+from handlers import HANDLER_SPEC_REGISTRY
+
+HANDLER_SPEC_REGISTRY[HandlerType.CustomTerminal] = custom_terminal_spec
+```
+
+The registry is the "catalog" of all handler types. When setup code needs to build a handler, it looks it up here.
+
+## Step 5: Test it locally
+
+Create a clean logger and attach your handler:
+
+```python
+import logging
+from daqpytools.logging import HandlerType, get_daq_logger, add_handler
+
+# Start with a clean logger (no handlers attached)
+log = get_daq_logger(
+ "test_app",
+ log_level="DEBUG",
+ rich_handler=False, # don't add any handlers yet
+ stream_handlers=False,
+)
+
+# Attach your new handler
+add_handler(
+ log,
+ HandlerType.CustomTerminal,
+ use_parent_handlers=True,
+ name="MYAPP", # passed to the factory
+)
+
+# Test: standard logging (uses fallback)
+log.info("This goes to MYAPP") # Emits (CustomTerminal in fallback)
+
+# Test: explicit routing (overrides fallback)
+log.info("Also to MYAPP", extra={"handlers": [HandlerType.CustomTerminal]})
+
+# Test: explicit exclusion (no fallback, empty override)
+log.info("Silently dropped", extra={"handlers": [HandlerType.Unknown]})
+```
+
+## Step 6: (Optional) Add it to `get_daq_logger`
+
+If you want users to enable your handler directly via `get_daq_logger(...)`, add a parameter:
+
+**In `logger.py`, update `get_daq_logger`:**
+
+```python
+def get_daq_logger(
+ logger_name: str,
+ log_level: int | str = logging.NOTSET,
+ use_parent_handlers: bool = True,
+ rich_handler: bool = False,
+ file_handler_path: str | None = None,
+ stream_handlers: bool = False,
+ ers_kafka_session: str | None = None,
+ throttle: bool = False,
+ custom_terminal_name: str | None = None, # Add this
+ **extras: object
+) -> logging.Logger:
+ # ... docstring ...
+
+ fallback_handlers: set[HandlerType] = set()
+ if rich_handler:
+ fallback_handlers.add(HandlerType.Rich)
+ if file_handler_path:
+ fallback_handlers.add(HandlerType.File)
+ if stream_handlers:
+ fallback_handlers.add(HandlerType.Stream)
+ if custom_terminal_name: # Add this
+ fallback_handlers.add(HandlerType.CustomTerminal)
+ if ers_kafka_session:
+ fallback_handlers.add(HandlerType.Protobufstream)
+ if throttle:
+ fallback_handlers.add(HandlerType.Throttle)
+
+ add_handlers_from_types(
+ logger,
+ fallback_handlers,
+ use_parent_handlers,
+ fallback_handlers,
+ path=file_handler_path,
+ session_name=ers_kafka_session,
+ name=custom_terminal_name, # Pass it through
+ **extras
+ )
+```
+
+Now users can attach it directly:
+
+```python
+from daqpytools.logging import get_daq_logger
+
+log = get_daq_logger(
+ "myapp",
+ log_level="INFO",
+ rich_handler=True,
+ custom_terminal_name="SERVICE", # Your handler is now in fallback
+)
+
+log.info("Goes to both Rich and CustomTerminal") # Emits to both
+```
+
+Best practices:
+- If your handler needs a **single required argument** (like file path), make that the parameter: `custom_terminal_path: str | None`
+- If it needs a **single optional boolean flag**, use: `custom_terminal_enabled: bool = False`
+- If it needs **multiple required arguments**, use a boolean flag and fail in `add_handler` if args are missing
+- Always pass handler-specific args via `**extras` in the logger setup
+
+## Step 7: (Optional) Add ERS support (if applicable)
+
+If your handler should be controllable via ERS environment variables:
+
+1. Your `HandlerType` string already works as an ERS token (e.g., `custom_terminal`)
+2. The parser will recognize it automatically (if it's in the enum)
+3. Users can enable it via:
+
+```bash
+DUNEDAQ_ERS_ERROR="custom_terminal,throttle,lstdout"
+```
+
+Note: Beyond daqpytools, this requires adding the variable to OKS configurations for your DAQ system.
+
+## Common mistakes to avoid
+
+- **Don't hardcode routing logic in the handler.** It should only know *how* to emit. Filters decide *when*.
+- **Don't silently ignore missing required args.** Fail fast with a clear error.
+- **Don't skip `**extras` in the factory.** Accept it even if you don't use it — other parts of the system rely on this.
+- **Don't register the same handler type twice.** The system prevents duplicate handlers on a single logger.
diff --git a/docs_dev/how-to/debug-routing.md b/docs_dev/how-to/debug-routing.md
new file mode 100644
index 0000000..d1a05bb
--- /dev/null
+++ b/docs_dev/how-to/debug-routing.md
@@ -0,0 +1,103 @@
+# How to debug routing issues
+
+When logs appear wrong or not at all, use this workflow.
+
+---
+
+## General debugging workflow
+
+### 1. What handlers are attached?
+
+```python
+print(log.handlers) # List all handlers
+for h in log.handlers:
+ print(f"{h}: filters={h.filters}") # Check their filters
+```
+
+### 2. What's the allowed set for your record?
+
+- Does it have explicit `extra={"handlers": [...]}`?
+- If not, what's the fallback set from logger setup?
+- If `stream == StreamType.ERS`: Is the severity level mapped? (DEBUG doesn't map)
+
+### 3. Do handler IDs match?
+
+- Each handler has a `HandleIDFilter` with `handler_ids` set
+- Is that set in the allowed set?
+- If not, the record is silently dropped
+
+### 4. Are logger-level filters rejecting it?
+
+- Less common, but `ThrottleFilter` might suppress repeated messages
+- Check filter state and condition
+
+### 5. For ERS specifically
+
+- Env vars set and properly formatted?
+- Severity level mapped correctly?
+- Verify handler appears in the resolved allowed set
+
+If all else fails, add debug statements in `HandleIDFilter.filter()` to print `handler_ids`, `allowed`, and the intersection result.
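One way to get that visibility without editing the library is to wrap the installed filters from your application code (a debugging shim, not part of daqpytools):

```python
import logging

def trace_filters(log: logging.Logger) -> None:
    """Wrap each handler-level filter so every routing decision is printed."""
    for handler in log.handlers:
        for flt in handler.filters:
            original = flt.filter
            def traced(record, _orig=original, _flt=flt):
                result = _orig(record)
                print(f"{_flt.__class__.__name__}: {record.getMessage()!r} -> {result}")
                return result
            flt.filter = traced  # instance attribute shadows the bound method
```

Call `trace_filters(log)` right after setup, emit the problem record, and see which filters pass or drop it.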
+
+---
+
+## Debugging a new handler or filter
+
+### Check handler attachment
+
+```python
+log = get_daq_logger("test", rich_handler=True)
+add_handler(log, HandlerType.MyCustomService, endpoint="http://localhost")
+
+# What's actually attached?
+for h in log.handlers:
+ print(f"Handler: {h}")
+ for f in h.filters:
+ print(f" Filter: {f}")
+```
+
+### Check the allowed set
+
+```python
+# Emit a test record with explicit handlers
+log.info("test", extra={"handlers": [HandlerType.Rich, HandlerType.MyCustomService]})
+
+# Now add debug output to HandleIDFilter.filter() to see what's happening:
+# "Handler IDs: {self.handler_ids}, Allowed: {allowed}, Match: {bool(overlap)}"
+```
+
+### Check fallback behavior
+
+```python
+# Build with your handler in fallback
+log = get_daq_logger("test", my_custom_service_enabled=True)
+
+# This should use fallback (if you configured it)
+log.info("Should go to custom service")
+
+# This overrides fallback
+log.info("Only rich", extra={"handlers": [HandlerType.Rich]})
+```
+
+### Test ERS parsing (if applicable)
+
+```python
+import os
+os.environ["DUNEDAQ_ERS_INFO"] = "my_custom_service,lstdout"
+
+from daqpytools.logging import LogHandlerConf
+conf = LogHandlerConf(init_ers=True)
+# Did ERS correctly parse your handler type?
+```
+
+### For filters, check order
+
+```python
+log = get_daq_logger("test", rich_handler=True)
+add_handler(log, HandlerType.Throttle)
+add_handler(log, HandlerType.ModuleSuppress, suppressed_modules=["test"])
+
+# Logger-level filters run first, then handlers
+print(f"Logger filters: {log.filters}")
+print(f"Handler filters: {log.handlers[0].filters}")
+```
diff --git a/docs_dev/img/Filter_activity.png b/docs_dev/img/Filter_activity.png
new file mode 100644
index 0000000..f796ced
Binary files /dev/null and b/docs_dev/img/Filter_activity.png differ
diff --git a/docs_dev/img/Filter_class.png b/docs_dev/img/Filter_class.png
new file mode 100644
index 0000000..2c91559
Binary files /dev/null and b/docs_dev/img/Filter_class.png differ
diff --git a/docs_dev/img/LHC_activity.png b/docs_dev/img/LHC_activity.png
new file mode 100644
index 0000000..7934a6a
Binary files /dev/null and b/docs_dev/img/LHC_activity.png differ
diff --git a/docs_dev/img/LHC_class.png b/docs_dev/img/LHC_class.png
new file mode 100644
index 0000000..1fb1756
Binary files /dev/null and b/docs_dev/img/LHC_class.png differ
diff --git a/docs_dev/img/Specs_class.png b/docs_dev/img/Specs_class.png
new file mode 100644
index 0000000..12d7179
Binary files /dev/null and b/docs_dev/img/Specs_class.png differ
diff --git a/docs_dev/index.md b/docs_dev/index.md
new file mode 100644
index 0000000..35581a9
--- /dev/null
+++ b/docs_dev/index.md
@@ -0,0 +1,21 @@
+# Logging in DUNE-DAQ — Documentation
+
+Welcome to the developer documentation for daqpytools.
+
+This documentation is split into two sections depending on your role:
+
+- **User docs** — for anyone writing Python applications that use logging. This is found in the [official DUNE DAQ SW website](https://dune-daq-sw.readthedocs.io/en/latest/packages/daqpytools/).
+- **Developer docs** — for anyone extending the logging system itself (new handlers, new filters).
+
+
+
+## Developer documentation
+
+| Page | What it's for |
+|---|---|
+| [Concepts & explanation](./explanation.md) | The routing model, component definitions, fallback logic |
+| [Architecture reference](./reference/architecture.md) | Logger init flow and record flow at runtime |
+| [How to add a handler](./how-to/add-a-handler.md) | Step-by-step guide to adding a new handler type |
+| [How to add a filter](./how-to/add-a-filter.md) | Step-by-step guide to adding a new logger-level filter |
+| [How to debug routing](./how-to/debug-routing.md) | Systematic workflow for diagnosing routing issues |
+| [Common patterns](./reference/patterns.md) | Quick-reference recipes for handlers and filters |
diff --git a/docs_dev/readme_toplevel.md b/docs_dev/readme_toplevel.md
new file mode 100644
index 0000000..25abe85
--- /dev/null
+++ b/docs_dev/readme_toplevel.md
@@ -0,0 +1,37 @@
+# Logging in DUNE-DAQ — Documentation
+
+Welcome to the logging documentation for daqpytools (as of 5.6.0).
+
+This documentation is split into two sections depending on your role:
+
+- **User docs** — for anyone writing Python applications that use logging
+- **Developer docs** — for anyone extending the logging system itself (new handlers, new filters)
+
+---
+
+## User documentation
+
+| Page | What it's for |
+|---|---|
+| [Tutorial](./user/tutorial.md) | Get a working logger running from scratch |
+| [Concepts & explanation](./user/explanation.md) | Understand how Python logging and daqpytools work |
+| [How to use handlers and filters](./user/how-to/use-handlers.md) | Descriptions and examples for each handler and filter |
+| [How to route messages](./user/how-to/route-messages.md) | Direct records to specific handlers using HandlerType and LogHandlerConf |
+| [How to add handlers at runtime](./user/how-to/add-handlers-at-runtime.md) | Attach handlers after logger creation; pass kwargs |
+| [How to configure ERS](./user/how-to/configure-ers.md) | Attach and use ERS handlers |
+| [Best practices](./user/how-to/best-practices.md) | Recommended patterns for structuring logging in your application |
+| [Troubleshooting](./user/reference/troubleshooting.md) | Common symptoms, causes, and fixes |
+| [API reference](https://dune-daq.github.io/daqpytools/APIref) | Auto-generated kwargs, types, and defaults for all public APIs |
+
+---
+
+## Developer documentation
+
+| Page | What it's for |
+|---|---|
+| [Concepts & explanation](./dev/explanation.md) | The routing model, component definitions, fallback logic |
+| [Architecture reference](./dev/reference/architecture.md) | Logger init flow and record flow at runtime |
+| [How to add a handler](./dev/how-to/add-a-handler.md) | Step-by-step guide to adding a new handler type |
+| [How to add a filter](./dev/how-to/add-a-filter.md) | Step-by-step guide to adding a new logger-level filter |
+| [How to debug routing](./dev/how-to/debug-routing.md) | Systematic workflow for diagnosing routing issues |
+| [Common patterns](./dev/reference/patterns.md) | Quick-reference recipes for handlers and filters |
diff --git a/docs_dev/reference/architecture.md b/docs_dev/reference/architecture.md
new file mode 100644
index 0000000..248c163
--- /dev/null
+++ b/docs_dev/reference/architecture.md
@@ -0,0 +1,64 @@
+# Architecture reference: initialization and record flow
+
+This page documents the runtime behaviour of the logging system — how loggers are initialised and how records flow through the system.
+
+For the static structure of components (HandlerType, Specs, Filters, Strategies), see the [developer explanation](../explanation.md).
+
+---
+
+## Logger initialisation flow
+
+When you build a logger, the setup logic determines which handlers to attach:
+
+1. **You call** `get_daq_logger(...)` with flags like `rich_handler=True`, `stream_handlers=True`, etc.
+2. **Logger setup** resolves which handlers to attach based on your flags
+3. **For each handler type**, setup:
+ - Looks it up in `HANDLER_SPEC_REGISTRY` and `FILE_SPEC_REGISTRY`
+ - Calls the factory function to build it
+ - Attaches a `HandleIDFilter` with the handler's routing identity
+ - Installs it on the logger
+4. **The fallback set** is composed from all enabled handlers. This becomes the default allowed set for records that don't carry explicit `extra["handlers"]`.
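+
+In plain-stdlib terms, the resolution loop above can be sketched roughly as follows. The real `HANDLER_SPEC_REGISTRY`, factories, and `HandleIDFilter` live in `daqpytools.logging`; everything below is a simplified, illustrative stand-in:
+
```python
import logging

class HandleIDFilterSketch(logging.Filter):
    """Simplified stand-in for HandleIDFilter: carries a routing identity."""

    def __init__(self, handler_ids: set, fallback: set) -> None:
        super().__init__()
        self.handler_ids = handler_ids
        self.fallback = fallback

    def filter(self, record: logging.LogRecord) -> bool:
        # Records carrying extra={"handlers": [...]} override the fallback set
        allowed = set(getattr(record, "handlers", self.fallback))
        return bool(self.handler_ids & allowed)

def build_logger(name: str, enabled: list, registry: dict) -> logging.Logger:
    log = logging.getLogger(name)
    log.setLevel(logging.INFO)
    fallback = set(enabled)                 # step 4: fallback = all enabled handlers
    for handler_type in enabled:
        handler = registry[handler_type]()  # steps 3a-3b: look up spec, call factory
        handler.addFilter(HandleIDFilterSketch({handler_type}, fallback))  # step 3c
        log.addHandler(handler)             # step 3d: install on the logger
    return log

registry = {"stdout": lambda: logging.StreamHandler()}
log = build_logger("init_sketch", ["stdout"], registry)
log.info("routed via the fallback set")
```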
+
+If you add ERS handlers via `setup_daq_ers_logger(...)`, the process is similar but with a critical difference:
+
+1. **ERS env variables** are parsed (e.g., `DUNEDAQ_ERS_ERROR=...`)
+2. **Handler types are extracted** from each severity's config (e.g., `throttle`, `lstdout`, `protobufstream(...)`)
+3. **Handlers are built and attached** with `fallback_handler={HandlerType.Unknown}`
+ - This is the key: ERS handlers **won't emit by default**
+ - They only emit when explicitly requested by ERS severity routing (see `ERSAllowedHandlersStrategy`)
+ - This prevents accidental spillover into standard logging
+4. **Records are routed to ERS handlers only via ERS severity mapping**
+ - A record marked `extra={"stream": StreamType.ERS}` triggers ERS-aware routing
+ - The routing strategy maps Python level → ERS severity variable → handler set
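+
+A hedged sketch of what step 2's extraction might look like: splitting the env-var value on commas while respecting parentheses, so tokens like `protobufstream(...)` stay intact. The real parser lives in `LogHandlerConf`; the exact token format here is an assumption for illustration:
+
```python
def parse_ers_config(value: str) -> list:
    """Split e.g. 'throttle,lstdout,protobufstream(host:9092)' into tokens."""
    tokens, current, depth = [], [], 0
    for ch in value:
        if ch == "," and depth == 0:
            # Comma at top level ends the current token
            tokens.append("".join(current).strip())
            current = []
            continue
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        current.append(ch)
    if current:
        tokens.append("".join(current).strip())
    return tokens

print(parse_ers_config("throttle,lstdout,protobufstream(host:9092)"))
# → ['throttle', 'lstdout', 'protobufstream(host:9092)']
```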
+
+
+
+---
+
+## Record flow
+
+When you call `log.info("something")`, here's the actual flow:
+
+1. **Python's logging creates a `LogRecord`** with your message, severity, and any `extra` metadata
+
+2. **Logger-level filters run first** (e.g., `ThrottleFilter`):
+ - If any filter returns `False`, the record stops here
+ - It never reaches handlers
+ - This is where global concerns like throttling happen
+
+3. **Record is offered to each attached handler**
+
+4. **Each handler's `HandleIDFilter` decides** whether to emit:
+ - The filter calls the routing strategy to resolve `allowed_handlers`:
+ - If `extra["handlers"]` is present, use it
+ - Otherwise use the fallback set
+ - If `stream == StreamType.ERS`, use ERS-specific routing
+ - The filter checks: `handler_ids ∩ allowed_handlers`
+ - Non-empty = emit; empty = drop the record
+
+5. **Format and emit** (if the record passed the filter):
+ - The handler formats it and emits (to file, stdout, kafka, etc.)
+
+This two-stage filtering is key: logger-level filters decide "should ANY handler see this?" while handler-level filters decide "should THIS handler see this?"
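+
+To make the two-stage distinction concrete, here is a minimal stdlib-only sketch; the filter names are illustrative, not the daqpytools classes:
+
```python
import logging

emitted = []

class DropSecrets(logging.Filter):
    """Logger-level filter: 'should ANY handler see this?'"""

    def filter(self, record: logging.LogRecord) -> bool:
        return "secret" not in record.getMessage()

class OnlyErrors(logging.Filter):
    """Handler-level filter: 'should THIS handler see this?'"""

    def filter(self, record: logging.LogRecord) -> bool:
        return record.levelno >= logging.ERROR

class ListHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        emitted.append(record.getMessage())

log = logging.getLogger("two_stage_demo")
log.setLevel(logging.INFO)
log.addFilter(DropSecrets())      # stage 1: runs before any handler is offered the record
handler = ListHandler()
handler.addFilter(OnlyErrors())   # stage 2: per-handler decision
log.addHandler(handler)

log.info("secret value")   # dropped at stage 1, never reaches the handler
log.info("routine info")   # passes stage 1, dropped at stage 2
log.error("real problem")  # passes both stages and is emitted
```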
+
+
diff --git a/docs_dev/reference/patterns.md b/docs_dev/reference/patterns.md
new file mode 100644
index 0000000..bba47e6
--- /dev/null
+++ b/docs_dev/reference/patterns.md
@@ -0,0 +1,56 @@
+# Common handler and filter patterns
+
+Quick reference recipes for specific handler and filter types.
+
+---
+
+## Handler that wraps an existing service client
+
+```python
+class ExistingServiceHandler(logging.Handler):
+ """Wrap an existing service client."""
+
+ def __init__(self, client):
+ super().__init__()
+ self.client = client # e.g., a sentry client
+
+ def emit(self, record):
+ self.client.send_record(self.format(record), level=record.levelname)
+```
+
+## Handler that writes JSON
+
+```python
+import json
+
+class JSONHandler(logging.Handler):
+    def emit(self, record):
+        entry = {
+            "timestamp": record.created,
+            "level": record.levelname,
+            "message": record.getMessage(),
+            "module": record.module,
+        }
+        # Serialize and write the entry wherever you need (file, socket, ...);
+        # stdout is used here for illustration
+        print(json.dumps(entry))
+```
+
+## Filter that uses record metadata
+
+```python
+class MetadataAwareFilter(BaseHandlerFilter):
+ def filter(self, record):
+ # Filters can look at extra metadata
+ if "skip_logging" in record.__dict__:
+ return False
+ return True
+
+# Usage:
+log.info("skip me", extra={"skip_logging": True})
+```
+
+---
+
+## Next steps
+
+- Look at existing handlers in `handlers.py` for patterns
+- Look at `ThrottleFilter` for a complex filter example
+- Check test files in `tests/logging/` for usage examples
+- Add your handler/filter, submit a PR!
diff --git a/docs_dev/requirements.txt b/docs_dev/requirements.txt
new file mode 100644
index 0000000..4cecee1
--- /dev/null
+++ b/docs_dev/requirements.txt
@@ -0,0 +1,10 @@
+mkdocs
+mkdocs-material
+mkdocs-mermaid2-plugin
+mkdocs-gen-files
+mkdocs-literate-nav
+mkdocs-section-index
+mkdocstrings
+mkdocstrings-python
+pymdown-extensions
+mkdocs-exclude
\ No newline at end of file
diff --git a/docs_dev/utils/generate_logging_autodocs.py b/docs_dev/utils/generate_logging_autodocs.py
new file mode 100644
index 0000000..dc22c75
--- /dev/null
+++ b/docs_dev/utils/generate_logging_autodocs.py
@@ -0,0 +1,534 @@
+#!/usr/bin/env python3
+"""Generate targeted logging API docs from daqpytools registries.
+
+This script generates Markdown pages that are easy to import into an existing
+MkDocs (or other Markdown-based) documentation tree.
+
+Generated content includes:
+- handler registry index + per-handler-type pages
+- filter registry index + per-filter-type pages
+- logger API reference pages for selected public entry points
+
+Passing ``--json-manifest`` additionally emits a machine-readable JSON manifest
+alongside the Markdown.
+"""
+
+from __future__ import annotations
+
+import argparse
+import json
+import logging
+import shutil
+import sys
+from dataclasses import dataclass
+from pathlib import Path
+from types import ModuleType
+from typing import Any
+
+_USING_MKDOCS_GEN_FILES = False
+
+
+@dataclass
+class DocSpec:
+ """Documentation-ready representation of a handler/filter spec."""
+
+ type_name: str
+ type_value: str
+ kind: str
+ class_fqdn: str
+ class_name: str
+ factory_fqdn: str
+ factory_name: str
+ fallback_types: list[str]
+ summary: str
+
+
+LOGGER_APIS: list[tuple[str, str]] = [
+ ("get_daq_logger", "daqpytools.logging.logger.get_daq_logger"),
+ ("setup_root_logger", "daqpytools.logging.logger.setup_root_logger"),
+ (
+ "setup_daq_ers_logger",
+ "daqpytools.logging.logger.setup_daq_ers_logger",
+ ),
+]
+
+
+
+
+def _install_erskafka_stub() -> None:
+ """Install a lightweight erskafka stub so imports are docs-safe.
+
+ The logging handlers module imports `ERSKafkaLogHandler` at module import
+ time. In docs environments where `erskafka` is not installed, we provide a
+ minimal stub to keep introspection and generation functional.
+ """
+ if "erskafka" in sys.modules and "erskafka.ERSKafkaLogHandler" in sys.modules:
+ return
+
+ erskafka_pkg = ModuleType("erskafka")
+ erskafka_submodule = ModuleType("erskafka.ERSKafkaLogHandler")
+
+ class StubERSKafkaLogHandler(logging.Handler):
+ """Fallback docs-time stub for ERSKafkaLogHandler."""
+
+ def emit(self, record: logging.LogRecord) -> None:
+ del record
+
+ erskafka_submodule.ERSKafkaLogHandler = StubERSKafkaLogHandler
+
+ sys.modules["erskafka"] = erskafka_pkg
+ sys.modules["erskafka.ERSKafkaLogHandler"] = erskafka_submodule
+
+
+def _repo_root_from_script(script_path: Path) -> Path:
+ """Infer repository root (`daqpytools`) from script location."""
+ return script_path.parents[2]
+
+
+def _ensure_import_path(repo_root: Path) -> None:
+ """Ensure `src/` is importable for local execution without pip install."""
+ src_path = repo_root / "src"
+ if str(src_path) not in sys.path:
+ sys.path.insert(0, str(src_path))
+
+
+def _summary_from_docstring(obj: Any) -> str: # noqa: ANN401
+ """Extract the first non-empty docstring line as a short summary."""
+ doc = getattr(obj, "__doc__", None)
+ if not doc:
+ return "No description provided."
+
+ for line in doc.splitlines():
+ stripped = line.strip()
+ if stripped:
+ return stripped
+ return "No description provided."
+
+
+def _symbol_fqdn(symbol: Any) -> tuple[str, str]: # noqa: ANN401
+ """Return a symbol fully-qualified name and short symbol name."""
+ name = symbol.__name__
+ fqdn = f"{symbol.__module__}.{name}"
+ return fqdn, name
+
+
+def _type_slug(type_name: str) -> str:
+ """Stable slug for a handler/filter type."""
+ return type_name.lower()
+
+
+def _to_text_list(items: tuple[Any, ...] | list[Any] | set[Any]) -> list[str]:
+ """Convert enum-ish values to readable strings."""
+ values: list[str] = []
+ for item in items:
+ value = getattr(item, "value", None)
+ if isinstance(value, str):
+ values.append(value)
+ else:
+ values.append(str(item))
+ return values
+
+
+def _build_docspecs(
+ registry: dict[Any, Any],
+ *,
+ kind: str,
+ class_attr: str,
+) -> dict[str, list[DocSpec]]:
+ """Convert a spec registry to grouped ``DocSpec`` entries.
+
+ Args:
+ registry: Mapping from type enum to spec object(s).
+ kind: Documentation kind label (``handler`` or ``filter``).
+ class_attr: Name of the spec attribute holding the runtime class
+ (for example ``handler_class`` or ``filter_class``).
+
+ Returns:
+ A mapping keyed by type name with one or more ``DocSpec`` entries.
+ """
+ grouped: dict[str, list[DocSpec]] = {}
+
+ for item_type, raw_specs in registry.items():
+ type_name = item_type.name
+ type_value = item_type.value
+ specs = raw_specs if isinstance(raw_specs, tuple) else (raw_specs,)
+
+ grouped[type_name] = []
+ for spec in specs:
+ runtime_class = getattr(spec, class_attr)
+ class_fqdn, class_name = _symbol_fqdn(runtime_class)
+ factory_fqdn, factory_name = _symbol_fqdn(spec.factory)
+
+ grouped[type_name].append(
+ DocSpec(
+ type_name=type_name,
+ type_value=type_value,
+ kind=kind,
+ class_fqdn=class_fqdn,
+ class_name=class_name,
+ factory_fqdn=factory_fqdn,
+ factory_name=factory_name,
+ fallback_types=_to_text_list(spec.fallback_types),
+ summary=_summary_from_docstring(spec.factory),
+ )
+ )
+
+ return grouped
+
+
+def _render_index(
+ specs_by_type: dict[str, list[DocSpec]],
+ *,
+ kind: str,
+ title: str | None = None,
+ type_label: str | None = None,
+ source_registry: str | None = None,
+) -> str:
+ """Render an index page for one selected spec kind.
+
+ By default, ``kind`` selects standard values for title, type label,
+ and source registry. Any of those values can be overridden explicitly.
+
+ Args:
+ specs_by_type: Mapping of type name to one or more ``DocSpec`` entries.
+ kind: ``handler`` or ``filter``.
+ title: Optional custom page title.
+ type_label: Optional custom table header for type column.
+ source_registry: Optional custom source-of-truth registry label.
+ """
+ index_kind_defaults: dict[str, dict[str, str]] = {
+ "handler": {
+ "title": "Handlers reference",
+ "type_label": "HandlerType",
+ "source_registry": "HANDLER_SPEC_REGISTRY",
+ },
+ "filter": {
+ "title": "Filters reference",
+ "type_label": "FilterType",
+ "source_registry": "FILTER_SPEC_REGISTRY",
+ },
+ }
+
+
+ defaults = index_kind_defaults.get(kind)
+ if defaults is None:
+ err_msg = f"Unsupported index kind: {kind}"
+ raise ValueError(err_msg)
+
+ resolved_title = title or defaults["title"]
+ resolved_type_label = type_label or defaults["type_label"]
+ resolved_source_registry = source_registry or defaults["source_registry"]
+
+ lines = [
+ f"# {resolved_title}",
+ "",
+ f"Auto-generated from `{resolved_source_registry}`.",
+ "",
+ f"| {resolved_type_label} | Page | Specs |",
+ "|---|---|---|",
+ ]
+
+ for type_name in sorted(specs_by_type):
+ slug = _type_slug(type_name)
+ specs = specs_by_type[type_name]
+ type_value = specs[0].type_value
+ page = f"[{type_name}](./{slug}.md)"
+ lines.append(f"| `{type_name}` (`{type_value}`) | {page} | {len(specs)} |")
+
+ lines.extend(
+ [
+ "",
+ "Factory signatures and kwargs are rendered from mkdocstrings directives",
+ "on each per-type page.",
+ ]
+ )
+ return "\n".join(lines) + "\n"
+
+
+def _render_type_page(type_name: str, kind: str, specs: list[DocSpec]) -> str:
+ """Render per-type page for handler/filter docs."""
+ kind_title = "Handler" if kind == "handler" else "Filter"
+ lines = [
+ f"# {type_name} {kind_title.lower()}",
+ "",
+ f"This page is auto-generated for `{kind_title}Type.{type_name}`.",
+ "",
+ ]
+
+ if len(specs) > 1:
+ lines.extend(
+ [
+ "> This type resolves to multiple specs/factories.",
+ "",
+ ]
+ )
+
+ for idx, spec in enumerate(specs, start=1):
+ section_title = f"Spec {idx}" if len(specs) > 1 else "Spec"
+ lines.extend(
+ [
+ f"## {section_title}",
+ "",
+ f"- {kind_title} type: `{spec.type_name}` (`{spec.type_value}`)",
+ f"- {kind_title} class: `{spec.class_name}`",
+ f"- {kind_title} class FQDN: `{spec.class_fqdn}`",
+ f"- Factory: `{spec.factory_name}`",
+ f"- Factory FQDN: `{spec.factory_fqdn}`",
+ (
+ "- Fallback types: "
+ + (", ".join(f"`{value}`" for value in spec.fallback_types)
+ if spec.fallback_types else "None")
+ ),
+ f"- Summary: {spec.summary}",
+ "",
+ "### Factory API",
+ "",
+ f"::: {spec.factory_fqdn}",
+ "",
+ ]
+ )
+
+ return "\n".join(lines) + "\n"
+
+
+def _render_logger_api_page(title: str, fqdn: str) -> str:
+ """Render one logger API page with mkdocstrings directive."""
+ lines = [
+ f"# {title}",
+ "",
+ f"::: {fqdn}",
+ "",
+ ]
+ return "\n".join(lines)
+
+
+def _render_logging_index() -> str:
+ """Render overview page for generated logging docs."""
+ lines = [
+ "# Logging generated reference",
+ "",
+ "This section is generated and intended for inclusion in existing docs.",
+ "",
+ "## Logger APIs",
+ "",
+ "- [get_daq_logger](./get_daq_logger.md)",
+ "- [setup_root_logger](./setup_root_logger.md)",
+ "- [setup_daq_ers_logger](./setup_daq_ers_logger.md)",
+ "",
+ "## Registries",
+ "",
+ "- [Handlers reference](./handlers/index.md)",
+ "- [Filters reference](./filters/index.md)",
+ "",
+ ]
+ return "\n".join(lines)
+
+
+def _write_text(path: Path, content: str) -> None:
+ """Write UTF-8 text either to filesystem or mkdocs virtual files."""
+ if _USING_MKDOCS_GEN_FILES:
+ import mkdocs_gen_files
+
+ relative_path = path.as_posix()
+ with mkdocs_gen_files.open(relative_path, "w") as fd:
+ fd.write(content)
+ return
+
+ path.parent.mkdir(parents=True, exist_ok=True)
+ path.write_text(content, encoding="utf-8")
+
+
+def _to_manifest(
+ handler_specs: dict[str, list[DocSpec]],
+ filter_specs: dict[str, list[DocSpec]],
+) -> dict[str, Any]:
+ """Build a JSON-serializable manifest for generated docs."""
+
+ def serialize(grouped: dict[str, list[DocSpec]]) -> dict[str, list[dict[str, Any]]]:
+ payload: dict[str, list[dict[str, Any]]] = {}
+ for type_name, specs in grouped.items():
+ payload[type_name] = [
+ {
+ "type_name": spec.type_name,
+ "type_value": spec.type_value,
+ "kind": spec.kind,
+ "class_name": spec.class_name,
+ "class_fqdn": spec.class_fqdn,
+ "factory_name": spec.factory_name,
+ "factory_fqdn": spec.factory_fqdn,
+ "fallback_types": spec.fallback_types,
+ "summary": spec.summary,
+ }
+ for spec in specs
+ ]
+ return payload
+
+ return {
+ "logger_apis": [
+ {"name": name, "fqdn": fqdn}
+ for name, fqdn in LOGGER_APIS
+ ],
+ "handlers": serialize(handler_specs),
+ "filters": serialize(filter_specs),
+ }
+
+
+def generate(output_root: Path, emit_json_manifest: bool, clean: bool) -> list[Path]:
+ """Generate all targeted docs files and return paths written."""
+ from daqpytools.logging.filters import FILTER_SPEC_REGISTRY
+ from daqpytools.logging.handlers import HANDLER_SPEC_REGISTRY
+
+ if clean and output_root.exists():
+ shutil.rmtree(output_root)
+
+ handlers_dir = output_root / "handlers"
+ filters_dir = output_root / "filters"
+ written: list[Path] = []
+
+ handler_specs = _build_docspecs(
+ HANDLER_SPEC_REGISTRY,
+ kind="handler",
+ class_attr="handler_class",
+ )
+ filter_specs = _build_docspecs(
+ FILTER_SPEC_REGISTRY,
+ kind="filter",
+ class_attr="filter_class",
+ )
+
+ handlers_index = handlers_dir / "index.md"
+ _write_text(
+ handlers_index,
+ _render_index(
+ handler_specs,
+ kind="handler",
+ ),
+ )
+ written.append(handlers_index)
+
+ for type_name in sorted(handler_specs):
+ slug = _type_slug(type_name)
+ page_path = handlers_dir / f"{slug}.md"
+ content = _render_type_page(type_name, "handler", handler_specs[type_name])
+ _write_text(page_path, content)
+ written.append(page_path)
+
+ filters_index = filters_dir / "index.md"
+ _write_text(
+ filters_index,
+ _render_index(
+ filter_specs,
+ kind="filter",
+ ),
+ )
+ written.append(filters_index)
+
+ for type_name in sorted(filter_specs):
+ slug = _type_slug(type_name)
+ page_path = filters_dir / f"{slug}.md"
+ content = _render_type_page(type_name, "filter", filter_specs[type_name])
+ _write_text(page_path, content)
+ written.append(page_path)
+
+ for title, fqdn in LOGGER_APIS:
+ api_path = output_root / f"{title}.md"
+ _write_text(api_path, _render_logger_api_page(title, fqdn))
+ written.append(api_path)
+
+ logging_index = output_root / "index.md"
+ _write_text(logging_index, _render_logging_index())
+ written.append(logging_index)
+
+ if emit_json_manifest:
+ manifest = _to_manifest(handler_specs, filter_specs)
+ manifest_path = output_root / "manifest.json"
+ _write_text(manifest_path, json.dumps(manifest, indent=2, sort_keys=True))
+ written.append(manifest_path)
+
+ return written
+
+
+def _parse_args() -> argparse.Namespace:
+ """Parse CLI arguments."""
+ parser = argparse.ArgumentParser(
+ description=(
+ "Generate targeted logging docs from daqpytools registries "
+ "(handlers, filters, logger APIs)."
+ )
+ )
+ parser.add_argument(
+ "--output-root",
+ default="docs_dev/APIref",
+ help="Output directory for generated files (default: docs_dev/APIref).",
+ )
+ parser.add_argument(
+ "--repo-root",
+ default=None,
+ help="Path to daqpytools repository root. Auto-detected if omitted.",
+ )
+ parser.add_argument(
+ "--json-manifest",
+ action="store_true",
+ help="Also emit the machine-readable JSON manifest alongside the Markdown.",
+ )
+ parser.add_argument(
+ "--clean",
+ action="store_true",
+ help="Remove output directory before regeneration.",
+ )
+ return parser.parse_args()
+
+
+def main() -> int:
+ """CLI entry point."""
+ args = _parse_args()
+ script_path = Path(__file__).resolve()
+ repo_root = (
+ Path(args.repo_root).resolve()
+ if args.repo_root
+ else _repo_root_from_script(script_path)
+ )
+
+ _ensure_import_path(repo_root)
+ _install_erskafka_stub()
+
+ output_root = Path(args.output_root)
+ if not output_root.is_absolute():
+ output_root = repo_root / output_root
+
+ written = generate(
+ output_root=output_root,
+ emit_json_manifest=args.json_manifest,
+ clean=args.clean,
+ )
+
+ print(f"Generated {len(written)} files under: {output_root}") # noqa: T201
+ for path in sorted(written):
+ print(f" - {path.relative_to(repo_root)}") # noqa: T201
+ return 0
+
+
+def _run_from_mkdocs_gen_files() -> None:
+ """Run generation when this module is loaded by mkdocs-gen-files."""
+ global _USING_MKDOCS_GEN_FILES
+
+ script_path = Path(__file__).resolve()
+ repo_root = _repo_root_from_script(script_path)
+
+ _ensure_import_path(repo_root)
+ _install_erskafka_stub()
+
+ _USING_MKDOCS_GEN_FILES = True
+ output_root = Path("APIref")
+ generate(
+ output_root=output_root,
+ emit_json_manifest=False,
+ clean=False,
+ )
+
+
+if __name__ != "__main__":
+ _run_from_mkdocs_gen_files()
+
+
+if __name__ == "__main__":
+ raise SystemExit(main())
\ No newline at end of file
diff --git a/docs_dev/utils/mirror_docs.py b/docs_dev/utils/mirror_docs.py
new file mode 100644
index 0000000..12788e1
--- /dev/null
+++ b/docs_dev/utils/mirror_docs.py
@@ -0,0 +1,88 @@
+#!/usr/bin/env python3
+"""Mirror docs/ and docs_dev/ into a unified virtual FS for MkDocs.
+
+Virtual output structure:
+ README.md ← from docs_dev/readme_toplevel.md
+ user/ ← from docs/
+ dev/ ← from docs_dev/ (excluding utils/,
+ requirements.txt, readme_toplevel.md)
+"""
+
+from __future__ import annotations
+
+from pathlib import Path
+
+DOCS_DEV_EXCLUDE = {
+ "utils",
+ "requirements.txt",
+ "readme_toplevel.md",
+}
+
+
+def _repo_root_from_script(script_path: Path) -> Path:
+ """Infer repository root from script location."""
+ return script_path.parents[2]
+
+
+def _mirror_into_virtual_fs(
+ source_dir: Path,
+ virtual_prefix: str,
+ exclude: set[str] | None = None,
+) -> None:
+ """Read all files from source_dir and write them into the mkdocs virtual FS.
+
+ Args:
+ source_dir: Real directory to mirror.
+ virtual_prefix: Virtual FS prefix to write files under.
+ exclude: Top-level names within source_dir to skip.
+ """
+ import mkdocs_gen_files
+
+ exclude = exclude or set()
+
+ for source_file in source_dir.rglob("*"):
+ if not source_file.is_file():
+ continue
+
+ relative = source_file.relative_to(source_dir)
+
+ if relative.parts[0] in exclude:
+ continue
+
+ virtual_path = f"{virtual_prefix}/{relative.as_posix()}"
+ with mkdocs_gen_files.open(virtual_path, "wb") as fd:
+ fd.write(source_file.read_bytes())
+
+
+def _run_from_mkdocs_gen_files() -> None:
+ """Run when this module is loaded by mkdocs-gen-files."""
+ import mkdocs_gen_files
+
+ script_path = Path(__file__).resolve()
+ repo_root = _repo_root_from_script(script_path)
+
+ # 1. readme_toplevel.md → virtual readme.md
+ readme_source = repo_root / "docs_dev" / "readme_toplevel.md"
+ if readme_source.exists():
+ with mkdocs_gen_files.open("README.md", "wb") as fd:
+ fd.write(readme_source.read_bytes())
+ else:
+ print(f"Warning: '{readme_source}' does not exist, skipping.") # noqa: T201
+
+ # 2. docs/ → virtual user/
+ docs_dir = repo_root / "docs"
+ if docs_dir.exists():
+ _mirror_into_virtual_fs(docs_dir, "user")
+ else:
+ print(f"Warning: '{docs_dir}' does not exist, skipping.") # noqa: T201
+
+ # 3. docs_dev/ → virtual dev/ (with exclusions)
+ docs_dev_dir = repo_root / "docs_dev"
+ if docs_dev_dir.exists():
+ _mirror_into_virtual_fs(docs_dev_dir, "dev", exclude=DOCS_DEV_EXCLUDE)
+ else:
+ print(f"Warning: '{docs_dev_dir}' does not exist, skipping.") # noqa: T201
+
+
+if __name__ != "__main__":
+ _run_from_mkdocs_gen_files()
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 0000000..4347162
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,58 @@
+site_name: daqpytools Documentation
+
+repo_name: DUNE-DAQ/daqpytools
+repo_url: https://github.com/DUNE-DAQ/daqpytools
+
+docs_dir: docs_anchor
+
+theme:
+ name: material
+ palette:
+ # Palette toggle for light/dark mode
+ - scheme: default
+ toggle:
+ icon: material/lightbulb
+ name: Switch to dark mode
+ - scheme: slate
+ toggle:
+ icon: material/lightbulb-outline
+ name: Switch to light mode
+ icon:
+ repo: fontawesome/brands/git-alt
+
+
+markdown_extensions:
+ - admonition
+ - pymdownx.highlight:
+ anchor_linenums: true
+ line_spans: __span
+ pygments_lang_class: true
+ - pymdownx.inlinehilite
+ - pymdownx.snippets
+ - pymdownx.superfences
+
+plugins:
+ - search
+ - exclude:
+ glob:
+ - Developer-Wiki/*
+ - gen-files:
+ scripts:
+ - docs_dev/utils/generate_logging_autodocs.py
+ - docs_dev/utils/mirror_docs.py
+ # - literate-nav:
+ # nav_file: README.md
+ # - section-index
+ - mkdocstrings:
+ default_handler: python
+ handlers:
+ python:
+ options:
+ show_source: true
+ show_root_heading: true
+ show_category_heading: true
+ merge_init_into_class: true
+ paths: [.]
+
+# hooks:
+# - docs/utils/mkdocs_hooks.py
\ No newline at end of file
diff --git a/pyproject.toml b/pyproject.toml
index 4fc91a3..a993593 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -29,6 +29,17 @@ dev = [
"pytest-cov",
]
test = ["pytest", "pytest-mypy", "pytest-cov", "types-pytz"]
+docs = [
+ "mkdocs",
+ "mkdocs-material",
+ "mkdocstrings",
+ "mkdocstrings-python",
+ "mkdocs-gen-files",
+ "mkdocs-literate-nav",
+ "mkdocs-section-index",
+ "mkdocs-exclude",
+
+]
[project.scripts]
daqpytools-logging-demonstrator = "daqpytools.apps.logging_demonstrator:main"
diff --git a/src/daqpytools/logging/filters.py b/src/daqpytools/logging/filters.py
index 69f2903..558fa91 100644
--- a/src/daqpytools/logging/filters.py
+++ b/src/daqpytools/logging/filters.py
@@ -267,7 +267,23 @@ def _build_throttle_filter(
time_limit: int = 30,
**extras: object,
) -> logging.Filter:
- """Build a throttle filter from extras."""
+ """Build a throttle filter.
+
+ Args:
+ fallback_handlers: Handler types used as fallback routing context.
+ initial_treshold: Number of first occurrences to emit before applying
+ suppression logic.
+ time_limit: Throttle time window in seconds.
+ **extras: Additional forwarded keyword arguments. Ignored by this
+ factory.
+
+ Returns:
+ A configured ``ThrottleFilter`` instance.
+
+ Notes:
+ The keyword name is currently ``initial_treshold`` to match the
+ existing function signature.
+ """
del extras
return ThrottleFilter(
fallback_handlers=fallback_handlers,
@@ -287,7 +303,15 @@ def _build_throttle_filter(
}
def get_filter_spec(handler_types: HandlerType) -> FilterSpec | None:
- """Return the filter specification for a handler type."""
+ """Return the filter specification for a handler type.
+
+ Args:
+ handler_types: Filter-capable ``HandlerType`` alias.
+
+ Returns:
+ The matching ``FilterSpec`` if present in ``FILTER_SPEC_REGISTRY``;
+ otherwise ``None``.
+ """
return FILTER_SPEC_REGISTRY.get(handler_types)
def add_filter(
@@ -296,7 +320,20 @@ def add_filter(
fallback_handlers : set[HandlerType]| None,
**extras: object,
) -> None:
- """Add a logger filter according to the spec."""
+ """Add a logger filter resolved from ``FILTER_SPEC_REGISTRY``.
+
+ Args:
+ log: Logger receiving the filter instance.
+ handler_type: Filter-capable ``HandlerType`` to resolve.
+ fallback_handlers: Explicit fallback handler set passed to the filter
+ factory. If ``None``, the filter spec ``fallback_types`` are used.
+ **extras: Additional keyword arguments forwarded to the resolved filter
+ factory. These values typically come from
+ ``get_daq_logger(..., **extras)``.
+
+ Returns:
+ None.
+ """
spec = get_filter_spec(handler_type)
effective_fallback_handlers = (
@@ -315,7 +352,19 @@ def add_throttle_filter(
log: logging.Logger,
fallback_handlers: set[HandlerType] | None = None,
) -> None:
- """Add the Throttle filter to the logger."""
+ """Add the throttle filter to a logger.
+
+ This is a convenience wrapper over ``add_filter`` for
+ ``HandlerType.Throttle``.
+
+ Args:
+ log: Logger receiving the throttle filter.
+ fallback_handlers: Optional fallback handler set used by throttle
+ routing. If omitted, registry defaults are used.
+
+ Returns:
+ None.
+ """
add_filter(
log,
HandlerType.Throttle,
diff --git a/src/daqpytools/logging/handlers.py b/src/daqpytools/logging/handlers.py
index cae9e29..0226667 100644
--- a/src/daqpytools/logging/handlers.py
+++ b/src/daqpytools/logging/handlers.py
@@ -148,7 +148,19 @@ def logger_or_ancestors_have_handler(
#### Handlers ####
def _build_rich_handler(width: int | None = None, **_: object) -> logging.Handler:
- """Building the rich handler with any extras."""
+ """Build the rich console handler.
+
+ This factory is invoked from handler resolution in ``get_daq_logger`` and
+ receives forwarded ``**extras``.
+
+ Args:
+ width: Optional console width used by ``FormattedRichHandler``.
+ If ``None``, terminal width is auto-detected via ``get_width``.
+ **_: Additional forwarded keyword arguments. Ignored by this factory.
+
+ Returns:
+ The configured rich logging handler.
+ """
real_width = width if width is not None else get_width()
return FormattedRichHandler(width=real_width)
@@ -160,6 +172,14 @@ def _build_rich_handler(width: int | None = None, **_: object) -> logging.Handle
)
def _build_stdout_handler(**_: object) -> logging.Handler:
+ """Build a stdout stream handler.
+
+ Args:
+ **_: Additional forwarded keyword arguments. Ignored by this factory.
+
+ Returns:
+ A ``logging.StreamHandler`` writing to ``sys.stdout``.
+ """
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(LoggingFormatter())
return handler
@@ -173,6 +193,15 @@ def _build_stdout_handler(**_: object) -> logging.Handler:
)
def _build_stderr_handler(**_: object) -> logging.Handler:
+ """Build a stderr stream handler.
+
+ Args:
+ **_: Additional forwarded keyword arguments. Ignored by this factory.
+
+ Returns:
+ A ``logging.StreamHandler`` writing to ``sys.stderr`` with
+ ``ERROR`` level.
+ """
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(LoggingFormatter())
handler.setLevel(logging.ERROR)
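The stdout/stderr factory pair splits output by severity: the stderr handler is capped at `ERROR`, while the stdout handler carries everything (so errors appear on both streams). A minimal stdlib sketch of the same split, using `StringIO` stand-ins for the real streams and the default formatter instead of `LoggingFormatter`:

```python
import io
import logging

out, err = io.StringIO(), io.StringIO()

stdout_handler = logging.StreamHandler(out)
stderr_handler = logging.StreamHandler(err)
stderr_handler.setLevel(logging.ERROR)  # stderr only sees ERROR and above

log = logging.getLogger("split-demo")
log.setLevel(logging.INFO)
log.propagate = False
log.addHandler(stdout_handler)
log.addHandler(stderr_handler)

log.info("routine message")   # reaches the stdout stream only
log.error("failure message")  # reaches both streams
```

With real `sys.stdout`/`sys.stderr` streams this reproduces the routing the two factories set up.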
@@ -187,6 +216,19 @@ def _build_stderr_handler(**_: object) -> logging.Handler:
)
def _build_file_handler(path: str | None = None, **_: object) -> logging.Handler:
+ """Build a file handler.
+
+ Args:
+ path: Path to the output log file. This is typically forwarded from
+ ``get_daq_logger(..., file_handler_path=...)`` as ``path``.
+ **_: Additional forwarded keyword arguments. Ignored by this factory.
+
+ Returns:
+ A configured ``logging.FileHandler``.
+
+ Raises:
+ ValueError: If ``path`` is not provided.
+ """
if not path:
err_msg = "path is required for file handler"
raise ValueError(err_msg)
@@ -208,7 +250,21 @@ def _build_erskafka_handler(
address : str = "monkafka.cern.ch:30092",
ers_app_name : str | None = None,
**_: object) -> logging.Handler:
-
+ """Build an ERS Kafka handler.
+
+ Args:
+ session_name: ERS session name used by the Kafka handler.
+ topic: Kafka topic for ERS log messages.
+ address: Kafka broker address in ``host:port`` format.
+ ers_app_name: Optional ERS application name associated with messages.
+ **_: Additional forwarded keyword arguments. Ignored by this factory.
+
+ Returns:
+ A configured ``ERSKafkaLogHandler`` instance.
+
+ Raises:
+ ERSInitError: If the handler cannot be initialized.
+ """
try:
return ERSKafkaLogHandler(
session = session_name,
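The factory wraps handler construction in a `try` block and surfaces failures as `ERSInitError`. As a runnable sketch of that pattern only (the `ERSInitError` and `FakeKafkaHandler` classes below are stand-ins, not the real `erskafka` types), the shape is: attempt construction, then re-raise any low-level failure as one library-level error:

```python
import logging


class ERSInitError(RuntimeError):
    """Stand-in for the library's handler-initialisation error."""


class FakeKafkaHandler(logging.Handler):
    """Hypothetical handler that fails when no broker address is given."""

    def __init__(self, address: str) -> None:
        if not address:
            raise ConnectionError("no broker address")
        super().__init__()


def build_kafka_handler(address: str) -> logging.Handler:
    # Convert low-level construction failures into one library-level error,
    # mirroring the try/except in _build_erskafka_handler.
    try:
        return FakeKafkaHandler(address)
    except ConnectionError as exc:
        raise ERSInitError(f"could not initialise Kafka handler: {exc}") from exc
```

Chaining with `from exc` preserves the original traceback, so the broker-level cause remains visible to whoever catches `ERSInitError`.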
diff --git a/src/daqpytools/logging/logger.py b/src/daqpytools/logging/logger.py
index 875098b..0501bcb 100644
--- a/src/daqpytools/logging/logger.py
+++ b/src/daqpytools/logging/logger.py
@@ -71,28 +71,44 @@ def get_daq_logger(
throttle: bool = False,
**extras: object
) -> logging.Logger:
- """C'tor for the default logging instances.
+ """Create or reuse a configured DAQ logger.
+
+ Handler and filter installation is driven by the selected flags and
+ resolved through the handler/filter registries. Additional keyword
+ arguments are forwarded to the underlying factory functions.
+
Args:
- logger_name (str): Name of the logger.
- log_level (int | str): Log level for the logger.
- use_parent_handlers (bool): Whether to use parent handlers.
- rich_handler (bool): Whether to add a rich handler.
- file_handler_path (str | None): Path to the file handler log file. If None, no
- file handler is added.
- stream_handlers (bool): Whether to add both stdout and stderr stream handlers.
- ers_kafka_session (str | None): ERS session name used to add an ERS
- protobuf handler. If None, no ERS protobuf handler is added.
- throttle (bool): Whether to add the throttle filter or not. Note, does not mean
- outputs are filtered by default! See ThrottleFilter for details.
- **extras (object): Extra keyword arguments forwarded to handler builders.
+ logger_name: Name of the logger to create or retrieve.
+ log_level: Logging level for the logger and its non-stderr handlers.
+ use_parent_handlers: If ``True``, logger propagation remains enabled.
+ rich_handler: Enable ``HandlerType.Rich``.
+ file_handler_path: Optional file path enabling ``HandlerType.File``.
+ stream_handlers: Enable ``HandlerType.Stream`` (stdout + stderr specs).
+ ers_kafka_session: Optional ERS session enabling
+ ``HandlerType.Protobufstream``.
+ throttle: Enable ``HandlerType.Throttle`` filter installation.
+ **extras: Additional keyword arguments forwarded to handler/filter
+ factories via ``add_handlers_from_types(..., **extras)``.
+
+ Common forwarded kwargs include:
+ - ``width`` -> ``_build_rich_handler``
+ - ``path`` -> ``_build_file_handler`` (internally mapped from
+ ``file_handler_path``)
+ - ``session_name`` -> ``_build_erskafka_handler`` (internally mapped
+ from ``ers_kafka_session``)
+ - ``topic``, ``address``, ``ers_app_name`` ->
+ ``_build_erskafka_handler``
+ - ``initial_treshold``, ``time_limit`` ->
+ ``_build_throttle_filter``
+
+ Unsupported kwargs may be ignored by factories that accept ``**_``.
+
Returns:
- logging.Logger: Configured logger instance.
+ Configured ``logging.Logger`` instance.
Raises:
- LoggerSetupError: If the configuration is invalid.
-
+ LoggerSetupError: If a logger with the same name already exists but
+ with a conflicting handler configuration.
"""
rich_traceback_install(show_locals=True, width=get_width())
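The docstring's `Raises` clause says a second `get_daq_logger` call with the same name but a conflicting handler configuration fails. A stdlib-only sketch of that create-or-reuse guard (the `get_logger_checked` helper and `LoggerSetupError` class here are hypothetical illustrations, not daqpytools code, and the real check compares actual installed handlers rather than a stored tag):

```python
import logging


class LoggerSetupError(RuntimeError):
    """Stand-in for daqpytools' logger setup error."""


def get_logger_checked(name: str, handler_types: frozenset[str]) -> logging.Logger:
    """Create a logger once; reject a later call with a different handler set."""
    log = logging.getLogger(name)
    existing = getattr(log, "_handler_types", None)
    if existing is None:
        log._handler_types = handler_types  # first configuration wins
        return log
    if existing != handler_types:
        raise LoggerSetupError(f"logger {name!r} already configured differently")
    return log


first = get_logger_checked("daq.demo", frozenset({"rich"}))
same = get_logger_checked("daq.demo", frozenset({"rich"}))
```

Because `logging.getLogger` returns a process-wide singleton per name, detecting and rejecting a conflicting reconfiguration is safer than silently stacking duplicate handlers.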