KohlerHECTOR/interpreter-py

Implementation of Interpretable and Editable Programmatic Tree Policies for Reinforcement Learning (Kohler, Delfosse, et al., 2024).

Score: 27 / 100 (Experimental)

This tool helps machine learning engineers and researchers distill complex neural network policies into understandable decision trees. It takes a pre-trained expert policy (for example, a neural network trained with Stable Baselines3) and an environment as input, and outputs a decision tree that mimics the expert's actions, making the behavior easier to analyze and modify. This is particularly useful for reinforcement learning applications that require transparent decision-making.
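The core idea can be sketched as imitation learning: query the expert for actions on a set of observations, then fit a small decision tree to those (observation, action) pairs. The sketch below is illustrative only; it uses a toy rule-based `expert_policy` in place of a real Stable Baselines3 model and a plain scikit-learn tree in place of interpreter-py's own distillation loop, so all names here are assumptions, not the library's API.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for a trained expert (e.g. an SB3 policy's predict()).
# Real usage would call model.predict(obs) on a loaded neural policy.
def expert_policy(obs):
    return int(obs[0] > 0.5)  # action 1 when the first feature exceeds 0.5

# Collect a dataset of observations and the expert's chosen actions.
rng = np.random.default_rng(0)
observations = rng.uniform(0.0, 1.0, size=(500, 2))
actions = np.array([expert_policy(o) for o in observations])

# Distill the expert into a shallow, human-readable decision tree.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(observations, actions)

# Agreement between the tree and the expert on the collected data.
agreement = tree.score(observations, actions)
```

In practice the interesting tension is depth versus fidelity: a shallower tree is easier to read and edit by hand, but may agree with the expert less often.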

No commits in the last 6 months.

Use this if you need to explain or edit the learned behavior of a complex reinforcement learning agent by converting it into a simpler, human-readable decision tree.

Not ideal if your primary goal is to achieve state-of-the-art performance in reinforcement learning tasks, as the simplified policy might not match the expert's full capability.

reinforcement-learning-explainability policy-distillation interpretable-AI robotics-control agent-behavior-analysis
Badges: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 15
Forks: 1
Language: Python
License: MIT
Last pushed: Sep 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/KohlerHECTOR/interpreter-py"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.