How we think about AI governance in HR
The adverse-decision rule, the prompt log, and why humans stay in the loop.
The easiest way to build a demo-worthy HR AI is to let it make decisions. The easiest way to build a trustworthy HR AI is to refuse to.
Those two goals pull in opposite directions — so we wrote down where we stand.
The adverse-decision rule
The Lenavio AI Assistant will never, autonomously, do any of the following:
- Terminate, demote, or discipline an employee
- Approve or deny a leave request, expense, or offer letter
- Close a complaint case
- Change compensation or title
- Flag an employee as a compliance risk without surfacing the underlying evidence
If an action materially affects someone's employment or record, a human has to take it. The AI can draft, suggest, summarize, and surface — but the mouse-click stays human.
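To make the rule concrete, here's a minimal sketch of how a gate like this could be enforced in code. Everything in it is illustrative: the action names, the `ActionRequest` shape, and the `execute` function are assumptions for the sketch, not Lenavio's actual API.

```ts
// Hypothetical enforcement gate for the adverse-decision rule.
// Action names and types are illustrative, not a real Lenavio API.

// Actions that materially affect someone's employment or record.
const ADVERSE_ACTIONS = new Set([
  "terminate", "demote", "discipline",
  "approve_leave", "deny_leave", "approve_expense", "approve_offer",
  "close_complaint", "change_compensation", "change_title",
]);

interface ActionRequest {
  action: string;
  subjectEmployeeId: string;
  // Present only when a named human has explicitly confirmed the action.
  humanApproval?: { approverId: string; confirmedAt: Date };
}

// The assistant can propose anything; executing an adverse action
// without a human approval record fails closed.
function execute(req: ActionRequest): void {
  if (ADVERSE_ACTIONS.has(req.action) && !req.humanApproval) {
    throw new Error(
      `"${req.action}" is an adverse decision; a human must take it. ` +
        "The assistant may only draft, suggest, summarize, or surface.",
    );
  }
  // ...dispatch to the relevant module...
}
```

The useful property of this shape is that the guarantee doesn't depend on the model behaving itself: the execution path refuses adverse actions that lack an approval record, no matter what the assistant asked for.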
What it does instead
Three things, mostly:
- Surface. Pull signal out of cross-module data. "Which managers have approved >40 hours overtime three weeks running?" — that's a question worth answering, and it used to take a BI analyst.
- Draft. PIP templates, policy updates, complaint responses, offer letters. A human edits. A human sends.
- Explain. When a pattern is worth flagging, the AI shows its work — the underlying query, the rows, the time window. No black-box vibes. (A sketch of what that can look like follows this list.)
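Here's one way "shows its work" could look as a structured finding, using the overtime question from above. The `Finding` schema and the query's table and column names are assumptions made up for the example, not Lenavio's actual output format.

```ts
// Illustrative shape for an explained finding; the schema and the
// query's table/column names are assumptions, not a real wire format.
interface Finding {
  summary: string;                  // the human-readable flag
  timeWindow: { from: string; to: string };
  query: string;                    // the underlying query, verbatim
  rows: Record<string, unknown>[];  // the evidence rows themselves
}

// "Which managers have approved >40 hours overtime three weeks running?"
const overtimeFinding: Finding = {
  summary: "2 managers approved >40h of overtime in each of the last 3 weeks",
  timeWindow: { from: "2024-05-06", to: "2024-05-26" },
  query: `
    SELECT manager_id
    FROM (
      SELECT manager_id, week_start
      FROM overtime_approvals
      WHERE week_start IN ('2024-05-06', '2024-05-13', '2024-05-20')
      GROUP BY manager_id, week_start
      HAVING SUM(approved_hours) > 40
    ) AS per_week
    GROUP BY manager_id
    HAVING COUNT(*) = 3`,
  rows: [{ manager_id: "m-0214" }, { manager_id: "m-1187" }],
};
```

The point is that the evidence travels with the flag: an admin can read the time window, rerun the query, and check the rows instead of trusting a summary.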
The prompt log
Every prompt and every output is logged, per-tenant, for 90 days (configurable). Logs are searchable by your admins, redactable per our DPA, and never used to train external models.
If you're asking "can you show us the logs?" — the answer is yes, any time.
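As a sketch of what a single entry might carry, here's a hypothetical `PromptLogEntry` shape. The field names are ours for illustration, not the actual log schema.

```ts
// Sketch of a per-tenant prompt-log entry. Field names are assumptions
// for illustration, not the actual log schema.
interface PromptLogEntry {
  tenantId: string;   // logs are partitioned per tenant
  timestamp: string;  // ISO 8601
  userId: string;
  prompt: string;
  output: string;
  redacted: boolean;  // set when an admin redacts under the DPA
  expiresAt: string;  // timestamp + retention window
}

const DEFAULT_RETENTION_DAYS = 90; // configurable per tenant

function makeLogEntry(
  tenantId: string,
  userId: string,
  prompt: string,
  output: string,
  retentionDays: number = DEFAULT_RETENTION_DAYS,
): PromptLogEntry {
  const now = new Date();
  const expires = new Date(now.getTime() + retentionDays * 86_400_000);
  return {
    tenantId,
    timestamp: now.toISOString(),
    userId,
    prompt,
    output,
    redacted: false,
    expiresAt: expires.toISOString(),
  };
}
```

Stamping the expiry onto every record at write time makes the retention window mechanical (a store-level TTL can enforce it) rather than a cleanup job someone has to remember to run.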
Why this matters now
The HR software industry is entering its AI moment. A lot of products will ship a lot of features quickly. Some of those features will be fine. Some will make decisions they shouldn't.
We'd rather be the boring one that keeps humans in the loop than the magical one that quietly takes them out.