Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI isn’t a yes/no decision. The question is where control actually exists.

Most systems:
→ optimize outputs
→ scale capability
→ improve prediction

But none of that guarantees control at execution.

The boundary is simple: if a system can act without independently verifiable conditions at the moment of action, then it’s operating on inherited assumptions, not present reality. That’s where risk enters.

AI should be implemented where:
→ actions require proof at execution
→ conditions are re-established in real time
→ refusal is structurally possible

AI should not be implemented where:
→ outputs can act on stale or assumed validity
→ authority is carried forward without re-verification
→ execution cannot be blocked when proof fails

This isn’t about capability. It’s about whether the system can say “no” when it matters. No present-state proof → no execution.
youtube AI Governance 2026-04-24T14:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwCKkFitvRPyKJS6nR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxzJjiqdpdtgWCVjNR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgylfFqJibsP-ZYskYd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugxg-upCa51egrEFkeB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzuXBO_aWL1FZEFEeF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzSllxQTQwqyJydZWJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgysLZl-6Gk5hQ5dq4V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwixabSKdmq_ru3X354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJuuNC6prztMjSoI54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-uAdLLWOhBxh1n-R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
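The raw response above is a plain JSON array, so it can be validated and aggregated directly. A minimal sketch, using only the Python standard library and the data shown; the `DIMENSIONS` set is an assumption inferred from the record keys, not part of the tool's documented schema.

```python
import json
from collections import Counter

# The raw LLM response shown above: one coded record per YouTube comment.
raw = """[
{"id":"ytc_UgwCKkFitvRPyKJS6nR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxzJjiqdpdtgWCVjNR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgylfFqJibsP-ZYskYd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugxg-upCa51egrEFkeB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzuXBO_aWL1FZEFEeF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSllxQTQwqyJydZWJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgysLZl-6Gk5hQ5dq4V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwixabSKdmq_ru3X354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJuuNC6prztMjSoI54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-uAdLLWOhBxh1n-R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]"""

records = json.loads(raw)

# Assumed schema: every record carries exactly these five coding dimensions.
DIMENSIONS = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(set(r) == DIMENSIONS for r in records)

# Tally the responsibility attributions across the batch of ten comments.
responsibility = Counter(r["responsibility"] for r in records)
print(responsibility.most_common())
```

On this batch the tally shows `none` as the most frequent responsibility label, which matches the records above; the same `Counter` pattern applies unchanged to the `reasoning`, `policy`, and `emotion` dimensions.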