Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
There must be a way to set A.I to only give ideas what to do and not have direct control? Or ask A.I what way to make it safe?
youtube AI Governance 2026-04-23T14:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwCKkFitvRPyKJS6nR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgxzJjiqdpdtgWCVjNR4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgylfFqJibsP-ZYskYd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugxg-upCa51egrEFkeB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzuXBO_aWL1FZEFEeF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzSllxQTQwqyJydZWJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgysLZl-6Gk5hQ5dq4V4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwixabSKdmq_ru3X354AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJuuNC6prztMjSoI54AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz-uAdLLWOhBxh1n-R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
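The raw response is a JSON array with one object per comment, keyed by comment id, with the four coding dimensions as fields. A minimal sketch of how such a response can be parsed and looked up by id (the field names follow the response above; the two-entry sample data and the `index_codings` helper are illustrative, not part of any real pipeline):

```python
import json

# Sample raw LLM response, abbreviated to two entries; the field names
# mirror the response shown above.
raw = '''[
  {"id": "ytc_Ugxg-upCa51egrEFkeB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwCKkFitvRPyKJS6nR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"}
]'''

def index_codings(raw_response: str) -> dict:
    """Parse the model output and index the codings by comment id."""
    entries = json.loads(raw_response)
    return {e["id"]: {k: v for k, v in e.items() if k != "id"} for e in entries}

codings = index_codings(raw)
print(codings["ytc_Ugxg-upCa51egrEFkeB4AaABAg"])
# → {'responsibility': 'developer', 'reasoning': 'deontological',
#    'policy': 'liability', 'emotion': 'fear'}
```

Indexing by id makes it easy to reconcile each batch response entry with the single-comment "Coding Result" view above.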