Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Our guardrail as humans is our conscience. It acts like a second brain that filters what our faster, more primal brain wants to do. AI safety might require something similar: a second system whose job is to monitor and filter outputs that don’t align with the principles we want the AI to follow.
youtube AI Moral Status 2026-03-04T16:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyXT3xyxO58fmJJm3t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzFXm9xjI61-yzgkpR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugx13Y9mHcoom1AOEGt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyvsjnSO8pt0noQnz14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyy1Qyk0nOnRml1KP14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxbKZkeCsEOUnKjCe94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw5Qz7sS_8BtoMvevV4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwdN02-9aUpMceUfE54AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwgUl3RBmW333u78I94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxt5cVb1OzEKKLH4PF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
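The raw response is a JSON array of coding records, one per comment, keyed by comment id. A minimal Python sketch of how the per-comment "Coding Result" above can be recovered from this array (the two records shown are copied from the response; the lookup logic is an assumption, not the tool's actual implementation):

```python
import json

# Raw LLM response: a JSON array of coding records, one object per comment.
# Only two records from the response are reproduced here for brevity.
raw = '''[
  {"id": "ytc_UgyXT3xyxO58fmJJm3t4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzFXm9xjI61-yzgkpR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]'''

# Index the records by comment id so one comment's coding can be looked up.
records = json.loads(raw)
by_id = {record["id"]: record for record in records}

# The coding for the highlighted comment above matches the second record.
coding = by_id["ytc_UgzFXm9xjI61-yzgkpR4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → developer deontological regulate approval
```

Each of the four dimensions (responsibility, reasoning, policy, emotion) is a flat string label, so a simple dict lookup per comment id is all that is needed to join the LLM output back to the original comments.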