Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You can't guarantee safety, just like you can't prove anything (given you can't know every variable) - you can only disprove something. So it's impossible to prove AI is safe.
youtube · AI Governance · 2025-12-11T03:1…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzLaYMnzbpQaXPV7g54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz5z_Yg7AsBOZeZhih4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzcZEpd0EdO7X2z1114AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxyD_oaV2YtjV5kvap4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyp3aTu5sVIOqXCoDN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxgog57TM0Kv23JzU14AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugzp8MlnwiJrrQAPdpR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz2WKZcpiCPdDZ5_Ld4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz97yKBhSVU4FsK7Kp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzdS6jG4aqWc9EV03t4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
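The coding result shown above is recovered from the raw batch response by matching the comment's id against the JSON array and reading off the four dimensions. A minimal sketch of that lookup, assuming the response is a JSON array like the one shown (the helper name `codes_for` is hypothetical, not part of the pipeline):

```python
import json

# One record from the raw batch response above (trimmed to the coded comment).
raw = '''[
  {"id": "ytc_Ugxgog57TM0Kv23JzU14AaABAg",
   "responsibility": "unclear", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "resignation"}
]'''

# The four coding dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, failing loudly if
    the record is absent or the model dropped a dimension."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]  # KeyError if the model skipped this comment
    missing = [d for d in DIMENSIONS if d not in record]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return {d: record[d] for d in DIMENSIONS}

print(codes_for(raw, "ytc_Ugxgog57TM0Kv23JzU14AaABAg"))
# → {'responsibility': 'unclear', 'reasoning': 'consequentialist',
#    'policy': 'unclear', 'emotion': 'resignation'}
```

Validating for missing dimensions at lookup time is one way to catch malformed batch output before it reaches the stored coding result.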