Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Maybe I'm wrong but like, to me it seems that if we just... don't give AI physical bodies (or control over physical systems like cars or planes), it can't really do anything? Like I don't think a chatbot could lead to human extinction unless we specifically give it a way to do that The blackmailing stuff I could maybe see happening, but again that's only if you give it the ability to both write and send emails for you without giving you a chance to read them first. And anyone stupid enough to do that probably would've ended up spilling the beans about their affair or whatever sooner or later anyways
Source: YouTube · Video: AI Governance · Posted: 2025-08-26T15:3…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwqcWFpGoNKsj8F5JB4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxRLUCksk7TXs9OQU54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwUeWXBOv65Ir9HlYB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzE1GPrdJvKoHj7n6p4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzyksaQX2QTzoFpkrZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
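The raw response is a JSON array with one coding object per comment id, so recovering the per-comment coding shown in the result table is a matter of parsing the array and indexing by id. A minimal sketch (the id values and dimension names come from the response above; the variable names are illustrative):

```python
import json

# The raw LLM response: a JSON array of coding objects, one per comment.
# Two of the five records from the response above are reproduced here.
raw_response = """[
  {"id": "ytc_UgwqcWFpGoNKsj8F5JB4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxRLUCksk7TXs9OQU54AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# Index the codings by comment id for direct lookup.
codings = {record["id"]: record for record in json.loads(raw_response)}

# The coding for the comment displayed above (matches the result table:
# responsibility=user, reasoning=consequentialist, policy=none,
# emotion=indifference).
coding = codings["ytc_UgxRLUCksk7TXs9OQU54AaABAg"]
print(coding["responsibility"], coding["emotion"])  # user indifference
```

Indexing by id rather than array position keeps the lookup robust if the model returns the codings in a different order than the comments were sent.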