Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We're not going to need to get to AGI or ASI for the world to be in turmoil. We have enough advances in AI right now to have extreme problems for a long time.
youtube AI Governance 2025-09-06T14:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzqhRpUSaE6cwCQKwN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFxBuO7jIjsE0mhUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQWXDdJ1KP6c0ScvJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyeggh0Hc9KnqCR0PR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwvE_XF6DecBr5AVPB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyOMQhIGDSoTmQWkyN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy_0CXexIeUTZRcYqd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxN41xBzDJsaMKU3FN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxzd7PLSFQrwZgmEk94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx3DF8jOteWpRZm7At4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
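A minimal sketch of how a raw batch response like the one above could be parsed and matched back to individual comments. The allowed value sets are inferred from the labels visible in this response; the function name `index_codings` and the validation step are hypothetical, not part of the original pipeline.

```python
import json

# A short excerpt of the raw batch response shown above:
# one JSON record per coded comment.
raw_response = '''[
  {"id":"ytc_UgzqhRpUSaE6cwCQKwN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFxBuO7jIjsE0mhUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

# Codebook values inferred from the labels seen in this response (assumption:
# the real codebook may contain additional values not observed here).
ALLOWED = {
    "responsibility": {"unclear", "none", "ai_itself", "distributed", "company", "user"},
    "reasoning": {"consequentialist", "unclear", "virtue", "deontological", "mixed"},
    "policy": {"unclear", "none", "liability", "regulate"},
    "emotion": {"fear", "approval", "outrage", "mixed", "indifference"},
}

def index_codings(raw: str) -> dict:
    """Parse the batch response and index records by comment id,
    keeping only records whose values fall inside the codebook."""
    indexed = {}
    for record in json.loads(raw):
        if all(record.get(dim) in values for dim, values in ALLOWED.items()):
            indexed[record["id"]] = record
    return indexed

codings = index_codings(raw_response)
# Look up the coding for the comment displayed above by its id.
print(codings["ytc_UgzqhRpUSaE6cwCQKwN4AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes the lookup for any displayed comment O(1), and the validation step guards against the model emitting labels outside the codebook.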