Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"AI2027" — nothing says credible like picking an arbitrary date for humanity's extinction. This is fearmongering dressed up as journalism. AI doesn't want to destroy you. It doesn't "want" anything. The real danger isn't AI going rogue — it's people who refuse to understand the technology while screaming about doomsday scenarios for clicks.
youtube · AI Governance · 2026-02-16T14:0…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwxzgZ9rpkiZIu0eKZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwlWK08dUkmEG3_h1h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzE4gMSi3_izC-B4wh4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy4KTJxW5snWqvoWnF4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwgcULUhXD2qnfswHF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzLgFrEauUQvfWjxht4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwUUvwZ3vtCrcZM1S54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwipi7CoSpWKEEPypB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwFvUiIf10vLg8-Bzd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyQPrdK1YGDZKXtOwt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
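The raw response is a JSON array of per-comment codings keyed by comment id, so mapping a displayed comment back to its coding is a simple lookup. A minimal sketch, assuming the model output is valid JSON in exactly this shape (two entries excerpted here for brevity; the variable and function names are illustrative, not part of the tool):

```python
import json

# Excerpt of a raw LLM response: a JSON array with one coding object per comment.
raw = """[
  {"id": "ytc_Ugy4KTJxW5snWqvoWnF4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzE4gMSi3_izC-B4wh4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]"""

def index_codings(raw_response: str) -> dict[str, dict]:
    """Parse the raw response and index the coding objects by comment id."""
    return {row["id"]: row for row in json.loads(raw_response)}

codings = index_codings(raw)

# Look up the coding for the comment shown above by its id.
coding = codings["ytc_Ugy4KTJxW5snWqvoWnF4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # -> user outrage
```

This mirrors how the result table above can be reconstructed from the raw output: the entry whose `id` matches the displayed comment carries its responsibility, reasoning, policy, and emotion codes.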