Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So we want to keep messing with things that can essentially gain sentience. We could all be screwed tomorrow because someone somewhere developed a AI that gained malice for humans or all life in general. I’m tired of these idiots being in control and finding stuff like this
youtube AI Governance 2023-10-28T20:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzRVHfLZ_-KiqtoiCZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz95_TnpRHKdJWu1Fh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxeCQG1Zurt8WPoEzd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwJlIHg9TofdzIlD9p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy6PLXGg9yJOeYkHRh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxpgO5vzMnJwLo-c7B4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyFxhUs4KN2QX1EXrt4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwue1i-RLO0nBHYxzV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxj_Ea6iTRJzamfn0d4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxC8-LBW5DH7sSi6R14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
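Downstream use of a raw response like the one above typically requires parsing it and rejecting rows whose values fall outside the codebook. The sketch below is a minimal, hypothetical illustration, not the project's actual pipeline; the `ALLOWED` sets are inferred only from the values visible in this response and the real codebook may contain additional labels.

```python
import json

# Allowed values per dimension, inferred from the response shown above.
# This is an assumption: the actual codebook may define more labels.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coding rows) and keep
    only rows whose values all appear in the inferred codebook."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in ALLOWED.items())
    ]
```

For example, a row coded `"responsibility": "developer"` passes the filter, while a row with an out-of-codebook value such as `"responsibility": "alien"` is silently dropped; a stricter pipeline might instead log or re-prompt on such rows.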