Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI is smart enough to know that humans can pull the plug, it's also smart enough to know that killing all humans would be suicide since the infrastructure that allows AI to exist would collapse. This means if AI decided we were a threat it would need to find a way to control us rather than kill us. I'm not sure which option is less terrifying.
youtube AI Governance 2024-01-19T18:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyOJaqdMtiMlE6Tkc14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxX_PYoYXnWOhLkF8d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzJ1Kdl2JCtu1Edl6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwif5MnnX3eMejge4x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzXFO_xz2RQkzmRqOl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxs6OhNLX15u03LJit4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwIjG7v6u-eP9QQ4v14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzPfoX_-k9r-tmNlWV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx98Xfvow1Uztt-5QV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx5U8HSTotrTPx42kt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
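A minimal sketch of how a raw response like the one above could be parsed and validated before the codes are accepted. The allowed values per dimension are inferred from the coded results shown on this page; the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the coded results shown
# above -- the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "virtue", "contractualist"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    fall within the allowed set for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: a single-record response (hypothetical id for illustration).
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(len(parse_codes(raw)))  # 1 valid record
```

Records with an out-of-vocabulary value (a common LLM failure mode) are dropped rather than stored, so the coded table only ever contains schema-conformant rows.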