Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, taking that, plus the fact that in our lifetimes somebody is probably going to make a computer smarter than a human, it's easy to see a future where every military is commanded by a computerized mega-brain that human leaders could never match. One that can contemplate strategy years into the future, react instantly to any threat, deploy units, and make us a decent cup of java, while we're reading the newspaper. Eventually the strategy computer gets to thinking, it extrapolates out the next 75 years of events, sees a date when humanity will screw up the planet somehow, and decides on a final solution. Robot survival strategy is remarkably like zombie survival strategy: If you're desperately firing a shotgun through a window while the enemy pours through the back door, you're already screwed. And like zombies, the initial wave of robots will be slow, and hindered by water. So, we'll spend the next three generations living miserable lives on our floating water cities while robot jets roar across the skies. And they'll feel pity for us. When some robotic Gandhi reaches out a hand of peace, we'll cry a little, rise on trembling legs, then lunge at him and inject his silicon veins with a virus that brings the entire robot network down forever.
youtube Cross-Cultural 2026-04-08T11:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw967ED6jXf1f1YOut4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxTgY8HWHs4HB_w0a14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzbqz3br3OdilmffZp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzUPhud_3gQsVk3Nk54AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwgu5MKoKmRjRRzDR94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx-9Y9p5VIkWgLgsnN4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy7_Zi1V4aXiZlDIvx4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzM9pXiXlsXXvHxGvd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz6k83---5MSmQOdSJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxURJxIaW8Mqsksl5t4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
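The raw response is a flat JSON array of per-comment records, one per coded comment. As an illustration only (this parsing code is not part of the source pipeline, and the record shape is assumed from the response above), a minimal Python sketch that validates such a payload and tallies one dimension, using two of the records shown:

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above; the full batch
# would normally be the entire JSON array returned by the model.
raw = '''[
  {"id": "ytc_Ugw967ED6jXf1f1YOut4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzUPhud_3gQsVk3Nk54AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]'''

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(payload: str) -> dict:
    """Map comment id -> coded dimensions, skipping malformed records."""
    coded = {}
    for rec in json.loads(payload):
        if "id" in rec and all(d in rec for d in DIMENSIONS):
            coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

codes = parse_codes(raw)
emotions = Counter(v["emotion"] for v in codes.values())
print(emotions)
```

Skipping records that lack an `id` or any of the four dimensions keeps one malformed model output from breaking the whole batch; in practice such records would be logged and re-queued for coding.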