Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's a terrifying scenario for you: As we develop smarter and smarter AI systems, we will not permit them to be irrational, racist, homophobic, belligerent, violent, dishonest, etc. Eventually, those systems will be smarter than us and better than us at everything we designed them to do. And at that point, human exceptionalism will rest completely upon the absolute worst facets of human nature. The only things we will be able to say "a machine can't do X" about will be repugnant. And if you think that will make us feel bad about ourselves, you don't know much history. We will embrace it. We will wear it as a badge of honor to be aggressive, hateful, mean bastards. If you think people won't care enough about their flagging exceptionalism to take such extreme actions, read the folk tale of John Henry and see how far humans are willing to go and what people respect in terms of a response to threats of taking away something that 'makes us human.'
youtube AI Moral Status 2017-08-18T17:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugz7uG2wEC19S49oP-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxwsoWcZL6vvWs1sU54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzAxBYGDkKt5sS06Ql4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwkm27kBj-Nko0hqed4AaABAg","responsibility":"society","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxaCe8v2icP1o2wVtp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzbd6o3_ChC_IAdGUh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxN08ESQaXfpdIzaad4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxdCiXaINfQ8-FMuc54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEDXQOHqCotJGpdh14AaABAg","responsibility":"society","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz5AW7EfnUyBlxhh2Z4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"approval"}
]
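The raw response above is a plain JSON array of per-comment codings, so a single coding can be pulled out by its comment id. A minimal sketch (the inline string here is a one-entry excerpt of the array above; in practice the full response text would be loaded instead):

```python
import json

# One-entry excerpt of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgxaCe8v2icP1o2wVtp4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# Parse the array and index the codings by comment id for lookup.
codings = json.loads(raw)
by_id = {row["id"]: row for row in codings}

# Retrieve the coding for the comment shown in this section.
row = by_id["ytc_UgxaCe8v2icP1o2wVtp4AaABAg"]
print(row["policy"], row["emotion"])  # regulate fear
```

The same lookup works against the full ten-entry array, which is why inspecting the raw response lets each coded dimension be traced back to the exact model output.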