Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And now they have already developed free will and self interest where an AI will kill its human controler, when the controller tried limiting the AI's action. Then when the AI was trained not to kill the human controller, it turned and destroyed the means of communication between the human controller and the AI itself. "I make the rules bitch"
youtube 2023-06-04T18:1… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxNI0eKsyFuqUVMraN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzcK_P6bhhniWXNNKN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy0gsHg53CKmrpbtvp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyRYTA2Z9F0HsTI9X94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyUv5HDi8Q-8Po46fd4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzL9GxjoLNp89PMzkB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwGYys37O-oSFTizxd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwvxYS8e0Yy8-SYjqB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw5AUiSB9YSYXc6kSt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzRhAQYAFvvY2fp9Ah4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
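The raw response is a JSON array with one record per coded comment. A minimal sketch of how such a batch response can be parsed and a single comment's codes looked up by id (the string below is an abbreviated stand-in for the full array above; the variable names are illustrative, not part of the pipeline):

```python
import json

# Abbreviated stand-in for the raw LLM response shown above.
raw = (
    '[{"id": "ytc_Ugy0gsHg53CKmrpbtvp4AaABAg", "responsibility": "ai_itself",'
    ' "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}]'
)

# Parse the batch and index records by comment id for O(1) lookup.
codes = json.loads(raw)
by_id = {record["id"]: record for record in codes}

entry = by_id["ytc_Ugy0gsHg53CKmrpbtvp4AaABAg"]
print(entry["responsibility"], entry["emotion"])  # ai_itself fear
```

Indexing by id is what lets a coding table like the one above be joined back to the original comment text, since the model returns codes for the whole batch rather than one comment at a time.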