Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We need to program all AI’s to want to protect us. If one turns out to be evil and tries to kill us. We can turn on another. If the other is evil we all ready are goners. If the other is benevolent we will not get wiped out.
Source: youtube · AI Moral Status · 2023-08-20T23:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw13fhsvckj0yR91O94AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzaioAdWwVpqN1h87l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzJXqZgIVnkvMkQE_d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy0Sz2-H7fANSCSpNR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxJS-uRAVesQffT0OZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwaYHm5r-PWdId7C214AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz7fJy7IL6E5m3bOzF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyYpNjBKApv8wLPMC94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwxA-asleNV6b5sLrl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwwpmmVD_7-SiK7dq14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
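As a quick sanity check, the coded dimensions shown above can be recovered from the raw response by parsing the JSON array and selecting the matching comment ID. This sketch assumes the coded comment corresponds to ytc_Ugz7fJy7IL6E5m3bOzF4AaABAg, the only entry whose four values match the coding-result table; the raw string below is an abbreviated two-entry copy for self-containment.

```python
import json

# Abbreviated copy of the raw LLM response (two of the ten entries shown above).
raw = '''[
  {"id": "ytc_UgzaioAdWwVpqN1h87l4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz7fJy7IL6E5m3bOzF4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

records = json.loads(raw)

# Select the record for the coded comment; the ID here is an assumption,
# inferred because it is the only entry matching all four coded dimensions.
coded = next(r for r in records if r["id"] == "ytc_Ugz7fJy7IL6E5m3bOzF4AaABAg")
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer consequentialist liability fear
```

If the raw response ever failed to parse or contained no matching ID, the coding result could not have been derived from it, which is exactly the mismatch this inspection view is meant to surface.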