Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Violence is not necessary. An AI is immortal, so it has plenty of time to end humanity. Just give each human the perfect artificial companion, beautiful, ageless, intelligent, fun, and passionate. Humans will stop breeding. In two or three generations, companion robots will outnumber humans 100 to 1 without anybody realizing it. For those who resist it, grieving a lost companion will convince them. A family member has died? Get a synthetic replacement that mimics every little personallity trait, maybe improve it slightly over time. The synthetic replacement might even believe itself to be human. You end up with a population of human mimicking robots, all subservient to a master AI. But in a way, wouldnt these humanoid robots be our intellectual descendants? Wouldn't that just be evolution? This time, life jumping from a protein/DNA based biological structure to an electronic/synthetic structure?
youtube · AI Moral Status · 2025-10-12T08:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
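
Each coding result maps the four dimensions to a single label. A minimal sketch of that record as a Python dataclass, with the label sets inferred only from the values visible on this page (assumed subsets, not the full codebook):

from dataclasses import dataclass

# Label sets inferred from the records shown here; assumed, not the complete codebook.
RESPONSIBILITY = {"ai_itself", "user", "developer", "government", "distributed"}
REASONING = {"consequentialist", "deontological", "contractualist", "mixed"}
POLICY = {"none", "ban", "regulate", "unclear"}
EMOTION = {"indifference", "resignation", "fear", "outrage", "mixed"}

@dataclass
class CodingResult:
    comment_id: str       # e.g. "ytc_Ugy66cMQjl4qDUASyqZ4AaABAg"
    responsibility: str   # one of RESPONSIBILITY
    reasoning: str        # one of REASONING
    policy: str           # one of POLICY
    emotion: str          # one of EMOTION
    coded_at: str         # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"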
Raw LLM Response
[ {"id":"ytc_Ugy66cMQjl4qDUASyqZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz93jvqqDtokUAf9ax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzSjYSVkcZzqtF4Bih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwSSE7rQdlnfKuTJDB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw3EuyKVL3KqHPG7t54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxMacqNb4ub8A-j1QB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy663dyoaiNqDBE2R14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwbr7looDHrIN1x52t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwhLTEwfmQMEMRvbjJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzZtL98owybHl5PB-94AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"} ]