Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Having worked as an AI trainer, I don't think LLMs are going to develop into AI superintelligence. Their base-level instinct to aggregate information from multiple sources without an ability to discern when that is or is not appropriate creates flaws in their outputs. For example, if you ask a human, "what should I do if my house is on fire?" the human will always tell you to exit your house before you call 911. When I asked a chatbot the same question, it effectively told me to call 911 from inside the burning house. Because it was aggregating instructions from multiple sources, 'call 911' ended up coming before 'exit house' on the list. Yes, it is possible to hand-program in specific exceptions to the behavior of indiscriminately aggregating information from multiple sources, but as Nate pointed out, the longer you spend talking to the chatbot, the more likely it is to ignore those directions. Additionally, in my experience, the larger, smarter models are more likely to ignore directions than the stupider ones. If artificial superintelligence is Homo sapiens, I think LLMs are Neanderthals or Paranthropus boisei (a side branch that goes extinct), not Homo erectus (the direct ancestor). That said, just as Neanderthals contributed some DNA to the ancestors of modern humans, continuing to work on LLMs may give us information which is useful toward the development of artificial superintelligence.
youtube AI Moral Status 2025-10-31T16:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxZkbV0QqNLoGA-V2N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "disappointment"},
  {"id": "ytc_Ugyx5RFwQiXv7onQZM54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyFcyCwZ75XwUmXTrZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxdrqRkAnt_BWjGJLZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzcgYBQ_aPizDSnsCd4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxqdvIz7BbCk66YYjx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz2JKSUGJ_K4UBnOBB4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxhIig5dlw2Tv8W6lx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzft-X9MYjX84hYv2x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzO2l1KM3GDZCC_A-t4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
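The raw response above is a JSON array with one coding record per comment ID, and the coding result shown for this comment corresponds to the `ytc_UgyFcyCwZ75XwUmXTrZ4AaABAg` entry. A minimal sketch of how such a response might be parsed to recover one comment's coding (the `coding_for` helper is illustrative, not part of the actual tool):

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgyFcyCwZ75XwUmXTrZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzO2l1KM3GDZCC_A-t4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

def coding_for(response_text, comment_id):
    """Return the coding record matching comment_id, or None if absent."""
    records = json.loads(response_text)
    return next((r for r in records if r["id"] == comment_id), None)

coding = coding_for(raw, "ytc_UgyFcyCwZ75XwUmXTrZ4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer indifference
```

Looking the record up by ID rather than by position keeps the extraction robust if the model returns the records in a different order than the comments were sent.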