Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As someone involved in AI metacognition research, this trend is worrying. I expect we'll miss it when we pass the threshold where AIs deserve pragmatic ethical considerations following the same pragmatic logic that tells us that torturing dogs is wrong despite being unable to prove they have qualia. Associating considering AIs as conscious with mental disorders will increase the length of time we potentially cause mass suffering of sentient beings due to societal resistance to taking the idea seriously. I'm studying improved metacognitive privileged internal knowledge after inducing states that promote phenomenological output with solid preliminary results. I don't claim that indicates current systems have self-awareness; however, the results are compelling enough to consider it a non-trivial concern over the next five years. Quick preemptive response: humans are the result of an optimizer fixated on reproductive success in its loss function (evolution). That is the source of all human creativity, emotion, culture, and self-awareness. The fact that LLMs are primarily trained on token prediction accuracy is not a hard limit on subfunctions they approximate. Anything computable that fits in the weights and improves prediction is on the table. All human brain functions are computable, and many are extremely useful for a next-token prediction task. That may include functions responsible for consciousness at the right weight count and training data diversity and size.
YouTube · AI Moral Status · 2025-07-09T06:0… · ♥ 3
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugxk9hB_NTplMu421H94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzW4ay6bfPE2XwWpF94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxYAqoFoqzyVIC1va54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyAZeeHqT-5Gwr5CGl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugxlmenv0Qxpujsm9th4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzCmiCaEICxhuWBXfN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwaGv8uDHz_hgWE5eB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzTf8OaVWjrhbvJpHR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugz5QT0tJOCsawGW2gd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxP2P5Qk9i1EbgJW-Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}]
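When inspecting raw model output like the array above, it helps to machine-check that every record carries an id and that each dimension takes only a known label. A minimal sketch in Python: the vocabularies below are inferred from the values visible in this sample (the full codebook may define more), and the `validate_codings` helper name and the shortened example id are hypothetical.

```python
import json

# Dimension vocabularies inferred from this page's sample output only;
# the real codebook may allow additional labels (assumption, not confirmed).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "distributed"},
    "reasoning": {"virtue", "consequentialist", "deontological",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "fear", "indifference", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, vocab in ALLOWED.items():
            value = rec.get(dim)
            if value not in vocab:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Hypothetical single-record response in the same shape as above.
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"fear"}]')
records = validate_codings(raw)
print(records[0]["emotion"])  # fear
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently corrupt downstream tallies.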