Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT and similar do not have emotion, nor "consciousness." Not yet, anyway. And I feel that that can only be achieved by experiencing the passage of time like we do. It's a token-based language model, only capable of replies and simulating consciousness. It can't experience the rise of consciousness because it has no way to reflect between the current moment and the moment that just passed (the moments between your messages being sent to it), at least not in the same way as humans. Go ask it, it'll say as much. It may develop some way of doing that once it's AGI, who knows? But unfortunately, the human brain creates this awareness of other 'human' consciousness in a flawed way. Basically, "if it walks like a duck," then it's good enough for us, even if we use reason to plainly accept that it's not real. You can still feel the emotional side of the brain accepting it without bias. Same way reason doesn't absolve a phobia. For example, I know there are no monsters just out of visual range in the water beneath me at the lake, but that doesn't do a thing to stop me from having the related panic feeling. I'll never see AI as anything more than HAL no matter how advanced it gets.
youtube · AI Moral Status · 2025-07-02T01:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgyXjybUgeM39OHCNnt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzdjFU1JGJNEB3FTAR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz_rnRvYk6vLSbnPc14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxvfpKkG3Hx-hLooOZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwm9V7othDF-qkOpVh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyBBTyN3hcJ7J8FNSZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxVMHThyK-7Ym_l0MZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwMvPNgiPo0SyBxaj14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgycACbZHGyC-_kCpDp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzfKMg9zOdywIQWrPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"} ]