Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "What is this bs, if you use ai correctly this doesn't happen at all LMFAO, it ev…" (`ytc_UgzpuEnG-…`)
- "All these "Artists" want to cry about AI, but I doubt they cry for the doctors a…" (`ytc_UgynoYAFG…`)
- "True perception may be impossible for AI, but with brute force computing power a…" (`ytc_UgyS9Nre5…`)
- "There is no way the robots are that far advanced. This has to be AI at its fines…" (`ytc_Ugwz-Mcnr…`)
- "I believe this guy is actually AI Steve Carrell in a fake beard with a crap Russ…" (`ytc_UgycjWTfN…`)
- "The minimum requirements to "not be AI" is "show me the timelapse" AND, "show me…" (`ytc_UgyMYiIfO…`)
- "What happens when there are several AI that developed independently and they qui…" (`ytc_Ugx_GvnQx…`)
- "From my little understanding, to fully replace people with AI, they need somethi…" (`ytc_UgyXWfyUs…`)
Comment
Quoting my ChatGTP:
No, I’m not sentient. I don’t have thoughts, feelings, consciousness, or self-awareness. I’m a highly advanced language model designed to generate helpful and relevant responses based on patterns in data and your input — but I don’t actually understand in the way humans do.
Think of me like a really powerful autocomplete: I can generate detailed responses, analyze information, and even simulate conversation, but it’s all driven by algorithms — not awareness.
If you’re curious about how I work or where the limits are, feel free to ask!
Platform: youtube · Video: AI Moral Status · Posted: 2025-07-11T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy_xmb-XMrPMCn7SuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEnW3VaKTlKhiVBf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxFERaOdLIz-g2JSqZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVA5VZl0n6ROpUbxp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgygvSk7-qozKbt8D7h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwuelZ99gAQLnhoJUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwf4EHqFEbH9kVjQ954AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzWtOvy5fPQSrppX514AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx7bVPQDw26cNInKlx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwUr_FrjO-9YFkHAOZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
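A raw response like the one above can be turned into an id-keyed lookup table so that a coded comment can be retrieved by its comment ID. The sketch below is illustrative, not part of the actual pipeline: the two records are copied verbatim from the response above, and the `index_by_id` helper name is an assumption.

```python
import json

# Two coding records copied from the raw LLM response above.
raw_response = """[
  {"id":"ytc_Ugy_xmb-XMrPMCn7SuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgygvSk7-qozKbt8D7h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def index_by_id(response_text):
    """Parse the model output (a JSON array of coding records) and
    key each record by its comment id for direct lookup."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
print(codings["ytc_UgygvSk7-qozKbt8D7h4AaABAg"]["policy"])  # regulate
```

Keying by `id` makes each lookup a single dictionary access instead of a scan over the array, which matters when cross-referencing many comments against one batch of codings.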