Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click an entry to inspect its full record)
Comment
Does emotionality have to be a part of consciousness in order to create a hyper intelligent AI --AND assuming goals and emotionality are not the same, even if they are an analogy of "drive or drives". It seems to me that sentiency and consciousness without emotions, not just drives, are two different things. Can we not have a super intelligent "machine" that does not have emotions and, therefore, is not sentinent, but just a very intelligent automaton? Let's hope so, otherwise sentiency will pose a moral weight or force or burden to regard AI, as some point, as having personhood, right?
Source: youtube
Video: AI Moral Status
Posted: 2026-03-01T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |

Coded at: 2026-04-27T06:24:53.388235
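The dimension values shown here are parsed from one record in the raw batch response below. A minimal validation sketch for such a record, assuming the format shown below; note the category sets are inferred only from the sample responses on this page, not from the full codebook:

```python
# Category sets observed in the sample responses on this page;
# the real codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"unclear"},
    "emotion": {"approval", "mixed", "indifference", "fear"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty if it looks valid."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems
```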
Raw LLM Response
[{"id":"ytc_UgyTZLxAjX1JOqSFKDN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx7giDTBzm2AYgniCp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzxFPtalflIaRL05154AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyuJMienFlrXjaU8nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyM1ZcmRyj_5pdZ1wN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgypIAMe5PrSMvl72uR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugys-eq6oFVODIyHltB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy86lMQFFGzPrqH6FN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw-RBqYdE27O3S0q1B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwMVjuIsaEumzNd4s14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]