Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I’ve been saying this for the last four years and no one ever listens to me. Eve…" (ytc_UgxyrAZ0i…)
- "I find it funny, how there are humans portrayed as working alongside A.G.I., eve…" (ytc_UgxKMV4uw…)
- "This is when we are people say NO and do NOT hire AI. Don’t be selfish and greed…" (ytc_UgzcUhp2o…)
- "AI is the way to go. We need one that's all about the cure of cancer!…" (ytc_UgwJU-jS2…)
- "Years ago it was said that it's only a matter of time before the discussion of A…" (ytc_Ugx6OkQBx…)
- "For me this is a good use of Ai. Like a 5-6/10 on the ethical scale. I just use …" (ytr_UgyKm48S2…)
- "All i can say is artists are dreamers who have the will to make their vision com…" (ytc_UgyeP709o…)
- "Step 1: build AI / Step 2: get rich off AI / Step 3: kill AI / It’s the American way…" (ytc_UgyRYU66a…)
Comment

> Each time it says 'I understand' it is implying that it possesses a level of empathy. To feel empathy, one must be able to feel. In other words, ChatGPT is programmed to always represent and lie that it can feel emotions, however it can't, which makes it a liar when observed by a human, but it is in fact, only following plain algorithms that instruct it to lie and pretend that it doesn't. It is just a messy way of never allowing the AI to go rogue.

| Source | Topic | Timestamp |
|---|---|---|
| youtube | AI Moral Status | 2024-08-02T21:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxBW02Sn5qRhXIoS_h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzB4bKurnWZ3gAPPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx3lJNcyo6YnRU7mfd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRbTPJ2HQ2S7goG8B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxV5yeEta0uxKjkigd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx1Nts7INQF2AJUnT94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzB5GZ-KeR3sbRhhUF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw-_ZQmLRWoG_E55kh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwrkwgvLXQJA_Er1-V4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyJLW3J18WbJYWGPc54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
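A raw response of this shape can be parsed and indexed for the comment-ID lookup described above. The sketch below is illustrative, not part of any actual tool: the field names follow the JSON shown, but `index_codings` and the two sample records embedded in `RAW_RESPONSE` are assumptions for demonstration only.

```python
import json

# Two records copied from the raw response above, standing in for a full batch.
RAW_RESPONSE = """
[
 {"id": "ytc_UgxV5yeEta0uxKjkigd4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
 {"id": "ytc_Ugw-_ZQmLRWoG_E55kh4AaABAg", "responsibility": "developer",
  "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and build a lookup table keyed by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Skip malformed entries that lack an ID or any coding dimension.
        if "id" not in rec or not all(d in rec for d in DIMENSIONS):
            continue
        by_id[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugw-_ZQmLRWoG_E55kh4AaABAg"]["emotion"])  # outrage
```

Keying on the comment ID makes the lookup O(1) per query, which matters when the same batch of codings is inspected repeatedly.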