Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Hell , he actually hit the robot with a solid right hand but got countered with …
ytc_UgzIhq9gm…
If you create an AI that`s more smarter than you.. super intelligent , we will a…
ytc_Ugz1FyT2a…
I not one of these people one of these anti AI people. This video kinda pulls me…
ytc_UgxobpsEj…
This guy may be right on his prediction but the timeline is full of shit, maybe …
ytc_UgxtF8G42…
It's sort of like saying in 1975 "These new X-Ray machines let us see inside peo…
rdc_fctmyb2
Elon musk needs to chill out with all this. He said ages ago that ai is dangerou…
ytc_Ugx0oYI4u…
The Ai is only running response time tests. If it cries wolf enough, no one will…
ytc_UgwQelHaU…
At least not too long ago, these facial recognition programs were not as good as…
ytc_Ugy_o_YtE…
Comment
This was fun to watch, but the real issue is the communication protocol. ChatGPT is translating its output into imperfect spoken English, which naturally relies on emotive expressions to convey meaning. It’ll be fascinating in the future when we have AI that is objectively superhuman—especially if it still insists it has no emotions. When someone pets a dog and says, 'You're a good boy,' they could express the sentiment with greater precision, but their goal is for the dog to understand and experience an emotional response. Is that lying?
youtube
AI Moral Status
2025-02-03T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxJClueUIuaUNpvJwV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzpSPcYEaLV69Z6oeV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw_2Oij6nI7VlFOBNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxuHg5mftULRx1NwFl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz93v05Cn9wofUKOTt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKxnz7VCwAlX9bKc94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWkqWEwBnmlfxL_Jl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyzPWI3ujB_00z8k894AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwdIP2gFhA4_HZuYiB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwndf9omGpPd_hBEZh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
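A response in this shape can be turned into per-comment codings with a few lines of Python. The sketch below is a minimal, hypothetical parser: the four dimension names and the `id` field come from the response above, but the allowed-value sets are only inferred from the values visible in this sample, not from an exhaustive codebook.

```python
import json

# Allowed values per coding dimension. Illustrative only: inferred from the
# sample response above, not from the full codebook.
DIMENSIONS = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"none", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"indifference", "approval", "outrage", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        # Reject values outside the expected code set so bad generations
        # surface immediately instead of polluting downstream counts.
        for dim, value in codes.items():
            if value not in DIMENSIONS[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        out[cid] = codes
    return out

# One record from the sample response above.
raw = ('[{"id":"ytc_Ugz93v05Cn9wofUKOTt4AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
codes = parse_codings(raw)
print(codes["ytc_Ugz93v05Cn9wofUKOTt4AaABAg"]["emotion"])  # approval
```

Falling back to `"unclear"` for a missing dimension mirrors how the sample itself uses that value when the model cannot commit to a code.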