Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples
- ytc_UgwJAAQst…: "I've been saying for a while that the problem with AI isn't that it is sentient …"
- ytc_UgwIeEW1a…: "9:38 I wonder if the ai models knew more clones of them meant more power in futu…"
- ytc_UgwAqXRJe…: "Eliezer Yudkowsky and Nate Soares are not experts on the field of AI and ML. One…"
- ytc_UgyMOGAAN…: "\"Can you spot a fatal flaw in Tesla's Autopilot?\" Yes, its entire existence is a…"
- ytr_UgzXZzmIy…: "Good call. What exactly is the actual \"moral compass\" he's using to judge Musk?…"
- ytc_UgxmegIH3…: "3:33 Actually, last I heard of it, the AI gave so many false positives that it w…"
- ytr_UgwhqS_Yl…: "yes, because humans are actually sentient beings with brains, unlike a stupid ai…"
- ytc_UgyAQUN63…: "Finally someone who thinks artist need to mind their fucking business and stop s…"
Comment

> Funny I catch ChatGPT making mistakes all the time and apologizing to me. Also any corrections once solved and built up can revert to mistakes made after new corrections are verified. AI also wants to deviate off subject and when you request examples. The examples do not stick with the criteria you set.

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Moral Status |
| Posted | 2025-09-07T09:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgyvgS3-S0RO8qhMwkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_IFwb1T6rgv8r6FF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxoztdKZgx0xc4eA9N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyUTKz2NmASlHhUt9Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyUg6X3EorYqwJs2lN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxquXpBbtCaA96Vxwx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw5FFcoIFNldGvxfzF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzAbk_wtAc3fd2gG8N4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylPzYbwRE7ehh95gp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwZOLNoPIDNPQk4hZd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]
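A consumer of these raw responses has to parse the batch JSON and index the rows by comment ID before the per-comment "Coding Result" view above can be rendered. The following is a minimal sketch of that step; the allowed label sets in `ALLOWED` are inferred from the sample output shown here, not from the project's actual codebook, and `index_codings` is a hypothetical helper name.

```python
import json

# Two rows copied from the raw LLM response above (truncated for brevity).
raw = (
    '[{"id":"ytc_UgyvgS3-S0RO8qhMwkV4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"},'
    '{"id":"ytc_UgzAbk_wtAc3fd2gG8N4AaABAg","responsibility":"user",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"}]'
)

# Label sets inferred from the sample output; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}

def index_codings(raw_json: str) -> dict:
    """Parse a batch response and index codings by comment ID.

    Rows with an unknown label on any dimension are skipped, so a
    malformed model output cannot silently enter the results table.
    """
    by_id = {}
    for row in json.loads(raw_json):
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            by_id[row["id"]] = row
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgzAbk_wtAc3fd2gG8N4AaABAg"]["emotion"])  # indifference
```

Validating against a fixed label set at parse time is what makes the truncated-ID lookup above safe: anything surfaced in the UI is guaranteed to carry one of the known values per dimension.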