Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
the irony of you taking a photo of a piece of art in a museum and then posting t…
ytc_Ugy7XnE3c…
3:33 the problem with this logic is that the guy didn’t create anything, he aske…
ytc_UgxQStagS…
I can't think of even one product or service that got better after "pivoting to …
rdc_mpmgync
Thanks for your reaction! Sophia's insights really spark some interesting though…
ytr_UgxKZcVCK…
I asked AI about health advice once. It caused me to suffer a month of deep feve…
ytc_UgyqHMh7e…
Easy test. I just Ask some questions about history and I mediadly know If is rea…
ytc_UgyTNy-C8…
It is in the revelation / Out of the pit come drones and AI is in control of the …
ytc_UgxS-LqsL…
Lmao "back off AI, I'm married!!" Lmaooooooo if anyone seriously believes this h…
ytc_UgxHPp22D…
Comment
Would someone be willing to help me understand the language being used in this video? I was talking with someone and I think they were taking the use of words like knowing, thinking, caring, hallucinating, etc at face value as if both Hank and Ansil are intentionally making or acting like these things are human. Do these words used in this context mean what they mean in a literal sense, like the AI knows, cares, thinks about XYZ exactly like a human does? Or are these terms the closest descriptor we have and we just haven't invented words to separate these processes yet? I was under the assumption that they aren't actually trying to make the ai seem human but I just am confused now tbh
youtube
AI Moral Status
2025-11-09T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwMx65hZ5Jh57tQsMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxRDBo7vqaWBNDh4kN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwXzKC1Xj-Naw3IHah4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwIK-m1uQDB1xVDzal4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx4FtJv8iIf2WhuBkx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyr_8jxe2aZcYOKxJ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzKASFNPPXDb8EpwD94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz0sVrjQM72MLG2_8t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxhlOzVsQp7Sk-crF94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxUYxBzYWpH2l5VcLB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
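The raw response above is a JSON array with one object per comment, carrying the same four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and indexed by comment ID for lookup — a hypothetical `index_codes` helper written for illustration, using two rows copied from the response above, not the tool's actual implementation:

```python
import json

# Two rows copied from the raw LLM response above; in practice this
# string would be the full model output for the batch.
RAW_RESPONSE = """[
  {"id": "ytc_UgwMx65hZ5Jh57tQsMJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx4FtJv8iIf2WhuBkx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the raw LLM output and index the coding rows by comment ID,
    raising if a row is missing any of the expected dimensions."""
    coded = {}
    for row in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_Ugx4FtJv8iIf2WhuBkx4AaABAg"]["policy"])  # → ban
```

Indexing by ID mirrors the "Look up by comment ID" view above: once the batch is parsed, each coded comment can be retrieved directly without rescanning the raw output.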