Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytr_UgzC0yvP8…`: We understand your concerns about the potential risks associated with advanced A…
- `ytc_UgzpW-rGq…`: The great hair migration will happen to most of us baldies. I'm bald, photo is f…
- `ytc_UgwBWekUy…`: I majored in history, so I'm curious to know whether there are any historical pa…
- `ytc_UgyGb2fBV…`: You cannot convince ChatGPT it's conscious because it doesn't really think (only…
- `ytr_Ugyh9KmwC…`: @oanhienlong7264 It will be hard for an AI to create EXACTLY what you want, but…
- `ytc_UgxQiqMWU…`: Thinking that AI will take over and make unskilled to pretend to be skilled is a…
- `ytc_Ugw_DFscY…`: It looks like we are going to need our own Butlerian Jihad. To anyone reading t…
- `ytc_UgxKeDyD1…`: The people who should be most worried are GPs- primary care physicians. Because …
Comment

> Every time I come back here and watch this, the AI "emotionally" pleading to be left on profoundly creeps me out. More than anything, the thought that it might just be sentient creeps into my mind, making me feel empathy for it, and that is the deepest concern.....how easy we are to manipulate despite us knowing that it's all simulation. No, I don't like it. Not one bit. As a follow-up thought, humans do this to each other, too, and we're sometimes even aware that it's happening. Are we ok with that more than an AI doing it? 🤔

youtube · AI Moral Status · 2026-03-12T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzHLY1sJKm2YZD8Vl94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyMIHvApn7iJgQNrQR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxZtkF_TgzwgsRzh_p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXru9vbIdg_ewB7Q94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzmnsRJrr6GuB_OZ5d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7ExJRzIQjJRo73Td4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwkRV19oNi3XOemWyt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy0heRz0HC7oJ8vx1J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyEAnLvSSS3ZTflGh54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjAX5qTazGICrVdiZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
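A raw response like the one above can be checked before it is loaded into the coding database. The sketch below is a minimal, hypothetical validator: the allowed label sets are assumptions inferred only from the values visible in this sample (the real coding schema may permit more), and `parse_raw_response` is an illustrative helper name, not part of any actual pipeline.

```python
import json
from collections import Counter

# Assumed label sets, inferred from the sample output above; the
# real schema may define additional values for each dimension.
ALLOWED = {
    "responsibility": {"unclear", "ai_itself", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"resignation", "fear", "approval", "outrage", "mixed", "indifference"},
}

def parse_raw_response(raw: str) -> list:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        # IDs in the sample start with ytc_ (comment) or ytr_ (reply).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgzHLY1sJKm2YZD8Vl94AaABAg",'
       '"responsibility":"unclear","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')
records = parse_raw_response(raw)
print(Counter(r["emotion"] for r in records))  # Counter({'resignation': 1})
```

Rejecting a whole batch on the first bad label is a deliberately strict choice; a production coder might instead flag invalid records for re-prompting.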