Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugw9jkbP4… — "If you think of a robot surgen as being 10 X faster, better, and cheaper, and if…"
- ytr_UgwbER0RF… — "@NotTheEnd7766AI research is killing people RIGHT NOW. It's dirtying their ai…"
- ytc_UgzYIMh-Q… — "I dont like chat gpt5, i only wanna use chatgpt 4o,but openai keep force me use …"
- ytc_UgwcF3SYi… — "human accident, not robotic accident. robot have one job. the human is not at th…"
- ytc_UgyiXcPfu… — "Impressive ! Hope AI get cure death in next 20 years! I would love to be immorta…"
- ytc_UgzNAFKr0… — "Even if disney doesn’t actively stand for works beside their own, they will have…"
- ytc_UgzhzdrPV… — "That's absolutely horrible I absolutely hate AI as someone who enjoys drawing I …"
- ytc_UgxFao4gn… — "Were we more wealthy when we required 90% of the popultion to farm just so there…"
Comment
Here's a really good counter-reason why you shouldn't "talk to it as though it's an improvisational actor".
Anthropomorphising an LLM/GPT-based tool lures you into the trap of thinking of the tool as an intelligent human. That helps reinforce biases that you need to be very careful to avoid.
That's NOT to say you can't request output in a particular style or format - it's just a warning that saying "you are" or saying "please" and "thank you", reinforces a thinking model that's extremely problematic and increases your personal risk.
Source: youtube | Video: AI Moral Status | Posted: 2026-03-02T03:2… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzlEoCCbSw9BM4S41t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxQyWO3AANS9-c0QyR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy0Xyw6eKZl-WaKSuB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzm_xiRJhDVpJdDfXd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1g17xzQWdJmlBkQF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzEAO7AvVWfXEOrKzd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxc_5ij8sq4a9nBzhF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugznn_Aj91zP3BVBPYN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxziOilTr4wulYxhy14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxtvY-p0siQAUzpAyZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
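The raw response is a JSON array with one record per comment ID, each coded on the four dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and sanity-checked — note the allowed category values are inferred from the output visible above, not from an official codebook:

```python
import json

# Allowed values per dimension, inferred from the sample output above;
# the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "mixed", "indifference", "approval"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}: {rec.get(dim)!r}")
    return records

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(len(validate(raw)))  # 1
```

Rejecting out-of-schema values (rather than silently keeping them) makes drift in the model's output visible immediately, which matters when the codes feed downstream aggregate statistics.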