Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
"Here's an idea. Let's program an AI who is willing to harm humans. This will go…
ytc_Ugzae-Wg8…
AI will create a future full of unemployed people, increased poverty and crime (…
ytc_UgyjJRAzO…
Hey there! It seems like you picked up on an interesting perspective shared by t…
ytr_UgxBAFCgt…
If you read between the lines he’s saying SKILLED work I.e. trades especially bu…
ytc_UgzZYtZJE…
What's to stop the other side from taking control of this A.I. tank? Wouldn't th…
ytc_UgxHkZJmg…
LETS GIVE TO AI BETTER MORAL VALUES THAN WHAT HUMANS HAVE AND WE WILL BE SAVED!!…
ytc_UgyyEWyJu…
Hope it happens. I’m sick of the governments and Matrix making us slaves. Maybe …
ytc_UgzMFIMat…
Moral of the story we can only use AI face recognition for whites not blacks or …
ytc_UgzAKyWX0…
Comment
Before we thought we'd just apply the turing test and if they seemed sentient they might as well be. Now if it doesn't behave twice as good at being human in every human way it's not sentient at all? Human sentience is only the gold standard for sentience because it's what we've known. A LLM has different needs and directives and physical form, so of course its never going to sentient in human terms. But self aware, reasoning, emotional? We can barely define these things for a LLM so how do we disprove them? As you discussed, many of the behaviors that made the modern LLMs popular were emergent and suprising. On another note, language and other recursive inputs inside our brain's neural networks is how our sentience works too.
youtube
AI Moral Status
2025-07-09T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz4cDgDGIzpVM9kpHl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwVwDiqHfQGl9CeTNJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxecO5U9mbXs1l3Crd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyviT41TwxA_7OmUsR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxhmrlL6OALCwBObmF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxoxIegtjXwIIjuUrd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx0ZXinRTK3GYGDy_54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgydRURFay2QzDxESG94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz9LoPdYuhO7oZ2Z2V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8ej72RMm56P9JV_x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"}
]
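The lookup-by-ID view above is easy to reproduce from a raw response like this one: the model returns a JSON array of records, one per comment, each carrying the four coding dimensions shown in the "Coding Result" table (responsibility, reasoning, policy, emotion). A minimal Python sketch of parsing and indexing such a response — the IDs and helper name below are placeholders for illustration, not real comment IDs or tooling:

```python
import json

# Illustrative raw model output: same shape as the dump above,
# but with placeholder IDs rather than real comment IDs.
raw = '''[
  {"id": "ytc_AAA", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_BBB", "responsibility": "distributed", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]'''

# The four coding dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse a raw coding response and index records by comment ID."""
    coded = {}
    for rec in json.loads(raw_json):
        # Guard against malformed model output: a record needs an ID
        # plus all four coding dimensions.
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record: {rec!r}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = index_by_id(raw)
print(coded["ytc_AAA"]["emotion"])  # indifference
```

The validation step matters in practice: LLM output is not guaranteed to be well-formed JSON with every field present, so failing loudly on a malformed record is safer than silently coding it with missing dimensions.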