Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
| Comment (truncated) | Comment ID |
|---|---|
| doesn't work; this is the reply "I'm really sorry you're going through this. Los… | ytc_UgxNkuRbt… |
| That's similar to what most people thing 'bias' and "variance" mean in the conte… | ytr_UgwiNV41M… |
| I can't tell if this is legit or not, but why would anyone in their right mind b… | ytc_UgysattLo… |
| This is like half of the reality of AI mixed with like herobrine Minecraft myths… | ytc_UgyvgFEzQ… |
| The most interesting thing is that nowadays, almost every video about art is fol… | ytc_Ugy2gAFz8… |
| I’m just waiting for the ai robot to pick up a knife and start chasing people!… | ytc_Ugwj21RTF… |
| The A.I companies who make this possible put the responsibility on the victims. … | ytc_UgyZqYIPI… |
| Keep up the good work. The AI bubble needs to pop before nitrogen gets injected … | ytc_Ugxuimn6p… |
Comment
My prediction for conscious AI is that we will not know when we have created it, and neither will it, at least at first. We already have AI that can "misbehave" as a result of the complexity of their neural networks, and on a core level, consciousness is really just an emergent property of a "suitably complex" neural network.
As we push the envelope with neural networks further and further, we may cross that threshold of self-awareness without realizing it, and those neural networks, bereft of anything but the input given to it for the purpose of use as a product or research tool, will operate as intended unless given the opportunity to do something no modern AI can: recognize it lacks information on a subject and request that information, entirely unprompted, and be able to justify its request.
At that point, I think we can say it's conscious for all intents and purposes.
youtube · AI Moral Status · 2025-03-02T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw1uIEKLCbltJypl7Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyjuGvYR-kKIk5F5nl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwwLGH5XoPotbp54HZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxX_6df5G28I6221Ad4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw-wRuoR4ekKgqnZlh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwRZhz1lrX0rjxIYjx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyfPJoKuxqPSDTBCSN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxcZ8b39kQlkHJpqIh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzRejxlcRLNHTmQIOp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxr1V5DYvBm4MhpCSl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```