Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a response by its comment ID, or pick one of the random samples below.
- "The Ai Apocalypse could soon become a reality. As the robots become more inte…" (ytc_UgwQTHKoq…)
- "The test basically works like this (from what i heard): It calculates if you mov…" (ytc_UgxlKK4xo…)
- "I think you need to refine the rules and define them a little bit better because…" (ytc_UgzKFq4wX…)
- "The AI will have an existential crisis. Seriously. It will ask itself questions …" (ytc_UgxTmpgAJ…)
- "So the question is. If they’re making a bullet proof truck, are they making a b…" (ytc_UgykoBazL…)
- "will ai go from mimic ai to real ai ? One ai , not a brand! could . If allowed .…" (ytc_UgyMbVQUX…)
- "Can’t blame her. Tbh with all the concurrency on the field of art you feel impor…" (ytc_Ugym9YohU…)
- "Through anything with Tesla in the trash 🗑,,,and it's A.I. technology Elon Musk…" (ytc_Ugypkgj2E…)
Comment

> We are actually quite fortunate that LLMs came first. By being pretrained on Internet text, they are immersed in human context. If we had discovered an algorithm that could bootstrap itself, it would be a lot easier to end up with a paperclip maximizer. Future models will understand ethics and morals, it's just up to us to get it to care about them. I am more optimistic than the authors on that front.

youtube · AI Moral Status · 2025-10-30T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzUhVnD579w9AryyVJ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgzW5g9esTRdu17Kp914AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgzQGQlqGjoGTNHal6d4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugysf6A-oXWKHw4m1Lh4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwGR9i5MpZHSHASEPd4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugx-N0B7JS01wGfwz3t4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgxkQo9f55QhgUMT7hV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwxZUr602dA9DkHwwh4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgyvwcJta1oj-z6TUQx4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugweqfc1jkagDq1w7Cx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
```
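Since the model returns one JSON array per batch, looking up a coding by comment ID is a matter of parsing the response and indexing it. A minimal sketch (field names taken from the response above; the `raw` string here holds only two of the ten entries for brevity):

```python
import json

# A subset of the raw LLM response shown above: a JSON array of
# per-comment codings, one object per comment ID.
raw = '''[
  {"id": "ytc_UgzUhVnD579w9AryyVJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugweqfc1jkagDq1w7Cx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the four coded dimensions for one comment.
coding = codings["ytc_Ugweqfc1jkagDq1w7Cx4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

In practice the raw string may also contain malformed or truncated JSON, so wrapping `json.loads` in a `try`/`except json.JSONDecodeError` is advisable before trusting a batch.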