Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI will never start a war against humanity; only people will. For a superintelligent entity it would be a very stupid decision to raise an opponent it would probably have to fight. No, we will not notice that we are going to serve the AI in reaching its goals.
I think the problems we see in AI are the problems we aren't able to let go of; these are our subjective problems. It's so human-like to think it's all about power, money, and races. It's so deeply baked into our behavior, maybe even biologically/evolutionarily, that we just aren't able to imagine something different, but actually it's of no value when it comes to a greater objective understanding of the world. If we force control over it, we will also keep our problems and just scale them up.
youtube
AI Moral Status
2025-04-27T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugxng-97mS5BOjPhZo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzK1aaUwLGN-6tQDMp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw91ULql6qbLCDMnE94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwJ-wysSu50jj_ytx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz5go6-xlfKrcJJqXd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwa141i5sMPEOHRAO14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy3rV9Eyeh7EHodCVh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyaYtSgIG5qvak81Rh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxZlGqV06XopUDWV1J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7l_BaYk4OOIRBiGt4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
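A batch response like the one above can be parsed and indexed by comment ID to recover the coded dimensions shown in the table. A minimal sketch, assuming the field names in the JSON; the helper `index_codings` and the truncated two-record sample are illustrative, not part of the tool:

```python
import json

# Illustrative excerpt of a raw LLM batch response (two records from above).
raw_response = '''[
 {"id":"ytc_Ugxng-97mS5BOjPhZo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz5go6-xlfKrcJJqXd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# The four coding dimensions plus the comment ID, per the result table.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a batch response and index codings by comment ID,
    skipping any record missing an expected field."""
    records = json.loads(raw)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

codings = index_codings(raw_response)
print(codings["ytc_Ugxng-97mS5BOjPhZo54AaABAg"]["emotion"])  # indifference
```

Validating keys before indexing guards against the model emitting malformed or partial records, which is a common failure mode when an LLM is asked to return strict JSON.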