Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "As long as AI is in the hands of liars and unethical people, it should be banned…" (ytc_Ugx3AVxCm…)
- "Hidden human knowledge is the only weapon . The tree of human knowledge AI Adam …" (ytc_Ugwjq13Ud…)
- "We need to put guide rails on AI. Moral ethical ontological guide rails. AI can …" (ytc_UgxxObidO…)
- "The reason we are doomed to lose to AI is because true AI learns from humans and…" (ytc_UgwU7U-zQ…)
- "We DON'T need an AI that can pass off as a Human!!!! It would be dangerous to Hu…" (ytc_UgguQjjn6…)
- "When companies aquire and use these machines they should have to pay a tax that…" (ytc_UgwL-fVVc…)
- "If these people want to consider \"AI-art\" to be real art, then why are they tryi…" (ytc_Ugzc_rmO-…)
- "These people don’t know what they are talking about. Why would we need to tell t…" (ytc_UgyCXnZLd…)
Comment
If AI is already capable of just lying to people and performing disruptive programs, at what point would ai really only seek to benefit itself as an intelligence? Given that we build these things to help humanity and advance knowledge, it seems to me that this sort of self-agency that already need humans to establish some lines is soon to be capable of just not doing those things for our benefit.
youtube · AI Moral Status · 2026-03-02T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxmIXlgp0BI-W43TUd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1PraamSXkb939xbZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxzCPlcnq3EUYfLFS94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugydu1FzfYm_oJDvYNJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy9zvDKvJ5dBqZVtS54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwj3hMKGn3B0CXziaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwmUi_jATYTq7RPkuh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxX6AwjIcq0gJepHMt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyxRBTSkyVrSGxm95F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
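A raw response like the one above can be parsed and validated before its codings are stored. Below is a minimal sketch: the allowed values per dimension are inferred only from the sample output on this page (the actual codebook may define more categories), and the `parse_codings` helper name is hypothetical, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension, inferred from the sample response above.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of codings) into a dict
    keyed by comment ID, rejecting malformed entries early."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"missing comment id in row: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value for {dim}: {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage: parse one entry from the raw response shown above.
raw = ('[{"id":"ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
coded = parse_codings(raw)
print(coded["ytc_UgxXYZQUVuWD0q6ZDRp4AaABAg"]["emotion"])  # fear
```

Failing fast on an unknown category is deliberate: it surfaces model drift (e.g. a new emotion label) at ingest time rather than silently polluting the coded dataset.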