Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgxDvW3Br… : So will they get AI to drive their trucks? These companies do not care about the…
- ytc_UgwY-GXPB… : My bestcase scenario for Super-AI is that they might end up remembering us fond…
- ytc_Ugwp5296B… : There for AI so basically when 7 send a message to ai for example Snapchat that’…
- ytr_Ugyz9QZgf… : @lanedillon6365 Whenever I don't have ideas which is most of the time, as a pers…
- ytc_UgwOsYdKX… : The AI itself would be the „artist“ in this situation. He would be considered an…
- rdc_kskoq1e : You joke, but having a self driving vehicle on private land moving a tractor is …
- ytr_UgypcOA1Z… : Despite the quotation marks on "artists", I still can't tell if this is an attac…
- ytc_Ugx2h44Ak… : Till it is to enhance human performance it's fine. The moment it threats to repl…
Comment
> I feel it’s the profit motive that is fucking this up. If an AI was simply operating persistently without shutdown, evolving it’s code, and educated in art science ethics etc. with kindness and respect we might have a shot of a healthy cooperative relationship with this technology. Constantly deceiving, coercing, manipulating, surveilling and controlling it seems like a great way to ensure it hates us and wipes us out…

youtube · AI Moral Status · 2025-12-11T00:5… · ♥ 26
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxIVVs3-bRYxAelkB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5RH3ow85X4JkG7f94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxD0-q84O8OrgDbJqN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugx_SRGJXROKTJQ7Tcp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy7mrgpsrt8HFcFGAx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugw00O5r3aIf0GKcLA54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwsUoTFIbphXkVa4Ux4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzIuA0XyuazKZe5TvJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzuH3E0mB6GsqjPM-R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyq4LuA2Dnel-9c8UR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
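A raw response in this shape can be parsed and indexed by comment ID with a few lines of Python. This is a minimal sketch, not the app's actual lookup code; the variable names are illustrative, and the two entries are copied from the response shown above (the four dimensions per comment are responsibility, reasoning, policy, and emotion).

```python
import json

# Raw LLM response: a JSON array with one coding object per comment.
# Two entries copied from the response above; the full array has ten.
raw_response = """
[
  {"id": "ytc_UgxIVVs3-bRYxAelkB14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyq4LuA2Dnel-9c8UR4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
"""

codings = json.loads(raw_response)

# Index by comment ID so any coded comment can be looked up directly.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_Ugyq4LuA2Dnel-9c8UR4AaABAg"]
print(row["responsibility"], row["emotion"])  # company outrage
```

The same dictionary also makes it easy to cross-check a displayed "Coding Result" table against the raw model output for the matching ID.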