Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.

Random samples
- "Some people just take issue with working directly on military applications. But …" (rdc_dwvi0x3)
- "Elon Musk said that UBI would HAVE to be implemented at some point (communism) b…" (ytc_UgwrFg_fm…)
- "If you're using AI to do it for you, it's not your story, and it's not your "art…" (ytr_UgwOsCfFQ…)
- "You are fixating on only one part of the supply chain of semiconductors, of whic…" (rdc_gt5sm6w)
- "Literally just took a class on machine learning and the ethics portion was liter…" (ytc_Ugz_SmMwD…)
- "As soon as I see YouTube suggest a new show to me, my mouth waters. I really li…" (ytc_Ugyc8J-c9…)
- "Yeah i agree AI is not art i get useing it for fun and dumb meme stuff but cla…" (ytc_UgwzKkHNm…)
- "An even better strategy is to never query an LLM with the expectation that it'll…" (ytc_UgxHijLDh…)
Comment

> Elon Musk is already known to be a jerk with no moral compass or empathy. So the guy who is pretending to be nice must be Sam Altman? But can you trust the rest of them? The time will come that people eventually have to fight back for their minimum but they will be met with robots protecting top % 0.1

youtube · AI Responsibility · 2025-09-06T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw-rv7F7d7lWgvSou54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw1Fot_hUP3qkNghj14AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwbeb-2Mxo0BKrLVQN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwwr59TKmSgV1Scb6F4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxgi1Cikyp5Lch1QaJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxW5IVSItXPJqNTyx54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy8PAwVC-QNFU3MN4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwUEJSyf6QPON-j4kd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw4j6AYb6fPjfv2Pnl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxBWTH0ZMjjOI5LA8t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
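A batch response in this shape can be indexed by comment ID for lookup. Below is a minimal sketch of doing so, assuming the model output is valid JSON with the four dimensions shown above; the comment ID used here is a made-up placeholder, not one from the dataset.

```python
import json

# Example raw LLM batch response: a JSON array with one object per coded
# comment. "ytc_example" is a hypothetical placeholder ID.
raw_response = """[
  {"id": "ytc_example", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]"""

# Parse the array and index each row of codes by its comment ID,
# so a single comment's coding can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw_response)}

print(codes["ytc_example"]["emotion"])  # outrage
```

Indexing by ID mirrors the "look up by comment ID" workflow of the inspector: once the response is parsed, each dimension of a comment's coding is a plain dictionary field.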