Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID
Random samples — click one to inspect
The 1st ad I got on this vid was for an AI song generator...guess I'm a musician…
ytc_Ugx_6aV30…
Artists now have the option to opt out of Stable Diffusion V2 and beyond. More …
ytc_UgzJCNhvO…
To be absolutely honest, i think we already are too late.
Let's take a senario …
ytc_UgzLcBFLr…
The steam produced in a gas or oil fired power plant is also 100% water vapour. …
rdc_eue1c1b
Surprise Csarracenian drop! The algorithm brought this to me and I nearly got wh…
ytc_UgyndyXGW…
I wouldn't say this is solely due to AI. I would imagine this Trump economy and …
ytc_Ugw1I_2KL…
Great episode, interesting moment around 42:06 with teacher Jasmin answering why…
ytc_Ugx-Zf39c…
Ai image (de)generation is plagiarism, plain and simple
"Plagiarism is the use …
ytc_Ugzs77qfy…
Comment
Submission statement: if AI corporations knowingly release an AI model that can cause mass casualties and then it is used to cause mass casualties, should they be held accountable for that?
Is AI like any other technology or is it different and should be held to different standards?
Should AI be treated like Google docs or should it be treated like biological laboratories or nuclear facilities?
Biological laboratories can be used to create cures for diseases, but they can also be used to create diseases, and so we have special safety standards for laboratories.
But Google docs can also be used to facilitate creating a biological weapon.
However, it would seem insane to not have special safety standards for biological laboratories and it does not feel the same for Google docs. Why?
reddit
AI Responsibility
1724486695.0 (2024-08-24 UTC)
♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ljobpsh","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"},
{"id":"rdc_ljtknnw","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_ljpw00e","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"rdc_ljodj9v","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_ljqpp2i","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
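The lookup-by-ID view above can be reproduced directly from a raw response like this one. The sketch below is a minimal illustration, assuming the model returns a JSON array of per-comment codings as shown; `lookup_coding` is a hypothetical helper, not part of any tool shown here, and the sample data is abridged from the response above.

```python
import json

# Abridged copy of the "Raw LLM Response" JSON array shown above.
raw_response = """[
  {"id":"rdc_ljobpsh","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"},
  {"id":"rdc_ljtknnw","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding dict for one comment ID.

    Returns None when the ID is absent from the batch.
    """
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return row
    return None

# Example: retrieve the coding behind the "Coding Result" table above.
coding = lookup_coding(raw_response, "rdc_ljobpsh")
print(coding["responsibility"])  # company
```

In practice the raw string would come from the coding run's stored output rather than an inline literal; parsing it once and indexing by `id` would be preferable for repeated lookups.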