Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.
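Under the hood, this lookup is just an index from comment ID to the parsed record. A minimal sketch of that, assuming the raw responses are stored one JSON array per line; the file name `raw_llm_responses.jsonl` and the JSONL layout are assumptions, not something this page confirms:

```python
import json

def load_codings(path: str) -> dict:
    """Index every coded comment by its id.

    Assumes each line of the file holds one raw LLM response: a JSON
    array of records shaped like the one under "Raw LLM Response" below.
    """
    index = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                for record in json.loads(line):
                    index[record["id"]] = record
    return index

codings = load_codings("raw_llm_responses.jsonl")  # hypothetical file name
print(codings.get("ytc_UgwGqWhiq1NiepsBQEV4AaABAg"))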
Random samples:

- "how did you die" "Well, it's pretty simple, i made a robot think I'm a box"… (ytc_UgyVKmprj…)
- @Ok_waffle it's an app where U type with ai versions of fictional characters. Ha… (ytr_Ugy_ILk_W…)
- I've been interacting with AI since 2 days after chatgpt came out and this is ha… (ytc_UgzLloUeY…)
- This is really disturbing and disgusting. Thank you for bringing awareness... Ta… (ytc_UgxhxPd1K…)
- Saying chatgpt isn't smart it is something much weirder is kinda a semantics ga… (ytc_Ugys6Qv-X…)
- Does anyone here also think that this video was created by an Ai. Including the … (ytc_UgzOz3jLX…)
- I asked for it to write a thank you note as if it was written by Lady Whistledow… (ytc_Ugynhi6Zu…)
- If your evidence that AI infringed on your copy right is to tell it to infringe … (ytc_Ugyn0yqvE…)
Comment
In the trolly problem, if ChatGPT does not pull the lever because it is programmed to not get involved, or to make a choice, then doesn't that mean it would break potential laws? Can someone be held accountable for involuntary man slaughter if they choose to not get involved? Or, say a person in the military decides not to kill a primary target that lead to thousands of deaths, that they wouldn't be court marshaled for that action? Shouldn't ChatGPT, or all AI / LLMs or the variations also be accountable for decision making, since humans are held to those laws or standards? The flip side of this thought, is what if we give the option to make that choice. What are the guidelines or thresholds that push it to do so, or would it be obligated to always form a decision? Interesting video.....
Source: youtube, posted 2025-11-19T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | unclear |

Coded at: 2026-04-26T23:09:12.988011
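The table above and the raw response below together imply a closed value set per coding dimension. A hedged validator sketch built only from the values visible on this page; the real codebook is almost certainly larger, so treat these sets as a lower bound:

```python
# Value sets per dimension, inferred solely from the records shown on
# this page; the actual codebook may contain more labels.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "government",
                       "user", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation", "unclear"},
}

def invalid_dimensions(record: dict) -> list:
    """Return the dimensions whose value is missing or falls outside the
    observed sets, e.g. to flag malformed LLM output before storing it."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]
```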
Raw LLM Response
```json
[
{"id":"ytc_UgyrzMCzqOmxxK7MBRd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyUa1oZR0WSNu4l62l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyY1turGHZMkFLhoC14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDMdwtZTUeFcKcWdJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYPCO8MfqDNfWTESl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugwp42dPIxUGZsFuAyJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGqWhiq1NiepsBQEV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"unclear"},
{"id":"ytc_UgxCEs743d79ZhpR9RF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugykt57IWjb88fr7z354AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAshD73prvw5s4c1h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
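Because the raw response is a plain JSON array, cross-checking it against the rendered coding result takes only a few lines. A sketch, assuming the array above is saved as `response.json` (a hypothetical file name); the target ID is taken from the one record in the array whose values match the Coding Result table above:

```python
import json

with open("response.json", encoding="utf-8") as fh:
    records = json.load(fh)

# The record whose values match the Coding Result table above.
target = "ytc_UgwGqWhiq1NiepsBQEV4AaABAg"
coding = next(r for r in records if r["id"] == target)

assert coding["responsibility"] == "distributed"
assert coding["reasoning"] == "contractualist"
assert coding["policy"] == "liability"
assert coding["emotion"] == "unclear"
```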