Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- It's pretty clear given her introduction and her bio that this "journalist" is c… — ytc_UgzY40Qd7…
- @dummy-aley8 if you make an ai bro an empty room, a paper and a pencil they wil… — ytr_UgwClxM_t…
- AI is a collaborative tool. You can’t just dump a bunch on it then take what it … — ytc_UgwAI2yXY…
- Gosh, I sure do not someone I don’t know clone my voice in some AI tool to perfe… — ytc_UgxlZabOU…
- I weirdly think a lot of those redditors and AI bros making those angry comments… — ytc_UgyiHn40E…
- 48:39 This is the point I never hear "AI Experts" make. Everyone who understands… — ytc_Ugx0Rmn23…
- Those algorithms sounds literally like the SYBIL system in PSYCHO PASS anime, lo… — ytc_UgwbKExRq…
- ChatGPT Is Programed to be neutral rather then based in true factual evidence an… — ytc_Ugz5cigbd…
Comment
Well, my only point of contention here would be the final point, comparing the inaction of the incapable to the inaction of the unwilling. There’s a very subtle, yet very real distinction to be seen between these two concepts, despite their identical outcomes. Let’s instead substitute ChatGPT for a rock, and insert it into the trolley problem.
You set the rock in front of the lever and give it its only two options, which can be boiled down into “Action, or action through inaction”. The rock, of course, won’t act; not because it has some sort of conscious aversion towards making a decision, but because the capacity to choose was never there to begin with.
A human, from the moment they consciously comprehend the idea of the trolley problem, is already involved in the situation. The moment you realize the consequences of either decision, you’ve already been forced into said decision. The difference lies in conscious awareness and understanding of the situation at hand; otherwise, would we stipulate that all inanimate, unconscious objects are choosing not to pull the lever?
Platform: youtube · 2025-10-15T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx-epRa3w5FfCNs-Lh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz2PnJOa8dM8arkrVV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9ml2DzUggVkdJ-4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz241Cy9m3-fqmcn354AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyIFGk6tCItgBp7V4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugww43cHU9ErtCnvRZB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwxwFr__8Gur_VzsnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz0j1AgtucfAjX79gl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWSO0QwXrdr1u8iVx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFlOe4NQwrqBxfX4F4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
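A raw response like the one above can be checked before it is stored: parse the JSON and keep only records whose four dimensions use values from the codebook. The value sets below are a minimal sketch inferred from the samples shown on this page (the actual codebook may allow more values), and the `ytc_example` ID is a hypothetical placeholder.

```python
import json

# Assumption: value sets inferred from the records shown here;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"mixed", "indifference", "outrage", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose
    four coding dimensions all use in-codebook values."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Hypothetical example record:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"mixed","policy":"none","emotion":"mixed"}]')
print(len(validate_codings(raw)))  # 1
```

Records that fail validation could then be queued for re-coding rather than silently dropped, depending on how the pipeline handles malformed model output.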