Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Literally started studying to become a pastry chef 6 months ago and I plan on st…" (ytr_UgwUScLUY…)
- "@GarethJason and my point is both sides are so toxic and whiny. We can't do any…" (ytr_UgxM1O2rR…)
- "Agreed, when ever I see post that shows accounting and auditors having a 90% + c…" (rdc_cz2x6o9)
- "does that mean that i should just not use chatgpt when starting to learn to code…" (ytc_UgxEOs0VF…)
- "honestly the best thing ai has done is make learning easier and cheaper for the …" (ytc_UgzjOHmFB…)
- "Look over the last few years of AI end times videos, they all have the same titl…" (ytc_UgwGB07L7…)
- "@howmathematicianscreatemat9226 I wasn’t contradicting—I was exposing the actua…" (ytr_UgwF5w4hs…)
- "This is highway autopilot. You are not on a highway. You are on an off ramp, an…" (ytc_UgwSQcKje…)
Comment
causing harm is ambiguous. you can't destroy mass for example, so for an AI there may not be a teachable concept of harm, but only a training to prevent known types of harm. in other words, all types of harm that occur to us. implying that the concept of harm will always have blind spots, which the AIs will stumble upon inevitably. all we can do is involve as many minds as possible in finding types of harm, to close as many blind spots as possible before the unthinkable happens.
youtube · AI Harm Incident · 2025-09-11T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyCkt-SCd6qNFOhHTF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwKgN92eaYQAFRG2yx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwGwP7xgpdyYv75ZeR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz7u3QuyyAYIip7Smt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxwCJHwbjlWazIugOh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgywhUGYyXhX-S9XNHZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyCn4T4Yt47jf9kcix4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwV7R6qmwxADx_wJz14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwj87WOXUcktqsnh9h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwdjSrDzmmWq3OJGb54AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
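The raw response above is a JSON array with one coding record per comment, each carrying the four dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and validated before display; the allowed category sets below are inferred from the visible records, not from a published codebook, and the function name is hypothetical:

```python
import json

# Controlled vocabularies inferred from the coded examples above.
# Assumption: these sets may be incomplete relative to the real codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def parse_llm_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a record lacks an id or a dimension, or uses a
    category value outside the assumed vocabulary.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record without id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        # Keep only the coding dimensions, keyed by comment ID for lookup.
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a record shaped like the batch above (hypothetical ID):
raw = ('[{"id":"ytc_abc","responsibility":"distributed","reasoning":"mixed",'
       '"policy":"industry_self","emotion":"resignation"}]')
coded = parse_llm_batch(raw)
print(coded["ytc_abc"]["policy"])  # industry_self
```

Indexing by comment ID mirrors the "Look up by comment ID" affordance above: once parsed, the coding for any inspected comment is a single dictionary lookup.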