Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "An AI channel making AI-generated videos about AI. My brain is truly exploding 🤯…" (ytc_UgwxP4JSE…)
- "I’d argue Ai should replace a lot of offshored tech jobs because AI in the hands…" (ytc_Ugz5FoSh6…)
- "This convergence with AI risk and dominance will occur very soon. This will forc…" (ytc_UgwqQ8GcR…)
- "i feel like people in the comments have a pretty bad case of tunnel vision they …" (ytc_Ugxhi5_M3…)
- "Great vid, and this made me feel better about trying to draw art again since peo…" (ytc_Ugw7Qoovq…)
- "In one sense you shouldn't put a bibliography at the end because if your paper b…" (ytr_UgzeoWJXj…)
- "The greed and worthless pride of you AI makers is going to be the death of us. I…" (ytc_Ugx7q8YlS…)
- "@nickwait5260fight capitalism is easier than fighting AI because AI will be here…" (ytr_Ugww1BR22…)
Comment
I used Chat GPT to help fix a problem with my computer.
But the thing is, whenever it pointed me to something I was unfamiliar with, I would then look up that specific thing instead of making assumptions.
I feel like you should be doing that triply so when it comes to decisions about your health, even human doctors aren't 100% reliable.
Seems that the possibility of human error leads people to overcompensate and assume that every doctor is on "big pharma's" payroll and that somehow ALL of the medical knowledge we've developed is part of some kind of scam. Which is quite frankly absurd.
I'm quite curious what went through his head.
If Bromine was indistinguishable enough from Chlorine as to replace it in his diet, then why on Earth wouldn't it present the same kinds of problems that he believed Chlorine did?
Like, the safer it was, the less beneficial it'd actually be until it was pointless.
Absolutely not an AI problem, it's a human problem.
youtube · AI Harm Incident · 2025-11-29T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxgE3hMdeTzPAifF3J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFj2Rjvv9TZKm1kw94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwHPDfSgKUg7enTB54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzR_T0nrpEEV4hP-VN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTAH0yY5yRdC5Y7St4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyu2hCb6VzV7stnjvN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLHdQ4Ke9UfWMgrzt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzlITTHvYEULc8CV_14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw6tQOouE98G6Rcs-R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRIVpUVIlsDiSglap4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}]
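The "look up by comment ID" workflow above can be sketched in a few lines of Python: parse the raw model response (a JSON array of per-comment codings) and index it by ID. This is a minimal illustration, not the tool's actual implementation; the function name `index_codings` is ours, and the sample data below reuses two codings copied from the raw response above.

```python
import json

# Two codings copied verbatim from the raw LLM response shown above.
RAW_RESPONSE = """[
  {"id": "ytc_UgxgE3hMdeTzPAifF3J4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzwHPDfSgKUg7enTB54AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw coding response and map each comment ID to its coding."""
    codings = json.loads(raw)
    return {c["id"]: c for c in codings}

by_id = index_codings(RAW_RESPONSE)
print(by_id["ytc_UgzwHPDfSgKUg7enTB54AaABAg"]["responsibility"])  # user
```

Indexing once up front makes each subsequent ID lookup O(1), which matters when cross-referencing thousands of coded comments against their raw model output.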