Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "If you’re genuinely concerned about where AI is heading but don’t know where to …" (ytc_UgyzZ_Hkm…)
- "Doesn't that say something if your teacher can't tell the difference between you…" (ytr_UgxDYadPA…)
- "Perhaps when AI becomes sentient they’ll realise that the top 1% run the world, …" (ytr_UgxvSaXDO…)
- "Pretty scary conversation. My buddy says in five years there will be no big chan…" (ytc_UgyDLawfS…)
- "No way!!! I hate dealing with that AI bs. We now call it AL too btw. So tell AL …" (ytc_Ugx0ZW66T…)
- "Whenever my classmate sent a selfie or a picture especially me / My classmate rema…" (ytc_UgyyESRfD…)
- "She's not poisoning AI prompts, she's poisoning the training sets. Ai needs real…" (ytr_Ugwc7mHDX…)
- "The other thing this video displays, is the scene where Jazza and Shad are fight…" (ytr_UgzuRmafV…)
Comment
It is well documented from autopilot use in aviation that humans are poor supervisors of automated processes. When automation works almost all the time, humans are quite simply not capable of reliably catching the few times the automation fails (and the better it works, but still short of 100%, the worse it gets). We lose concentration, we do other things, maybe we even fall asleep. I'm an Airline Transport Pilot and I know it happens all the time in the air, but up there seconds or even minutes don't matter so much so the consequences are very rarely serious. Driving a car is a completely different matter, and fractions of a second can matter. Not killing people MOST OF THE TIME is not an option!
youtube · AI Harm Incident · 2025-10-27T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
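The four coding dimensions in this table map naturally onto a small record type. Below is a minimal sketch in Python, assuming the category sets contain only the values visible on this page; the project's actual codebook may define additional categories.

```python
from dataclasses import dataclass

# Category sets below include only the values visible in the coding results
# on this page; the real codebook may allow more (this is an assumption).
RESPONSIBILITY = {"none", "user", "government", "distributed"}
REASONING = {"unclear", "consequentialist", "deontological", "mixed"}
POLICY = {"none", "unclear", "regulate", "ban"}
EMOTION = {"indifference", "approval", "fear", "outrage"}

@dataclass
class CodedComment:
    """One comment's coding along the four dimensions shown in the table."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # True only if every dimension uses a known category value.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```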
Raw LLM Response
```json
[
{"id":"ytc_Ugz9vDL2mvm2LMeQRJF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHb0HOalEEXQkIi7N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzBL-P0xDlzz1mItV94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlJqnrFphh0LcFcoZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwm8FChJoZnF5CW1it4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxtL3xakW9ZzJKY1uJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxNW37o2fREK4uak8F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8Q1-j_uNGGNf4HZh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyw8b7T7tBb_qg7tVN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxDXFQvv33XyoVTpsZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"ban","emotion":"outrage"}
]
```
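To recreate the lookup-by-ID view from a raw batch response like the one above, the JSON array can be parsed and indexed by its id field. Below is a minimal sketch, assuming the model always returns a well-formed JSON array; real model output may need fence stripping or a retry on parse errors, and the file name here is hypothetical.

```python
import json

def index_codings(raw_response: str) -> dict[str, dict]:
    """Parse a raw batch coding response and index each record by comment ID."""
    records = json.loads(raw_response)  # assumes a well-formed JSON array
    return {rec["id"]: rec for rec in records}

# Example: look up the coding shown in the table above.
raw = open("raw_llm_response.json").read()  # hypothetical file name
codings = index_codings(raw)
print(codings["ytc_UgzlJqnrFphh0LcFcoZ4AaABAg"])
# -> {'id': ..., 'responsibility': 'distributed', 'reasoning': 'consequentialist',
#     'policy': 'unclear', 'emotion': 'fear'}
```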