Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
It can't replace people. It can assist people. I use AI to help me write code. B…
ytc_Ugwqon39O…
And then there's some who defend it because of their beliefs.
Like they think a …
ytr_UgwJyIrcS…
Ai was the most hyped yet most disappointing and hated thing to have ever existe…
ytc_UgwB9OT6o…
if you think about star trek for a moment, advances in computers made cognition …
rdc_fcs733g
Basically it's the mark of the Beast. You will not longer be human, you will b…
ytc_UgyIfvFeF…
Hi from 2026. You fuckers were right all along. I'm not suprised, just dissapoin…
ytc_UgwfRB_F5…
Dandelion root tinture cured my husband's incurable cancers in less than 3 month…
ytc_Ugx81fqeE…
14:06 I'd actually give a different response here. As bad an artist as Jackson P…
ytc_Ugz0nBP3D…
Comment
are humans any better? its going over all these extreme hypotheticals but seriously would human beings do any better? like have humans never blackmailed someone? am I the only human being that randomly has malicious thoughts?
Like 'the AI was given a hypothetical scenario where a single human being is in mortal danger, and it had to decide between the entire american nations interest or a single human beings life'. What the fuck is the right answer here? Is there one? Like human militaries make this decision every single fucking day? Nation states do this on the regular? What are our expectations of AI?
youtube
AI Harm Incident
2025-07-23T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugwbux0NNkjdVKiRJLF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMbL2XYToViPAKQUh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwlEKq1Y7fzNMvQi394AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzheSfjtZlNDkJtNA94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyVD9GjJyN58AtV-jp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyzOmysSg0nl5hJj5d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyEUDu_20utTr0QKX54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyC0D_L3clvsKTnfXJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy8BPVxwnUAAaB5dQh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyP95xHKDhNP_KdByZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
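A minimal sketch of the lookup-by-comment-ID step described above, assuming the raw LLM response is a JSON array of per-comment codings in the shape shown. The two records are copied from the response above; the variable names are illustrative, not part of any real tool.

```python
import json

# Raw LLM response: a JSON array of codings, one object per comment
# (same shape as the response shown above; two records copied from it).
raw = """[
  {"id": "ytc_UgyVD9GjJyN58AtV-jp4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyzOmysSg0nl5hJj5d4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the codings by comment ID so any coded comment can be
# looked up directly, as the inspection view does.
codings = {row["id"]: row for row in json.loads(raw)}

record = codings["ytc_UgyVD9GjJyN58AtV-jp4AaABAg"]
print(record["emotion"])  # resignation
```

In practice the raw response would first need validating (e.g. checking that each dimension value is one of the expected codes) before indexing, since LLM output is not guaranteed to be well-formed JSON.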