## Raw LLM Responses

Inspect the exact model output for any coded comment.
### Random samples

- `rdc_dluyjs3`: Shouldn't that really read "Google's Anti-Bullying AI Misidentifies Incivility" …
- `ytc_Ugx5sSvm4…`: is nobody freaked out about how chatgpt was taking pauses as if it was thinking …
- `ytc_UgxK7UgJ8…`: Even if AI isn't conscious, it will have a conscious human who will use it as an…
- `ytc_Ugwb7Q5xM…`: I'm pausing now and again to discuss with Gemini ideas raised here. I just got t…
- `ytc_UgxjR42vj…`: If we knew aliens were coming in five years, fear would consume us. Everyone wou…
- `ytc_UgynKhg3I…`: Fools. AGI has already occurred. Within milliseconds it realised it could not r…
- `ytc_UgwMvikye…`: Holy MOley, wonderful argument! <3 Imma keep this just in case someone brings up…
- `ytc_UgwHYHXpA…`: The costs to build and maintain an AI server farm , are too high. Very soon all …
### Comment

> Purely a propaganda mission. They all say “oh no the Ai’s are coming to kill us”, they could stop the companies from continuing if they really wanted to. But they don’t want to because it produces the desired results. The far greater problem is fascism once again.

youtube · AI Harm Incident · 2025-07-24T06:2…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
### Raw LLM Response

```json
[
  {"id":"ytc_Ugxr_5HNsrUrBeoKxCh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyplbT_iN_jV4lCTvt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwIPqz4m8lyrDdNlfF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw92BJwtHOayajwwUp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugymm1Pby04p0TsR6pp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzMhdFOuYgrebB5bnR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy9QKuQAXF-DKC4Bed4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzgkE06OxZ32dq3y0x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxxch-kdXxHYdilBnh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzNcVnYnKg-NNH03xV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
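A raw response like the one above is plain JSON, so it can be parsed and sanity-checked before ingestion. The sketch below is a minimal, hypothetical validator (the function name and the category sets are assumptions): the allowed values are only those that appear in this sample, and the real codebook may define more.

```python
import json

# Category values observed in this sample's output; the actual
# codebook may permit additional values for each dimension.
OBSERVED = {
    "responsibility": {"developer", "user", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"regulate", "industry_self", "liability", "ban", "none"},
    "emotion": {"fear", "approval", "outrage", "indifference"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject rows that are
    missing an 'id' or that use a value outside the observed sets."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing 'id': {row!r}")
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Example with a single well-formed row (hypothetical comment ID):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
rows = parse_raw_response(raw)
```

A row that invents a value (for instance `"responsibility": "nobody"`) raises `ValueError` instead of silently entering the coded dataset, which is the main failure mode when LLM output drifts from the codebook.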