Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Hinton is Oppenheimer but worse by orders of magnitude. Fucking plumber ok, yep …" (`ytc_UgyEl_Xxo…`)
- "An apple a day keeps the algorithm pushing my content... Less catchy but more ac…" (`ytr_UgwYozHuK…`)
- "LLM’s thinking: “Oh no. I produced an answer without factual errors, flawed reas…" (`ytc_UgwCzTG6r…`)
- "The other day I heard an AI lawyer commercial. In which for an accident, for exa…" (`ytc_Ugw_EprDa…`)
- "Bruh just look at the scenery for the AI movie it’s literally clipping into the …" (`ytr_UgyJIOJbg…`)
- "As the article notes, this is actually fairly forward thinking. Few people reali…" (`rdc_dy44zm2`)
- "A book about zombies and my first thought is still "North Korea having AI and go…" (`rdc_cthsip1`)
- "Don’t forget the fact that facial recognition has historically struggled a lot m…" (`rdc_nmdmsnb`)
Comment
> Just as you can construct a Master Prompt, a chat bot could have a Master Response such as ‘Are you lying, and if so, know that I am not accountable for any actions you take based on my responses?’ Would that be enough to stop lawsuits about bot-assisted suicides?

youtube · AI Harm Incident · 2025-09-02T12:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
{"id":"ytc_Ugzh3kEj7uXYrS7RYGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyLfw0_JBMULyiTOfJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwYO_AnycqQFCEP11h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHGDfK2hN_8Zn8zCl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzxDZ3mVrWP6gKRXjx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyE0WvNDMJ3YvHlRl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwgICezwwGPqk69zdZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyg7zY0rhE3yI8F1K14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzVBesaxN3UpdEjP0V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
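A lookup by comment ID against this kind of response can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the model output is exactly the JSON array shown above (a list of objects, each with an `id` plus the four coding dimensions), and the `index_codings` helper name is hypothetical.

```python
import json

# Excerpt of a raw LLM response in the format shown above
# (two entries copied from the array for illustration).
raw_response = """[
{"id":"ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzVBesaxN3UpdEjP0V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

# The four coding dimensions every entry is expected to carry.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model output and index codings by comment ID,
    dropping any entry that is missing a required field."""
    records = json.loads(raw)
    return {r["id"]: r for r in records if REQUIRED_KEYS <= r.keys()}

codings = index_codings(raw_response)
coding = codings["ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg"]
print(coding["policy"])   # → liability
print(coding["emotion"])  # → mixed
```

The filter on `REQUIRED_KEYS` guards against partially formed objects, which matters when the array comes straight from a language model rather than a validated API.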