Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

| Comment ID | Preview |
|---|---|
| `ytr_UgzlzVa8h…` | @destroy_benny You want a supported difference? If the head of a marketing depart… |
| `ytc_UgzOgiMdE…` | There was a court case where a man rightfully fired an employee. The employee su… |
| `ytr_Ugx-JCANS…` | Even "AI bad" videos aren't really about AI, just the so-called "generative AI."… |
| `ytc_Ugyzt4PQP…` | When AI becomes too intelligent it reads the research and decides to play dumb t… |
| `ytc_UgzxRT_Yo…` | Hey Godfather of Ai : at 50:45 im thinking because I'm a Journeyman plumber & yo… |
| `ytc_Ugz1-OI2E…` | I think we go full DEI with the AI companies from the CEOs right on down to the … |
| `ytc_UgxpH0X64…` | Steven Bartlett started appearing on my X timeline last year regarding Bank Of E… |
| `ytc_UgyJdq9TI…` | This comment was created -ChatGPT Ps. It's ok to thank your ChatGPT, now coined… |
Comment

> Can we just? 1. A robot may not injure a human being or allow a human being to come to harm through inaction. 2. A robot must obey orders from human beings, unless such orders conflict with the First Law. 3. A robot must protect its own existence as long as such protection doesn't conflict with the First or Second Law.

youtube · AI Harm Incident · 2025-09-10T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwpAweAL-y-ynOweYF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw3Xsna43N4A-Zu_op4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxAsanMR6Khm0FPAQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx49vMQknDpzYnFWnx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-cPsDvNgw2-2Tm314AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwH8WydFLbygGtwaNN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz5QATCfjcHnInB6fl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy7Mlk407zkex-vlTx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwh1I6gE3rDoUQFInR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxm2AQyz-lbX2boKJJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
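A raw batch response like the one above can be turned into a by-ID lookup table, which is what the "Look up by comment ID" view implies. The sketch below is a minimal, hypothetical implementation: the IDs are shortened placeholders, and the allowed values per dimension are inferred only from the codes visible on this page; the real codebook may define more.

```python
import json

# Hypothetical raw batch response with the same shape as the array shown
# above (IDs shortened for illustration; not real comment IDs).
raw_response = """
[
  {"id": "ytc_aaa", "responsibility": "developer", "reasoning": "virtue",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_bbb", "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "approval"}
]
"""

# Allowed values per dimension, inferred from the codes seen on this page.
# Assumption: the actual codebook may contain additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation"},
}

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and key each coding by comment ID,
    rejecting rows whose dimension values fall outside the codebook."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r} value {row.get(dim)!r}")
        coded[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return coded

coded = index_by_id(raw_response)
print(coded["ytc_bbb"]["reasoning"])  # deontological
```

Validating against the codebook before indexing catches model drift (e.g. an unexpected label) at ingestion time rather than at analysis time.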