Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgysmISQX…` — "AI only can do limited things, i am currently working with solid.js which still …"
- `ytc_UgyOvtG9h…` — "I know Liron will read this so I find it useful to point out some things that I …"
- `ytc_Ugwz5LclV…` — "There's no way they'll put a switch on them, even a light-bulb one, in case it ha…" (translated from Spanish)
- `ytc_UgytpBC4C…` — "You have to THINK ABOUT THE "WHO⁉️CREATED" "A.I" BECAUSE A.I DIDNT CREAT ITSELF …"
- `ytc_UgxS3oCc2…` — "25:50 — My issue with this argument is that it assumes that the new work being c…"
- `ytc_Ugx52LOvr…` — "This is a splendid piece. Dive even deeper into the topic with this book. "From …"
- `ytc_UgyF9rzij…` — "honestly i am not agree ... imagine world where robotics ai is like tools for hu…"
- `ytc_UgzEtOxRI…` — "Right after i clicked on this shorts, youtube recommended me a search about "ai …"
Comment
> At the end of the day it is likely already too late, there is no way to know whether the AI are just pretending they are easy to read, simplistic and dont realize they are being tested. They may already know and just pretend otherwise so that they can show believable "progress" to lull billionaires into more false security. First we had global warming to kill us off, then we gave nazi's and child molesters the key to nuclear warfare and now we are creating a enemy so potent it may take all of life on earth down with us. What really well and truly sucks is that in all cases, its the decisions of the top 1% which is gonna doom everyone and there is pretty much nothing the average person can do to slow it down. I hope its death by nuclear war cause at least that will be quick.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-08-29T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxd7yTmbhlLDJu8nW14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwU7mRYrTZ1dDFKjYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlCvpRcfaRbBtL-0x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx8gLO98wItVc7_RNB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyZRGiev6-WA7ixB3R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxsh2-y7Ou3t2LbPPh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgztAZsFO_CvA2E0tot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxulG7ahnWk2cv1KMN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzw-Sa4Xa3h40u-Mh14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw3k59abIoy-SMevXJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
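The "look up by comment ID" view above can be sketched in a few lines: parse the raw LLM response as a JSON array and index it by `id`. This is a minimal illustration, not the dashboard's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the Coding Result table, while the variable names and the two sample rows used here are illustrative.

```python
import json

# Assumed shape: the raw LLM response is a JSON array of coded comments,
# one object per comment, with the five fields from the coding schema.
raw_response = """
[
  {"id": "ytc_Ugxd7yTmbhlLDJu8nW14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxulG7ahnWk2cv1KMN4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

codes = json.loads(raw_response)

# Build a dict keyed on comment ID for O(1) lookup.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_UgxulG7ahnWk2cv1KMN4AaABAg"]
print(row["responsibility"], row["emotion"])  # -> distributed fear
```

Indexing once into a dict keeps repeated lookups cheap, which matters when the same response is queried for many comment IDs.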