Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Despite his brilliance and achievement, I find it hard to respect Mr.Hinton. If … (`ytc_Ugwy1PXGe…`)
- Honestly I hope the AI takes over the world it would be nice to have intelligenc… (`ytc_Ugw7BD8Xy…`)
- ChatGPT responds based on how the user inputs. It is mirroring back his style. G… (`ytc_UgwuXN8dT…`)
- Yea its an AI account so obviously other AI bros are gonna follow and like their… (`ytr_UgzmSHW78…`)
- 20:54 Now I’m just imagining an AI reaction YouTuber. Imagine telling an AI to r… (`ytc_UgzHuw7a4…`)
- My consciousness starts in my gut. My brain then tries to make sense of that exp… (`ytc_UgyyLOzP-…`)
- Well, the problem is: it’s a race for everything. So, safety isn’t in the equati… (`ytc_Ugxn71dq8…`)
- Kristal throwing shade at musk it’s so nauseating to watch, first guy for years … (`ytc_UgwJSwAUE…`)
Comment
It’s precisely because of the limited intelligence of businesspeople, the lack of general education among programmers, and the overall decline in social intelligence that they are all so obsessively and boundlessly developing artificial intelligence.
And they don’t realize that they’re creating artificial intelligence to work — but not artificial wisdom to make decisions.
I believe we need both. Combined or not.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-07-28T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxPOADqPfmg_beUbgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwFdV8aOI1soJ-zf1l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwxcW0OPYRBCnkzIzF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzBnfEsGmrnDWZVv614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyuAy9jloeP6AW7sJ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugycs_uZ_-AxiRcbMqx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzLWfoklm0nqSFi14R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw-Pok4bmyVoLvpWZJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxe3pthF_NF-9UH67d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyyFZnFv1k2BhpxnV54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
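The raw response is a JSON array with one object per comment ID, carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and validated before storing the codes (the allowed values below are inferred from the samples on this page; the actual codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the examples above
# (an assumption -- the real codebook may include additional labels).
SCHEMA = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting unknown labels."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in SCHEMA}
        for dim, value in codes.items():
            if value not in SCHEMA[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"regulate","emotion":"mixed"}]')
print(parse_codes(raw)["ytc_example"]["policy"])  # regulate
```

Validating against a fixed label set catches the common failure mode where the model invents a category outside the codebook, so bad rows fail loudly instead of polluting the coded dataset.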