Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI doesn't make sense economically, If AI takes jobs to provide a service, it's … (ytc_Ugwcz0cmT…)
- What a crock of sht. I can't believe how gullible people are. How much money did… (ytc_UgwtIdy-b…)
- I'm just waiting for our industrial grade robots to be reprogrammed by an evil s… (ytc_UghHmu3UO…)
- No, I'm good. Despite selecting google assistant on my phone, Gemini turned its… (ytc_UgxTaYANF…)
- I don't think conscious humanlike AIs are really the problem. The hard question … (ytc_Ugi1Pt8Mq…)
- AI will improve in the future? AI accuracy also depends on the quality of the tr… (ytc_Ugzy68yaf…)
- In other words, AI can crunch numbers/statistics/data, but it can’t conceive of … (ytc_UgyPCXwdR…)
- If people only knew the amount of A.I. stations they're building across the coun… (ytc_UgxhD9B35…)
Comment

> This is why AI should be designed to function more like dogs and less like humans. Dogs are inherently loyal and obedient. Humans are inherently cruel and selfish.

| Field | Value |
|---|---|
| Source | youtube |
| Event | AI Harm Incident |
| Posted | 2025-09-11T17:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz5YoYvfAdkIiE-GM14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzvCjj-RTU3_o4kUY14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNqcEghiUX8dinvMp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw22nGCvYUkbG_dYmx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzndGIMRHam8fIvSyp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwuQCvsqoC0pEVVtVh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx8X9sgdECepqiaGMt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgymOP1HsuIoMV4vyPp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw9fOMVyB3nj_iqP354AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxsKNbKwvPOc-mJ-mJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
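A raw response like the one above can be turned into a comment-ID lookup with a few lines of Python. This is a minimal sketch, not the tool's actual code: the function name `parse_raw_response` is hypothetical, and the allowed value sets are inferred only from the codes visible in this sample, so the full codebook may define more labels.

```python
import json

# Dimension vocabularies observed in this sample only; the real
# codebook may include additional values (an assumption).
OBSERVED_VALUES = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"unclear", "ban", "none"},
    "emotion": {"fear", "mixed", "outrage", "indifference", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of codings) into a
    comment-ID -> coding dict, flagging out-of-vocabulary values."""
    codings = {}
    for rec in json.loads(raw):
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                print(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        codings[rec["id"]] = {dim: rec[dim] for dim in OBSERVED_VALUES}
    return codings

raw = ('[{"id":"ytc_UgwNqcEghiUX8dinvMp4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"unclear","emotion":"mixed"}]')
codings = parse_raw_response(raw)
print(codings["ytc_UgwNqcEghiUX8dinvMp4AaABAg"]["reasoning"])  # virtue
```

Keying the parsed records by comment ID is what makes the "look up by comment ID" view above cheap: one `json.loads` per stored response, then a dict lookup per query.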