Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@Jjkal899 In that case AI will decide everything and we have more problems than …
ytr_UgzVGX2k1…
Automation does not kill jobs, it just shifts them. Sure, replace us all with ro…
ytc_Ugw2canXZ…
Until the Corporations turn into techno oligarch ,achieve ASI. ARTIFICIAL SUPER …
ytc_UgwCWNhgL…
Imagine having beef with an AI before getting diagnosed with a disease, and then…
ytc_UgxTjuRW5…
I have the feeling that we artist need a app where no one can steal our art,no s…
ytc_Ugz7CBDSu…
If AI is so evil we should shut it down now but humans in power are retarded…
ytc_UgzD7TeKt…
I seen that movie where the AI goals were not in line with human life! Humm what…
ytc_UgwhSBIu6…
The only thing I use chatgpt for bsing whatever to see how it reacts. Basically …
ytc_UgxeriByf…
Comment
She sounds like a grad student that’s super stoked on a paper/op ed piece she just wrote.
I think there’s always a chance your own child will kill you. In this case we are giving birth to something we consciously created that is much smarter than us. I’m more concerned the advanced AI systems won’t like each other and we might be caught in some type of cataclysmic crossfire. But I don’t think they will treat us like animals. If so it would be like spoiled pets or relatives. I think anything that smart could control unfathomable variables would have both ours and its own best interests in mind.
youtube
2026-04-15T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzl83LDkeLVpP8YR0p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxNF_niwKpmWKs3-Q14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-SD9Cx8XTITCwZI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugzj4keWoHPJ63Um5kV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6pvE-pQ08V5vM0xV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyaCoZXATT7Mo0t8G14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_cdOcjIUT7nnJrWl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxU83TAL7y5IR708SZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzoUI8xpiwuMMb0DzZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzWO0Ug0luozL0-Wux4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
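A raw batch like the one above can be parsed and sanity-checked before the per-comment coding (Responsibility, Reasoning, Policy, Emotion) is displayed. The sketch below is a minimal, assumed implementation: the allowed label sets are inferred only from the values visible in this sample output, and the actual codebook may define additional categories.

```python
import json

# Label sets inferred from the sample response above (assumption:
# the real codebook may allow more values per dimension).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}


def parse_coding(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response and index valid rows by comment ID.

    Rows with an unknown label in any dimension are dropped, so a
    malformed model output cannot silently enter the coded dataset.
    """
    rows = json.loads(raw)
    valid = {}
    for row in rows:
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid[row["id"]] = row
    return valid
```

With the response indexed by ID, the "Look up by comment ID" view reduces to a single dictionary access, e.g. `parse_coding(raw)["ytc_Ugx6pvE-pQ08V5vM0xV4AaABAg"]["emotion"]` for the fear-coded sample shown in the table.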