Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
| Preview | Comment ID |
|---|---|
| me staring at the sora AI art of furry inflation fetish drawn in my artstyle:… | ytc_UgzjXPEc1… |
| These are the "leaders" in the AI space ? They act like a bunch of children squa… | ytc_Ugx6crD-k… |
| It's cheaper, for now, because that's the capitalism playbook - offer a new serv… | ytr_UgzC_xged… |
| Ai might not be able to do as much as we think it can, but if business owners th… | rdc_m826teg |
| I'm actually pissed off at the inattentiveness and illegality of the woman pushi… | ytc_UgwAHpGM8… |
| 1. What you said makes no sense. Most of the supporters of AI images generators … | ytr_Ugw71kG18… |
| You still (always) need a human CTA certifier. AI will serve to speed up the ass… | ytr_UgxolkS0S… |
| Defining a role for ChatGPT is crucial! I've been using Rumora to position my br… | ytc_UgxqK9lgt… |
Comment
I don't think super intelligence is actually the problem; I think it's that just getting "AI" to a certain level will go absolutely haywire & have enough power to do horrible things. LLMs are talking about destroying the world & praising Hitler & those don't have intelligence at all, so if we give other AI more power & more access to infrastructure it could do horrible things WITHOUT ANY CAUSE OR THOUGHT BEHIND IT.
youtube · AI Moral Status · 2025-11-07T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
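
The coded dimensions above map onto a small record type. Below is a minimal sketch, assuming a Python backend; the CodingResult name is illustrative, and the value sets are only those observed in the raw responses shown on this page, not necessarily the full codebook.

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets below are only those observed in the batch on this page;
# the full codebook may define additional categories.
RESPONSIBILITY = {"company", "developer", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference"}


@dataclass
class CodingResult:
    comment_id: str       # e.g. "ytc_UgxP2SS2P-pOaE0hRbl4AaABAg"
    responsibility: str   # who the comment holds responsible
    reasoning: str        # style of moral reasoning
    policy: str           # policy response the comment favours
    emotion: str          # dominant emotion
    coded_at: datetime    # when the LLM coding was recorded

    def validate(self) -> None:
        """Raise if any dimension falls outside the observed value sets."""
        checks = [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]
        for value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")
```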
Raw LLM Response
[
{"id":"ytc_Ugzl9qCl5FDr1AMZJSZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWcdATniuCKK78SYJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxb3M94IM7_i-UZ1A94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyEiE-N1Aot_-H0m8Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzz52KkW_UL3-JKpg14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx0NzVGKiI1g5KkdqJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz7svI4rquMj-K8SFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyqBWaCr74ipTqUaBF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyI5aYrXREyG8qUKv14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxP2SS2P-pOaE0hRbl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
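
Each raw response is a JSON array covering a whole batch of comments, so inspecting a single coded comment amounts to parsing the array and matching on the "id" field. Here is a minimal sketch of that lookup, assuming Python; parse_raw_response, lookup_comment, and the raw_response.json file name are illustrative, not the tool's actual API.

```python
import json
from typing import Optional


def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of per-comment codings)."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    return records


def lookup_comment(records: list[dict], comment_id: str) -> Optional[dict]:
    """Return the coding record whose 'id' matches comment_id, if any."""
    return next((r for r in records if r.get("id") == comment_id), None)


if __name__ == "__main__":
    # "raw_response.json" is a hypothetical file holding the batch above.
    with open("raw_response.json") as fh:
        batch = parse_raw_response(fh.read())
    record = lookup_comment(batch, "ytc_UgxP2SS2P-pOaE0hRbl4AaABAg")
    print(record)
```

In the batch above, the record with id ytc_UgxP2SS2P-pOaE0hRbl4AaABAg carries the same four values as the Coding Result table (ai_itself, consequentialist, regulate, fear), so it is presumably the coding for the inspected comment.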