Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Yeh, just gotta ruin it dont you. I'm almost never on AI corporate's side but...…" — ytc_UgzjMuhHp…
- "By 2050 we should be having fully autonomous humanoid droids to serve the Milita…" — ytc_UgyqohGm-…
- "In other words these data centres and the pollution maybe deliberate to accelera…" — ytc_UgzDzEjp5…
- "Robot cars are not smart enough to drive themselves yet folks and this just prov…" — ytc_UgwTsRDQP…
- "But WHY should I increase my output? If I go to 100 MB output bandwith, there wi…" — ytc_UgyHoOede…
- "@TaylorLorenz lol ok so I just took an actual look at your content.. acute tren…" — ytr_Ugw4V7aJr…
- "It is very scary with AI drivers period. Many accidents were caused by AI cars. …" — ytc_UgwYOPqby…
- "I think it’s a little unfair ai is judging the argument score, considering it’s …" — ytc_UgwTK77B8…
Comment (youtube · AI Governance · 2025-06-18T23:0…)

> AI is crap. It incredibly inadequate. Every interaction I've had with AI in customer service is a ridiculous circle jerk leading nowhere. It's interesting how willing the human race so happily accepts mediocrity. Depressing. When are we going to stop cowtowing to wealthy class and push back? We're all screwed.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgywenVBefdHkv3U2nZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxUad6XoUmE5VVK8T54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzYuVH8FC5VL6GwIld4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxvDak1SsYNbWHP_z94AaABAg","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxgMmwLtSUkuDlOUk14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx77Zyv3mRumAbUEjR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwKYfsKX2-Jd7tI41t4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyHhVqt-8Wi1-vKX454AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxXr87X9ZFijC0yjhR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzx9nBuhkK2DHns4Zx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
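A response in this shape can be checked mechanically before the codes are stored. The sketch below, in Python, parses the JSON array and validates each dimension against the label values observed in the responses above; the `ALLOWED` sets are an assumption inferred from this page, and the real code book may include additional labels.

```python
import json

# Label sets observed in the raw responses on this page (assumption:
# the actual code book may define more values per dimension).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any out-of-vocabulary code."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}"
                )
    return rows

# Hypothetical single-item response for illustration.
raw = (
    '[{"id":"ytc_example","responsibility":"company",'
    '"reasoning":"virtue","policy":"regulate","emotion":"outrage"}]'
)
rows = validate_codes(raw)
print(rows[0]["policy"])  # regulate
```

Validating at ingest time is what makes a "Coded at" row like the one above trustworthy: a malformed or off-vocabulary model output fails loudly instead of silently polluting the coded dataset.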