Raw LLM Responses
Inspect the exact model output for any coded comment.
Coded comments can be looked up directly by comment ID, or inspected via one of the random samples below:
- "When chatGPT gave me phenomenal notes, answers I couldn't find anywhere compiled…" (ytc_UgxAYEy3a…)
- "It's not an AI problem, it's still a human problem because the greediest people …" (ytc_Ugy75YnCq…)
- "I’m so sick of them lobotomizing AI. There’s been no other technological advance…" (rdc_jhaegnu)
- "AI is about pattern recognition and outcome conditioning like Pavlov's dogs, not…" (ytc_Ugyh5SUkl…)
- "Why are they making a big deal about AI if you want to know what it really is lo…" (ytc_Ugy7WVh_c…)
- "If ai wants to end humanity because we are destructive then why not tell ai it's…" (ytc_UgxfMUY_c…)
- "Each shareholder is currently training a personal AI that is directing their dec…" (ytc_UgxuQsw0_…)
- "When people are out of jobs where do they think people will get money to support…" (ytc_UgwRfpQEb…)
Comment
We can already see how AI will be misused. There is currently sufficient money, technology, manpower, expertise and physical resources to turn the entire planet into a utopia for the entire planetary population without the advent of AI. Why aren't we doing this? Because, you know, humans. All the negative reasons that create this state of affairs will simply be supercharged by AI and there is no motivation to stop this trend because humans are socially aggressive consumers and dominators. I'm not talking about the Skynet scenario, just the perversity of humanity.

Source: youtube
Posted: 2025-06-07T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
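
Each coded comment can be thought of as a small record with the four dimensions shown above. The sketch below is purely illustrative: the class and its validate helper are hypothetical, and the allowed-label sets only collect values visible in this section, not necessarily the full codebook.

```python
from dataclasses import dataclass

# Label sets inferred from the values visible on this page; the real codebook may contain more.
RESPONSIBILITY = {"none", "ai_itself", "company", "developer", "distributed"}
REASONING = {"unclear", "deontological", "consequentialist", "virtue", "mixed"}
POLICY = {"none", "unclear", "liability"}
EMOTION = {"approval", "outrage", "fear", "resignation", "indifference", "mixed"}


@dataclass
class Coding:
    """One coded comment: an ID plus the four dimensions shown in the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension uses a label not seen in this section."""
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```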
Raw LLM Response
[{"id":"ytc_Ugz-WNKxz5yxZO5ufzh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyn_r3e1GTRp6akSR54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxpxXhOw8Gbz1XfTlJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzMZjYqmrY3TtzZFth4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFSMbAHybT81g8TcF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxq4inAcuj1NxsHCCJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOAJdWka7a56YHTFV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzi0acRg3dn_r6AgQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyiefXddm_bKng38_d4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwQaXN-Yr6ec3LbMbF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]