Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I don’t feel strongly about ai since I’m not an artist but I personally don’t go…" (ytc_UgwVESNgx…)
- "How dumb can you be to take this seriously? It's like someone threatening to dr…" (ytc_UgzRSEWk2…)
- "Yeah I already accepted that AI is here to stay. But the fact that it looks like…" (ytc_UgyHX9nyo…)
- "How is the robot supposed to see and stop for someone hiding in the dark and ste…" (ytr_UgxsiT9yw…)
- "let's just start an AI church, an AI (B E E R) GOD: HOW MAY I HELP YOU; I AM L…" (ytc_UgywAIj6D…)
- "I can make them safe power button on off AI can't work without power y'all falli…" (ytc_UgyszJ8dN…)
- "I’d like to see how well AI/super intelligence handles my wife when she’s on her…" (ytc_UgwE4YkYf…)
- "PL/1 and JCL are mainframe things. That's why you don't hear of them in Silicon…" (rdc_gly8jon)
Comment
> The more I listen, the more I struggle to answer a simple question:
> Why can we elect our politicians every four years, yet have no real say in the tools - life changing tools - we’re allowed to use?
> At first, it seems like we do have a choice — we’re told, "Use it how you want, if you need it."
> But beneath that freedom, there’s a subtle, persistent echo: "If you don’t jump on the AI wave, you’ll fall behind. You won’t keep up with technology."
> In truth, we’re not just encouraged — we’re being urged to adopt AI, whether we want to or not.
> And that brings up another question:
> Why don’t we, as humans, have the right to decide how far we want to go with AI?
> Why can’t we choose to use it to enhance human life — instead of developing it to replace human skills, and eventually, humans themselves?
> 🤔
youtube · AI Governance · 2025-10-12T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
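For downstream analysis it can help to sanity-check each coded record against the codebook's label sets. Below is a minimal Python sketch, assuming the codebook contains exactly the values observed in this sample batch (the real codebook may define more categories; the `ALLOWED` sets and the `validate_coding` helper are illustrative, not part of the tool):

```python
# Label sets observed in this sample batch; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"government", "developer", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "resignation", "indifference"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with a coded record; empty means it passes."""
    errors = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value is None:
            errors.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            errors.append(f"unexpected {dimension} value: {value!r}")
    return errors

# The record from the table above passes cleanly.
coded = {"responsibility": "government", "reasoning": "deontological",
         "policy": "regulate", "emotion": "outrage"}
print(validate_coding(coded))  # []
```

A check like this catches the most common failure mode of model-coded data: a response that is valid JSON but drifts outside the label set.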
Raw LLM Response
```json
[
{"id":"ytc_UgyLA8Y7VoD6kf25Vdx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzl5qr8KFF07qyU4494AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzxOIOEq3sB6d9Ixyp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzcesdCYI_cbuX4sa94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxriHHZFIxVavot6KN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwbdT-rrRmt_3qkwn94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyjNv5vHHFN-AX_SZF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzT36Y-BsGKNPUabVh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugz5hMk7ugTttpZ-gIZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwLRVaFQZLUDMznC7h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
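The raw response is a plain JSON array, so looking a single coding up by comment ID reduces to parsing the response and indexing on the `id` field. A minimal sketch, using two records copied verbatim from the response above (the `index_codings` helper is illustrative, not the tool's actual implementation):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgyLA8Y7VoD6kf25Vdx4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzl5qr8KFF07qyU4494AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

def index_codings(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM response and index its codings by comment ID."""
    return {record["id"]: record for record in json.loads(raw_response)}

codings = index_codings(raw)
print(codings["ytc_UgyLA8Y7VoD6kf25Vdx4AaABAg"]["policy"])  # regulate
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a useful signal to re-prompt or flag the batch for manual review.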