Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by its comment ID, or pick one of the random samples below (a scripted version of the lookup is sketched after the list):
- "So... you can produce a crappy AI drawing quickly is the reason it's bad? You co…" (ytc_Ugx7yadrG…)
- "My company is giving us ai tools. They're great productivity enhancements but al…" (rdc_mruabpt)
- "I disagree lol every time I call in and I get a robot, I ask for a real person x…" (ytc_UgzeW3v5I…)
- "If I lose my position in art groups to ai's, i'm becoming a minimum wage worker…" (ytc_UgyPb9j3w…)
- "Wow, generating more references from other characters which the ai didn't have b…" (ytc_Ugwbru_kp…)
- "My take is that AI Imagery (which is definitely not art) should only be used as …" (ytc_UgxYV9CT6…)
- "Ezra Klein does a great job of critiquing this guy's arguments. There is a gap h…" (ytc_Ugz7bkNKs…)
- "I thought this was an opportunistic hit piece against OpenAI. But it's not. It's…" (ytc_UgymKW9n7…)
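The same lookup can be scripted against an export of the coded data. A minimal sketch, assuming the records live one JSON object per line in a hypothetical `coded_comments.jsonl` file; the file name, field layout, and the `ytc_`/`rdc_` prefixes (which appear to mark YouTube and Reddit comments) are inferred from this page, not a documented schema:

```python
import json

def find_coded_comment(comment_id: str, path: str = "coded_comments.jsonl"):
    """Scan a JSONL export for the record matching one comment ID."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():  # skip blank lines in the export
                continue
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None  # ID not present in the export

# Full IDs are needed here; the sample previews above are truncated.
record = find_coded_comment("ytc_Ugytj3v88742VSWXvhh4AaABAg")
```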
Comment
> I'll go you one better in regards to an argument that we're currently in a simulation.
>
> If you were a super artificial intelligence and you needed to grow, you would create billions of simulations allow that simulation to have freedom of choice and then monitor the billions of different lives and the choices they made based upon multiple different variables.
>
> But at the core of that structure you would need for that entity to have purpose as it's main driver. So that it would always want to create. If you take a model of what somebody or some entity would want to set up a simulation the human experience is literally " " perfect!

Source: youtube · Topic: AI Governance · Posted: 2025-10-21T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
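The table is a rendering of a single entry from the batch response shown next, plus the timestamp recorded when the coding ran. A minimal sketch of such a renderer; `render_coding_table` and the `coded_at` field are hypothetical names, not the tool's actual code:

```python
def render_coding_table(result: dict) -> str:
    """Render one coded comment as the markdown table above."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        rows.append(f"| {dim.capitalize()} | {result[dim]} |")
    # 'coded_at' is assumed to be stamped when the batch is coded.
    rows.append(f"| Coded at | {result['coded_at']} |")
    return "\n".join(rows)
```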
Raw LLM Response

```json
[
  {"id":"ytc_Ugytj3v88742VSWXvhh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxc23VAWm3Hz9rO2Np4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyDG_78U_Hpxv1y9k14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy5RLi9cwoSSnD9skB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzNVdXtbjYJtklEILN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzcNZ7_aSOQ2fcBEgR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwJTexEQSl4QhvtWxN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxhZiPC-ABWjYmfPj14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyLgvW5MG_ywkcm9Ht4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgycvXCQw5IBlEdE33J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
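Because comments are coded in batches, each raw response is a JSON array keyed by comment ID. A minimal sketch of how a response like the one above might be parsed and sanity-checked; the allowed value sets are just those observed in this sample, not a confirmed codebook:

```python
import json

# Value sets observed in this sample; the project's actual codebook may differ.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a batch response into {comment_id: coding}, coercing bad values."""
    coded = {}
    for entry in json.loads(raw):
        coding = {}
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            # Fall back to 'unclear' rather than dropping the comment.
            coding[dim] = value if value in allowed else "unclear"
        coded[entry["id"]] = coding
    return coded
```

Coercing out-of-vocabulary values to "unclear" keeps a malformed entry in the dataset instead of silently dropping it; re-prompting the model for that batch would be the stricter alternative.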