Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Eric talking about regulations for something that is already being pursued aggressively by these companies is hilarious. No one is going to slow this train for fear someone else will get there first.
Will AI destroy humanity? I have my doubts. I don’t think it will consider us all that much. But thinking we’ll control the super intelligence is laughable. It’s like saying we’ll follow the instructions of chimps and only do what’s in their best interest.
We’re already seeing the economic and social impacts that this tech is having and it’s no where near super intelligence. It’s only as bad as it will ever be today. Just look at the improvements in the last year.
I appreciate that they brought on someone with an alternative opinion, but he clearly is only giving a rose colored view. What you have to ask is why? He obviously has more insight than most people but still chooses to mislead and downplay the risk.
youtube
AI Governance
2026-04-13T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxAw3mE9oKvRc3RAz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwY-tFU4ftjKYdiThR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxTgoQVmJGORJ1q9lp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzBYof-1J16eUKusQ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugypv_hyeGCHZsCFzGZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy130o6Rrm4akewzNt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzBLlBWPzsDnpiI35F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxbtY7eUwNkP6w-ei54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzee1i5bls__u9Knbh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgziKjZMdAcI4Fs8yKB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
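A batched response like the one above can be parsed and indexed by comment ID for inspection. The sketch below is illustrative, not the tool's actual implementation: the field names match the JSON shown, but the two sample records and the lookup code are assumptions for demonstration.

```python
import json

# Raw model output, truncated here to two records from the array above.
raw_response = """
[
  {"id": "ytc_UgwY-tFU4ftjKYdiThR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzee1i5bls__u9Knbh4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"}
]
"""

records = json.loads(raw_response)

# Index by comment ID so the exact model output for any coded
# comment can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgwY-tFU4ftjKYdiThR4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # company fear
```

Keying the dictionary on the comment ID makes the lookup O(1) per comment, which matters once the batch grows beyond a handful of records.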