Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- ytr_Ugz9X-28n…: "@goober-7290 your opinion that stealing doesn't matter if nobody notices is so r…"
- ytc_UgzrVBIpb…: "Samuel Harris Altman = 666. He is the head of a 10-person board (beast) and subd…"
- ytc_UgzLsE4so…: "Don't worry too much about it, a reset will come and to bring back the balance…"
- ytc_UgwP76yUo…: "I watched this expecting to get some footage of the autonomous truck making mist…"
- ytc_UgwEF_Xxj…: "Did you hear about the spy drone that flew over a FEMA camp? It saw Ai drones, D…"
- rdc_fjdqmao: "What countermeasures you should use is determined by the tech being used for eac…"
- rdc_dy5acof: "Works as a nice loophole for their customers to use too. Program a robot to do s…"
- ytr_Ugymc7jVW…: "You don't understand what's happening. AI does not replace people. These compani…"
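Each sample ID keys back to the stored raw model output for that comment. Below is a minimal lookup sketch in Python, assuming a hypothetical `raw_responses.json` export that maps comment IDs to raw response records; the file name and layout are assumptions, not part of the tool.

```python
import json


def lookup_raw_response(comment_id: str, path: str = "raw_responses.json") -> dict | None:
    """Return the stored raw LLM output for one coded comment, or None.

    Assumes a hypothetical JSON export of the form
    {comment_id: {...raw response record...}}; the real store may differ.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return records.get(comment_id)


# The IDs in the sample list are truncated, so a full ID is needed here.
result = lookup_raw_response("ytc_UgyDQ_Q9wQI37Ckr39t4AaABAg")
print(result)
```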
Comment
Your argument about prioritizing control over the expansion of AI is thoughtful and important, and I agree the risks deserve serious attention. But I have one question that keeps coming to mind.
If responsible nations and organizations slow development in the name of safety, what happens if less responsible actors simply ignore those limits and continue advancing the technology anyway?
It reminds me of the classic gun-control dilemma: you can restrict law-abiding citizens, but criminals may still obtain weapons regardless of the rules. In that situation, the restrictions mainly affect the people already willing to follow them.
So if the “responsible world” pauses or restrains AI development while others do not, how do we prevent creating a power imbalance where the least regulated actors end up with the most advanced systems?
How are you going to solve that?
Platform: youtube · Topic: AI Governance · Posted: 2026-03-16T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
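The values in this result come from a small closed vocabulary. Below is a sketch of that schema as Python type hints, with each value set inferred solely from the ten-record batch shown under Raw LLM Response; the real codebook may define more labels.

```python
from typing import Literal, TypedDict

# Value sets inferred from the sample batch only; the full codebook
# may include additional labels for each dimension.
Responsibility = Literal["developer", "government", "company", "none"]
Reasoning = Literal["consequentialist", "deontological", "mixed", "unclear"]
Policy = Literal["regulate", "liability", "none"]
Emotion = Literal["fear", "outrage", "approval", "indifference", "resignation", "mixed"]


class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```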
Raw LLM Response
```json
[
{"id":"ytc_UgwytkjWRz4txk43RDZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyDQ_Q9wQI37Ckr39t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzs1Dnhv1nAB7lhcUl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwNSBBqcpEepOavDxR4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugzw4rnrrKUPQplRV1V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZz0UeIxQ-1Z-FU0R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgywmdzGtrV06GWdJh14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyGXKLpZniJ36Seu5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzr_dGza4U624ENI3t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy2fT--RinN6azFL9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
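Since the model returns one JSON array per batch, a light validation pass can reject malformed records before they reach the coding table. Here is a minimal sketch under the schema above; the strict key check, and the choice to skip bad records rather than fail the batch, are assumptions.

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response; keep well-formed records keyed by comment ID."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    # Skip records with missing or extra keys instead of failing the whole batch.
    return {r["id"]: r for r in records
            if isinstance(r, dict) and set(r) == EXPECTED_KEYS}
```

Feeding the array above through `parse_batch` yields ten records; `batch["ytc_UgyDQ_Q9wQI37Ckr39t4AaABAg"]["emotion"]` returns `"fear"`, matching the Coding Result table.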