Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzADEruI…: I'm kinda sad you decided to give so much attention to a speculative danger that…
- ytc_UgwLHqwWM…: I think these people do not believe that their teachers can teach well because i…
- ytc_Ugy29v8mV…: I agree, although I would say, that the felt "point of view" of "the person maki…
- ytc_UgwSws8bW…: 56:30 Weinstein is just wrong 😂 idk if he misunderstood or what. But if I ask fo…
- ytc_UgylA9PrV…: I’m pretty confident it’ll be used to the detriment of society, because it alrea…
- ytc_Ugyja856u…: For me, AI art is cool but has no actual value in it unlike one made from a real…
- ytc_Ugzc42m-n…: The “product” is becoming our prison and control system. We signed up, built our…
- ytc_UgxDaa-zU…: I really do not find the idea appealing. It is not that I fear technology or ch…
Comment
The question that needs to be asked is WHY? Has anyone done a true cost-benefit analysis? Has anyone done a complete safety analysis? For climbing a ladder at work there are pages of safety rules. For building a technology that many experts say has a 10% to 50% chance of wiping out civilization? Crickets. What about the medium case scenario where it does not destroy civilization but creates a world similar to the Terminator series, with AI enabled warlords keeping the world's serfs in line with AI enabled surveillance, robots, and killer drones? Or maybe there are too many "serfs" who will need to be culled with the help of AI. The chance of unrestricted AI development having a happy landing is probably the same as winning the jackpot at your local casino: possible, but not likely. 😐
Source: youtube | Posted: 2026-04-19T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |

Coded at: 2026-04-27T06:26:44.938723
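The four dimension values appear to come from closed label sets: the raw response below repeats a small vocabulary per dimension. As a minimal sketch, assuming the vocabularies are exactly the labels visible in this sample (the real pipeline may define more), a record validator could look like:

```python
# Minimal validation sketch for one coding record. The allowed label sets
# below are ASSUMPTIONS inferred from the sample response on this page;
# the actual pipeline may define additional labels per dimension.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "resignation", "approval",
                "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is usable."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in allowed set")
    return problems
```

Checking each record against the expected labels before display makes it easy to flag batches where the model drifted outside the vocabulary, rather than silently rendering bad values.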
Raw LLM Response
The model returns one JSON array per batch; each element carries the comment ID and the four coded dimensions. The Coding Result above matches the values of the third entry.

```json
[
  {"id":"ytc_Ugxi1X0KTtZAbBNajRl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwjAwNLH4KmDbSuBkN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzhCcudp5Fp-cpwcmB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxXOhs5gPsxLHaZzfJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyq2trOk56NrF1MSFp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxCNwLgFDWzxkf6kTJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugx0MFhjAhNnMS9PAfd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxs4W3jIq1OpsZ7BNZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxI-6VRFmL5FDdB9dJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugwcg2o27KnL-9XJDz14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
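The ID lookup above presumably works by indexing these stored batch responses. A minimal sketch, assuming each raw response is stored as the bare JSON text shown here (`index_batch` is a hypothetical helper; only the field names and the sample ID come from this page):

```python
import json

def index_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM batch response and index its codings by comment ID.

    Assumes the response body is a bare JSON array like the sample above;
    a model that wraps the array in prose would need stripping first.
    """
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Example lookup against the sample response on this page:
# codings = index_batch(raw_text)
# codings["ytc_UgzhCcudp5Fp-cpwcmB4AaABAg"]["policy"]  # -> "liability"
```

Keeping the raw text alongside the parsed index preserves an audit trail: any coding shown in the table can be traced back to the exact model output it came from.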