Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> I remember reading Ray Kurzweils books 25 years ago and thinking that if the wrong people are in control of AI, we (plebeians) are screwed.
> And sure enough, it's all coming to a head in the middle of the Trump/Oligarch Era...it's like the perfect shit storm.

youtube · AI Governance · 2025-06-16T22:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyNXwlU3oUhH55bxB14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxri83304Gmy6VHP4R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxnWzuj5XwAXcuhtGd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxQo4FZyqz_JJE-vyZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyv6gM9Kk7yIkpirp14AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwWV35Rn0RJXGWOm1d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwN2h3Pl_ykQLQa5v54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwMZ5FzI8z_delxnmp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwMGBUNsm_NpkaqyLF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyKxIRWsM138xc1YtZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
```
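The model returns one JSON object per comment, keyed by `id`, with the four coding dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed for lookup — the field names come from the output above, but the `index_codes` helper and its validation logic are illustrative assumptions, not the tool's actual implementation:

```python
import json

# First two records of the raw LLM response shown above (abbreviated for the sketch).
raw = '''[
  {"id":"ytc_UgyNXwlU3oUhH55bxB14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxri83304Gmy6VHP4R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

# Fields every record is expected to carry, per the response format above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw_json: str) -> dict:
    """Parse the model output and index codings by comment ID,
    rejecting records that are missing any expected field."""
    records = json.loads(raw_json)
    coded = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        # Store the four dimensions, keyed by comment ID.
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

codes = index_codes(raw)
print(codes["ytc_UgyNXwlU3oUhH55bxB14AaABAg"]["emotion"])  # fear
```

Indexing by `id` is what makes the "look up by comment ID" view possible: each coded comment resolves to its four dimension values in constant time.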