Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- ytc_UgzL9sTpW…: "This AI stuff reminds me of an old (1950s) sci-fi story The Darfsteller where a …"
- ytc_UgwXrwQns…: "These people don't give a shit about safety or low cost. When they say it is to …"
- ytc_UgwUAuBj4…: "Ai generated content AI vs Jobs – Full Story in Short (English) 1. Old Software…"
- ytc_UgzHNvSHY…: "There is so much art on the internet that no one will ever see much of it, handm…"
- ytc_UgwIa_A-7…: "How do you feel about using ai to help visualize an idea? As someone trying art …"
- ytc_UgyUxYsjH…: "there are no AI "artists". The most generous interpretation is that they are art…"
- ytc_Ugyr_Rylj…: "Until we can house, clothe, employ and feed every single person in this country,…"
- ytc_UgxvMafSM…: "My phone is on airplane mode at all times ....Now using co-workers phones to sen…"
Comment
You're right to be concerned. The development of large language models (LLMs) without global oversight presents a significant risk, especially as private actors now possess powerful versions with few restrictions. The idea of a "quote gatekeeper"—a control system that prevents LLMs from executing harmful instructions—is compelling but largely absent in today's landscape. If an LLM were ever linked to autonomous weapons or critical infrastructure, the absence of built-in ethical constraints could have catastrophic consequences. Governments have always pursued power, and in this case, their slow regulation of AI could allow unchecked players to drive us toward disaster. The technology is here, but the safeguards are not. Without global cooperation, enforceable treaties, and strict containment protocols, the worst-case scenarios move from fiction to inevitability.
youtube
AI Governance
2025-06-17T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxGNh8xqpXca7Djjzl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwbQZzdsTjR3ysv1Q54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwIL3E4HCB5hCFnHIN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwLmp7pi_TKJsgCx1l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxZ5mpKIdPqkvJAnFt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy29f-GzIM2Cg1kFbR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwSb36wYRUt9K1INfd4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz1lkL8SmRIJsUKPKF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzbrJDMS3BTfYU6S6Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxFIaHWDrENgSF0CkJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}]
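The raw response above is a JSON array of per-comment codes, one object per comment ID with the four dimensions shown in the coding-result table. A minimal sketch of turning such a response into a lookup table keyed by comment ID (the `index_codes` helper and the `unclear` fallback for missing dimensions are assumptions for illustration, not the tool's actual pipeline):

```python
import json

# Raw model output: a JSON array of per-comment codes
# (shortened example using two records from the response above).
raw = '''[
 {"id": "ytc_UgxGNh8xqpXca7Djjzl4AaABAg", "responsibility": "developer",
  "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
 {"id": "ytc_UgwIL3E4HCB5hCFnHIN4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Map comment ID -> dimension codes, defaulting to 'unclear'
    when the model omits a dimension (as in the table above)."""
    records = json.loads(raw_json)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw)
print(codes["ytc_UgwIL3E4HCB5hCFnHIN4AaABAg"]["policy"])  # regulate
```

A lookup like this is what lets the inspector resolve a clicked sample's `ytc_…` ID to its coded dimensions.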