Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ya the outside consciousness. The AI is limited to human imagination, the Digita…" — `ytc_UgwpOMRhL…`
- "@Paulitopt Yeah, I can't see me buying a Tesla even if I don't think that the pe…" — `ytr_UgwS-6CAh…`
- "What a disappointing question from Chris Anderson (unless it was asked without n…" — `ytc_UgxBX89o_…`
- "ok the second point caught me off guard and made me spit my water out of my mout…" — `ytc_UgzamFxpZ…`
- "EruioDing There's this astrophysicist, Lawrence Krauss, who'd explain this bette…" — `ytr_UgyFKmvlr…`
- "We not there yet. This is the Atari 2600 of AI companions. But considering there…" — `ytc_UgzyhB_kl…`
- "In a quiet suburban room, a teenage boy sat absorbed before his screen, immersed…" — `ytc_UgypsuVaR…`
- "It's impossible to Make artificial super intelligence safe. The best we can hope…" — `ytc_UgxOEa065…`
Comment
I think there is a deeper and more central problem here that I am not seeing get ANY coverage. The people in charge of training these things are inhuman myopic and oblivious tech bros. Even if you could ensure no accidental alignment errors, the alignment being crafted is one set by Thiel, Zuckerburg, Musk, and other tech oligarchs who barely view the working class as human, and certainly dont consider any of us worthy of consideration. Creating AI induced psychosis and driving the collapse of human civilizations ability to work together or agree on reality sort of plays directly into their hands. This tech is being developed by guys who hate democracy and want to be God Kings. The fact that they're the ones running the show for developing this technology should terrify everyone and Ive seen NOBODY bringing that up.
Platform: youtube · Topic: AI Governance · Posted: 2025-10-25T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzuloiXX9NyhPcCerp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxoKATJs_-p_pyisyd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxkMTZpL3o1OVxgbYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyN8lUbmNWdk2dffs14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwBhtnqoukhPTl8FSd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzzTOouq1je9BWmqSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx_0e9quQvUALEUqVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwf5wLWAQ5s-arN28B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgygEzIeTg02bEQwoYt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwRD3gum62tfJxg5lh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
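The raw response above is a plain JSON array with one object per coded comment, so a downstream script can parse it and sanity-check each row before storing it. A minimal sketch, assuming the allowed values per dimension are exactly those seen in this sample (the real codebook may define more categories, and `validate` is a hypothetical helper, not part of the tool):

```python
import json

# Excerpt of a raw LLM response in the format shown above.
raw = '''[
  {"id": "ytc_UgwBhtnqoukhPTl8FSd4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "regulate", "emotion": "outrage"}
]'''

# Allowed values inferred from the sample output above (assumption:
# the actual codebook may include additional categories).
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "mixed", "approval", "outrage", "resignation", "indifference"},
}

def validate(records):
    """Split records into (valid, errors) by checking every dimension
    against the codebook; errors carry the offending dimension names."""
    valid, errors = [], []
    for rec in records:
        bad = [d for d, allowed in DIMENSIONS.items() if rec.get(d) not in allowed]
        (errors if bad else valid).append((rec["id"], bad))
    return valid, errors

valid, errors = validate(json.loads(raw))
print(len(valid), len(errors))  # 1 0
```

Rows that fail validation can then be flagged for re-coding instead of silently entering the dataset.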