Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The cultural historian William Irwin Thompson warned of the reality of the phenomenon “enantiodromia” according to which one starts off wanting to do something good only to have it turn into its opposite. This is what is happening with AI modeling. The problem is that computer scientists for the most part work under the mandate of corporate profit rather than the mandate of “first do no harm” (principle of nonmaleficence). AI safety takes second place to increasing commercial capability of AI models. It is no wonder then that catastrophic harm is very likely as these companies move towards AGI or even superintelligence. And if and when that happens it won’t matter that someone laments the onset of catastrophe by saying “I never intended that to happen.” We have been forewarned, yet we ignore the warning to our peril.
youtube · AI Governance · 2026-01-07T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzesVC2fNbzorJKwEZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy3xQz8zKD8ApbXqrB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyx48xRodopeGKrvFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzqugwni3NHFQBVHFJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz_giddtszVAZIAyeV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwo8TctWPQa3lB_whx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy6vSx83zfa_kPliLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxqStuTdkR57OWbGiB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz0PS3HBIl8EDk2F354AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz7bkNKsymd12Igu4B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
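A raw response like the one above can be machine-checked before its codes are merged into the dataset. The sketch below is a minimal validator, not the tool's actual pipeline; the allowed-value sets are inferred from the responses shown here and may differ from the real codebook.

```python
import json

# Allowed values per coding dimension, inferred from the coded examples
# above (assumption -- the actual codebook may include other values).
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}


def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it has an "id" and every dimension carries
    one of the allowed values; anything else is silently dropped.
    """
    valid = []
    for rec in json.loads(raw):
        if "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid


# Hypothetical one-record response, in the same shape as the batch above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1
```

Dropping malformed records rather than raising keeps one bad line in a batch from discarding the other nine codings.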