Raw LLM Responses
Inspect the exact model output for any coded comment; each record can be looked up by its comment ID.
Random samples (truncated excerpts, with comment IDs):

- `ytc_UgwCTTVYk…`: AI will replace the assembly line coding/programming but the “outside the box” p…
- `rdc_gtcx662`: This summarizes Americans on /r/worldnews's so big obsession with "denouncing" w…
- `ytc_Ugz_mkrCZ…`: AI """""Artists"""""" when they sit on their asses in their mother's basement an…
- `ytc_UgyR_FXQ6…`: I understand that AI will be an integral part of the human society soon enough. …
- `ytc_UgzQSWwHF…`: ai is a bubble...can't wait for the pop...no killer apps and a lot of hype and h…
- `ytc_UgxjJW3et…`: “The AI was right.” I knew Grok-nii-san would never betray me. ((>Me when I ta…
- `ytc_UgwfybqnF…`: Total nonsense, Jeff Bezos built his company and have the right to run it the wa…
- `ytc_Ugzxb6c31…`: I think AI could be the next Antichrist. Once it becomes the biggest and stronge…
Comment
The core problem with AI research is that in essence, it is humans playing God. In most creation myths and religious theologies, man either fell, rebelled, or superseded the gods that created them. Humans think they can control artificial intelligence, when the gods we worshipped couldn't even control us. I choose the life of a human, rather than the fiction that AI will somehow heighten our evolution onto another level.
Source: youtube | Topic: AI Governance | Posted: 2024-02-21T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwvNjWthrowJeX8-Cx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxBNd4fSmW-A-Z5hbV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgySfXaTwpFTbemOi314AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx10sXfe01krRb2BMd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzN7lgLFLSDkTXkE5d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyrdoxbmJCk5_1qOEh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzoDmbOgTfbECcqM8t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxGiKury2jP7mh1C854AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz_6B60pVAEi2S2_Nx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwCyBRm-1i1lvy1-k14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
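The raw response is a JSON array of coding records, one per comment, keyed by `id`. The look-up-by-comment-ID step can be sketched as below; this is a minimal illustration, not the tool's actual implementation, and the function name `index_by_comment_id` is hypothetical (field names are taken from the response shown above):

```python
import json

# A raw LLM response: a JSON array of coding records, one per comment.
# This single record is copied from the response shown above.
raw_response = """
[
  {"id": "ytc_UgySfXaTwpFTbemOi314AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "ban", "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgySfXaTwpFTbemOi314AaABAg"]["policy"])  # prints "ban"
```

Indexing by `id` makes the "look up by comment ID" view a constant-time dictionary access rather than a scan over the array.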