Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- AI is NOT intelligent, or why is it that we invented new words like AGI? "AI" ar… (ytc_Ugzwpmobh…)
- Has to be AI. Stat is wrong. The bits per second for a human is 5 to 8.… (ytc_Ugy7jKuOH…)
- There _is_ some nuance. Training an AI is a lot like learning, you let it do som… (ytc_UgzNlOPSk…)
- Cry about it no one can stop Ai it is future may be people are afraid about thei… (ytc_UgyFnoWar…)
- You won't find Easter eggs or neat details in AI work, and that's what people sh… (ytc_UgyhyxqJg…)
- Look at our generation that grew up with the internet compared to the generation… (ytc_Ugwk8ACc_…)
- LLMs give these vapid statements and people seem to think they are insightful, i… (rdc_m00h7cu)
- rotfl I just stomped upon someone on instagram selling MidJourney Images and of … (ytc_UgxszavxL…)
Comment
It's inevitable that human civilization will eventually end even if you think it may take until the heat death of the universe though that's unrealistic...
I would rather end with AI going forward then say something like a nuclear war... Or asteroid strike.
Source: reddit · Topic: AI Governance · Posted: 1739122131.0 (Unix epoch) · Score: -6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o7ezc7s","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_oi23z9w","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"rdc_mbum3mz","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_mbv9j0p","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_mbwe7jl","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
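A raw batch like the one above can be checked before its rows are written back to the coding table. The sketch below is a minimal validator, assuming the four dimension names from the JSON and allowed value sets inferred only from the codes visible on this page (the real codebook may define more categories; `validate_batch` and `ALLOWED` are illustrative names, not part of the tool).

```python
import json

# Allowed codes per dimension, inferred from the values seen above.
# Assumption: the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "government"},
    "reasoning": {"consequentialist", "virtue"},
    "policy": {"none", "liability", "industry_self", "regulate"},
    "emotion": {"outrage", "indifference", "resignation", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Usage with one record from the batch above:
raw = '[{"id":"rdc_mbv9j0p","responsibility":"none",' \
      '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]'
records = validate_batch(raw)
print(records[0]["emotion"])  # resignation
```

Failing loudly on an unknown code is preferable to silently storing it, since a model can drift from the prompt's label set across batches.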