Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgzVYCdlw… : "I see the point but wouldn’t the same thing apply to story characters? Let me ex…"
- ytc_UgxNjImJ4… : "If it’s really that easy for AI to learn why can’t they just learn to make art t…"
- ytc_UgyiO8TWv… : "2010 - AI? What’s that? 2020 - Wow is this is going to take my job? 2030 - Remem…"
- ytr_UgyKH2lCp… : "And later what it does is tries to Adapt to you, if you keep pressing on AI that…"
- ytc_UgyTOLNed… : "They have some like this. You guys don't remember, Sophia? Sofia is very similar…"
- ytc_Ugzyxc0e-… : "following that, if we wouldn't develop AI like this we would basically be denyin…"
- ytc_UgzUroKRA… : "Imagine spending thousands of dollars a year to study software engineering or an…"
- rdc_ohyf5w1 : "CEO of AI-producing company frets that his product is too powerful, laments that…"
Comment
This will be regulated into obscurity the second it becomes feasible to have a physical AI companion robot. Governments don’t want their birthrates to tank to 0, which is already what many developed nations are trending towards.
Source: reddit · Topic: AI Governance · Timestamp: 1732737919.0 (Unix epoch) · ♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | utilitarian |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_lzare79","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"rdc_lzdvduo","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"rdc_lzb9etx","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_lzan9p0","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_lzbqbjo","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
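The raw response above is a JSON array of per-comment codings keyed by `"id"`. A minimal sketch of the "look up by comment ID" step, assuming that array shape (the function name `lookup_coding` is illustrative, not part of the tool):

```python
import json

# Abbreviated copy of the raw LLM response shown above: a JSON array
# of coding objects, one per comment, keyed by "id".
raw_response = """
[
  {"id": "rdc_lzan9p0", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_lzbqbjo", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the model output and return the coding dict for one comment ID,
    or None if the model did not emit a coding for that ID."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "rdc_lzan9p0")
print(coding["policy"], coding["emotion"])  # → regulate fear
```

Returning `None` for a missing ID (rather than raising) makes it easy to flag comments the model silently skipped in a batch.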