Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "@maramé.r God damn, you spoke serious wisdom, some music too easy for AI to copy…" (ytr_Ugw5K8D-_…)
- "This is probably the most compassionate view of intelligent AI i have ever seen,…" (ytc_UgjAEXe8e…)
- "I work in AI development. I totally agree that stealing is bad. And I’m very sor…" (ytc_UgzXSGLYX…)
- "You got plenty of Internet and AI but will starve to death with no food and die …" (ytc_UgwVGdxvB…)
- "This man does not know what consciuosness is. So he partecipates ai business. An…" (ytc_UgygA8za1…)
- "Ok but they have caught their very carefully trained models lying several times …" (ytr_Ugx_J7i_V…)
- "I've got nothing against AI art just existing, but calling himself an artist at …" (ytc_Ugxto49ub…)
- "How can I make some serious $ off of the AI craze before it kills us?…" (ytc_UgwxYDPKG…)
Comment

> I find it interesting how we seem to think AGI to be both superintelligent as well as dumb as rocks. Throughout the unverse, symbiosis comes at a lower cost than competition. What we should be afraid of is NOT AGI. What we should be afraid of is powerful AI in the hands of primate. I think being controled or governed by AGI is something we would hardly notice .. whereas control by Elon or Sam .. well .. no thanks.

Source: youtube · AI Governance · 2025-12-04T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy4vmTG-8iDRggmGUx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy12WJElNBpRtx6K4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxYUpsj5jdLHHd-Zct4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxNWIOitL5NssHegwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxYye2iDkiH6LRG0Wh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxaNXwJPuk4BpY71VF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwK8rWcVdrfE3q0f694AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxRS2GFkVX21qEHEC94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwDt7TRBj84NwsUMqV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwvznqkTnsExAcrqvp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
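The raw response is a JSON array with one record per comment, each carrying the four coding dimensions keyed by comment ID. A minimal sketch of looking up a comment's coding from such a response (the field names match the response above; the variable names and the two-record sample are illustrative, not the full batch):

```python
import json

# A small excerpt of the model's raw output: a JSON array of coding
# records with fields "id", "responsibility", "reasoning", "policy",
# and "emotion", as in the response shown above.
raw = '''[
  {"id": "ytc_UgxYUpsj5jdLHHd-Zct4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwvznqkTnsExAcrqvp4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]'''

# Index the records by comment ID so a single coding can be fetched
# directly, which is what the "Look up by comment ID" view does.
codes = {rec["id"]: rec for rec in json.loads(raw)}

coding = codes["ytc_UgxYUpsj5jdLHHd-Zct4AaABAg"]
print(coding["policy"])   # -> regulate
print(coding["emotion"])  # -> fear
```

Indexing by ID once, rather than scanning the array per lookup, keeps each lookup constant-time even for large coding batches.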