Raw LLM Responses
Inspect the exact model output for any coded comment: look a comment up by its ID, or open one of the random samples below.
| Comment (excerpt) | Comment ID |
|---|---|
| We need a new and improved financial system. Thinking lazily, the first thing th… | ytc_Ugxv1SA2L… |
| Dude i saw that original AI art on Facebook and shared it before i realized it w… | ytc_Ugyn8gTd5… |
| As a graphic designer, no AI art is not making a massive loss, that's a huge win… | ytc_UgypFMI1t… |
| .... faceMask and helmet?? why not just instaMakeup yourself to look like a fr… | ytc_UgyWt54KO… |
| 2:58 this is a dumb interpretation of art. Are you not moved by the beauty of na… | ytc_UgxxSPRIR… |
| Then it must be real hard for the Scots and Welsh to get work there as well.… | rdc_clut3rl |
| Every time I try to load Chatgpt 3.5 it takes me to #4 any ideas are appreciated… | ytc_UgwDkUmo9… |
| We are better at problem solving because we don't just think but we feel emotion… | ytc_UgwZNoPrY… |
Comment
The game / website "Universal Paperclips" is a great example of the dangers of AI. It's a clicker where you make paperclips. First, by hand, later, you can buy machine, and eventually, an AI to improve your paper clip making. As the game progresses, the AI will make gifts to CEOs until it reaches singularity. It then proceeds to turn the ENTIRE universe into paperclips. Because that's what the AI was programmed to maximize and do at all costs. If AI ever becomes a danger, I suspect it's something like that - a single minded AI pushing its goal at all costs.
youtube · AI Governance · 2025-07-06T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
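Each coding result is a record with the same four dimensions for every comment. The sketch below is illustrative Python, not the project's actual code: it checks one record against the category values that appear in the sample response further down, which may not be the complete codebook.

```python
# Illustrative sketch only. The value sets are those observed in the sample
# raw response below; the real codebook may define additional categories.
OBSERVED_VALUES = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "indifference"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems found in a single coded record."""
    problems = []
    for dimension, allowed in OBSERVED_VALUES.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} value: {value!r}")
    return problems
```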
Raw LLM Response
[
{"id":"ytc_UgxJR9_zycrZmoLaT_l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxtwA9VawSfgI-VRxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzW5u680mtmkkfTcEh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz8dW_VnoINeu3Hout4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxBq4j-NkJedSN7ppV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx8FqmgA2wKcpcFIN54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwY-3l1yVLr2Ys1BuV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxNPjOP3kn2-jCjAel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyZkN17I9V-0Fa8f5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxV63AtsU0An6tlWWt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
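Looking a comment up by ID amounts to parsing a raw batch response like the one above and indexing it by the `id` field. A minimal sketch, assuming each batch response is stored as a JSON array of per-comment objects; the file path in the usage example is hypothetical, and the comment ID is the one whose coding matches the result shown above.

```python
import json
from pathlib import Path

def load_codings(path: Path) -> dict[str, dict]:
    """Parse one raw LLM batch response (a JSON array) and index it by comment ID."""
    records = json.loads(path.read_text(encoding="utf-8"))
    return {record["id"]: record for record in records}

# Hypothetical usage: the file name is an assumption; the ID below is taken
# from the sample raw response above.
codings = load_codings(Path("raw_responses/batch_001.json"))
result = codings.get("ytc_UgxNPjOP3kn2-jCjAel4AaABAg")
if result:
    print(result["responsibility"], result["reasoning"],
          result["policy"], result["emotion"])
```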