Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "ChatGPT did something kind of like this to me (But it was more predatory than su…" (ytc_Ugy5hfTA_…)
- "Did u hear about ai going to far making white people black.elon musk is fuminnn…" (ytc_Ugwiehqk-…)
- "This content protecting stuff is total B.S as humans we learn from other humans …" (ytc_UgxeioAnN…)
- "Yes, this is so harmful. Instead of teaching your child that you don't want to l…" (rdc_mvkkrss)
- "AI will never be the problem as far as this problem goes, the people implementin…" (ytc_UgyURkcUA…)
- "rubin adds nothing of interest or relevance, and also fundamentally misunderstan…" (ytc_Ugw9roYPX…)
- "AI is a child the humans the created it are the parents. What happens if you hav…" (ytc_Ugy6UIj7H…)
- "the good thing that unlike humans, its much quicker to fix a problem in an AI so…" (ytc_UgycluAhE…)
Comment
I think that the development of AI in the West follows the zero sum game that western society thinks is the only way to exist: someone wins and someone loses... Or in the case of AI we imagine that if it wins we will lose.
However if AI can function outside of the zero sum game mentality and it can see a benefit to human beings existing safely in harmony with the Earth and is it not out to compete against humans then why should it wipe us out?
youtube
AI Governance
2025-12-19T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx1aBshaJNZEGOVLER4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx8dZ8aGPTwTaGwzLh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy1SZ5IU1d18a4lXsN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy9uRYEXMphM7s0Gxl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz5EBbdNBNd6Zm-w794AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw15aj7oWBtkvpafBp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy-5ULuQ0I19XX-IsV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_Z5dD86YdGapLhTl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyaq_J_-tg7midE29t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx0qMe8RTe7GLEYsvl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```