Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Is AI seriously not that obvious (in the last case) as it is to me?…" (`ytc_UgyidDNZ1…`)
- "TEEN TITANS MENTIONED ‼️‼️ FUCK YEAHHHHHHH. THIS IS AN ANTI AI AND PRO TEEN TITA…" (`ytc_UgxDl2Zji…`)
- "@Jurassicparkatmospheres It doesn't matter why someone makes art, or their level…" (`ytr_UgyY-iCAJ…`)
- "Clearly you don't fit that standard. "Artificial INTELIGENCE." Not to mention li…" (`ytr_Ugy15DkCn…`)
- "healthcare and energy is where AI should be and all its resources. but everythin…" (`ytc_Ugw_d6STb…`)
- "Someone in my life who doesn't oppose AI art recently said something along the l…" (`ytc_Ugzsu6BOc…`)
- "Here are 10 concise reasons why people should consider using AI, especially in a…" (`ytc_UgzP451RG…`)
- "So the dude openly states that he uses AI. Twitter has no option for "AI Genera…" (`ytc_UgyEIzjIR…`)
Comment
I share Geoffrey’s scepticism. This thing will snowball exponentially and we will lose control before we have realised it, let alone have a chance to stop it. Human nature dictates our behaviour, we are greedy and selfish and focused on ourselves - we are not able to think collectively as a species to safeguard our future. Humans can’t help but compete with each other and the AI arms race is being used to establish/maintain superiority over others. Military domination, industrial/corporate competition, social engineering, etc. We have not evolved sufficiently to get properly organised and control this thing. I feel like our leaders are totally asleep on this. We are not socially equipped for this challenge. We are literally creating our replacements. The AI tools and weapons we are building to compete with other humans will end up dominating and possibly destroying humanity.
WE NEED AI SAFEGUARDS NOW BEFORE ITS TOO LATE. EVERYONE SHOULD BE DEMANDING THIS FROM THEIR REPRESENTATIVES. THIS IS A CLEAR AND PRESENT DANGER.
youtube · AI Governance · 2025-06-17T19:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx51K3o6aDehDk48Nt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyArREimhsgKXrTfNh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxvV-nZKAfkX6Rw_Md4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwLZRtxPGySP2Tffq14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyuCfLMK-jL9kymgkp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyrLqPmq-SMBFnUcZd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxP9WGpcjxoIgyuzU14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzHyx7cSqCRVQ1Vvjh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwaC_QEfaqSssOvxyB4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbQhurPibk1c5qFEN4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"mixed"}
]
```
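The raw response is a flat JSON array in which each object carries a comment `id` plus the four coding dimensions shown in the table above. A minimal sketch of how such a batch can be parsed and indexed by comment ID for lookup (the field names are taken from the response above; the variable and function names are illustrative, not this tool's actual code):

```python
import json

# A two-row excerpt of a raw LLM response in the format shown above:
# one object per coded comment, keyed by comment ID.
raw_response = """
[
  {"id": "ytc_Ugx51K3o6aDehDk48Nt4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyArREimhsgKXrTfNh4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]
"""

# Index the batch by comment ID so any single comment's codes can be
# retrieved in O(1), which is what a look-up-by-ID view needs.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_UgyArREimhsgKXrTfNh4AaABAg"]
print(row["responsibility"], row["emotion"])  # → ai_itself fear
```

The same index also makes it easy to spot comments the model skipped: any requested ID missing from `codes_by_id` was not coded in that batch.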