Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_UgyjuYl3T…`: "All the leaders are old half in the grave they're evil leaving us with nothing b…"
- `ytc_UgyvR48sQ…`: "FunFact: The most cost effective deployment of an AI agent to replace human labo…"
- `rdc_m2btqkd`: "It’s hard to explain logic to someone who refuses to see it. But, you could try …"
- `ytc_UgyDbU0QR…`: "My hands shake like a chihuahua in the rain. I let drawing go for a while, but I…"
- `ytr_UgwNC1luf…`: "mrmojoman4 Now IMAGINE THIS EXACT, A.I. CONTROLLING Our or any Country's NUCL…"
- `ytr_UgyFe_-Vq…`: "At this point, I'm convinced only AI will be capable of either making that happe…"
- `rdc_kqvmuk9`: "Sentience and self-awareness are not necessary. An AI can beat me at chess witho…"
- `ytc_UgwHEbsWc…`: "Eh, I don't care. As an AI art user and model trainer all I need to do is scrape…"
Comment
Roman makes sense... Until he hesitates to say he would turn off all AI today if he could throw a hypothetical switch. First, I'm skeptical that doing so would result in the deaths of millions of people due to current AI dependence, but more importantly, if artificial super intelligence is likely to end humanity, then a relatively small number of casualties and inconveniences would be a small price to pay in order to prevent that future.
Given the irresponsibility of greedy corporations today, I would destroy ALL AI this moment if I could... Until/unless humanity develops strict international protocols for how AI can be responsibly developed, along with transition plans to prevent a global economic collapse that kills billions (while just a handful of corporate owners hoard the vast majority of benefits from AI as AI agents replace more and more jobs).
Source: youtube · Topic: AI Governance · Posted: 2025-09-16T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzM7kDM30Za5pburlN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwcj-7iU8D_VpjKgcp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwKo8lcKa_VzhF-2mV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxnr-Ufs4G7vwkNpQ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzvj1t7aXk8Zdj8ug54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"mixed"},
{"id":"ytc_UgxHfkEsS3AejWZcXeN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzhsA1HuiwwUJCL-2J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzWu8xuQ1seU0UnVYB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwXR6Mws4hgjv8r5XF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxEL33_BmmPP3G63P54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
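A raw response like the one above can be checked before it is accepted into the coded dataset. The sketch below (a minimal example, not the tool's actual code) parses the JSON array, indexes records by comment ID, and flags any dimension whose value falls outside the set of labels seen in this page; the `ALLOWED` sets are inferred from the table and response shown here and may be incomplete.

```python
import json

# Allowed labels per coding dimension, inferred from the examples above
# (assumption: the real codebook may define additional labels).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "none", "government",
                       "company", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"ban", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "mixed"},
}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of records) into a dict keyed by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

def invalid_dimensions(rec: dict) -> list:
    """Return the dimension names whose value is missing or outside the known label set."""
    return [dim for dim, labels in ALLOWED.items() if rec.get(dim) not in labels]

# Usage with the first record from the raw response above:
raw = ('[{"id":"ytc_UgzM7kDM30Za5pburlN4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
coded = index_by_id(raw)
rec = coded["ytc_UgzM7kDM30Za5pburlN4AaABAg"]
print(invalid_dimensions(rec))  # → [] (all four dimensions carry known labels)
```

Indexing by ID mirrors the "Look up by comment ID" workflow: a malformed array raises in `json.loads`, and any hallucinated label surfaces in `invalid_dimensions` rather than silently entering the dataset.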