Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Remember, AI slop is constantly getting better. It'll never look worse than it d…" (`ytr_Ugzq9-D4b…`)
- "Am I a bad person for not really having a problem with AI art? Like, if I discov…" (`ytc_Ugy32vVsI…`)
- "@lazorman96 First, I'm not advocating one way or the other. I'm simply stating w…" (`ytr_Ugx84GdHT…`)
- "Obviously Elon wants you to support ai even though it will end any opportunity f…" (`ytr_UgymNfbSz…`)
- "That's not what will happen. In fact the arguments in play at the moment will le…" (`ytr_UgxjBIPS-…`)
- "a leader in many field of technology but I wouldn’t say AI is one of them. Not g…" (`ytr_Ugzp0m-b2…`)
- "Anything can be used as weapon or tool. It's not a.i where the danger lies, but …" (`ytc_UgwmY1dUO…`)
- "Not everyone wants to dump 10,000 hours into anatomy studies just to draw their …" (`ytc_Ugwtdfe0P…`)
Comment

> It's an emotional script. But it has little to do with logic. AI built correctly is logical. The only building of AI that wouldn't be logical. Would be people who would be deliberately building it. As a biased or negative mechanism. I guess that means I disagree.

youtube · AI Governance · 2024-01-03T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzbn6a_dwlmjolXoL94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZbTEQqncUU7eMDiZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy3nFHU5vnBwMHx7Oh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwDmteD6MITnX0p-Vp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyq94my2nvvbl6S89V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgytGcExoFcvXVFwLVl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4txTp5ZnpGYfAost4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzZ7y2YwaLVeycwMBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy4gMFXd6ZPfLWnLjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzqV8LylRk_ZCLlB-R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```