Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by comment ID.
Random samples (truncated previews, with comment IDs):
- `ytr_UgzxUkH_1…`: "Agree that the "apple" rule is prompting, and the Ai is smart enough to suss thi…"
- `ytc_UgzZzsrIa…`: "AI bro's: Your art isn't safe from us hahhahahah / Her: hey here's a helpful tutor…"
- `ytc_Ugwza1mVB…`: "Humanity doesn't care enough about the end of the world. Otherwise we'd actually…"
- `ytc_UgzPmXqQt…`: "funny....with all this AI monitoring watching your every move somehow my package…"
- `rdc_o33jgg3`: "improve living standards for billions of useless peasants, and they don't care i…"
- `rdc_dcwqebd`: "Someone said yesterday that the reinstatement of this rule wouldn't bother abort…"
- `ytr_UgxqfcSja…`: "yeah but this robot didnt use a chess peice in a certain place to win!…"
- `ytc_Ugz36XM6n…`: "A man of the past. There many more things the brain can do that AI STILL can’t d…"
Comment
So as I learn more about life on earth and the true intelligence of microbiomes, plant life, water molecules, etc. Is it that Ai is truly more intelligent than us or is it just smarter? The "powers that be, " those who are truly running the world and deciding decisions for humanity, seem to be far less intelligent since they view life, on all scales, as negligible and secondary to profit and power, right? Where is the incentive for those creating Ai to have it have our best intrest with the humans creating it and those approving its deployment don't have humanity's best interest?
youtube · AI Governance · 2025-12-04T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwtuPSS68n9ejX0-E94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxAzkoG2Gg9OZ5HTFp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxuhyrR-hf1LTdS6PN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJDgWdlFTEtn1n6Yh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyOp2EoRNdiQpOCVw54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw96knYr3zb5LXFQ7h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwuiFxnixy2hgwgRFl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugyq31dfpC7u6Kc3WFR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxpWNqkA0LmCFB1sS14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyAqHfs1mOAb8wYV854AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
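The lookup-by-ID behavior described above can be sketched in Python. This is a minimal illustration, not the tool's actual implementation: `index_by_id` is a hypothetical helper, the field names come from the JSON shown above, and the inline sample reuses two of the displayed records.

```python
import json

# Two records copied from the raw LLM response above, for illustration.
raw_response = """
[
  {"id": "ytc_UgxpWNqkA0LmCFB1sS14AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyAqHfs1mOAb8wYV854AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and key each coded record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgxpWNqkA0LmCFB1sS14AaABAg"]["emotion"])  # outrage
```

In practice the raw response would first need any non-JSON wrapper text stripped before parsing; the sample here is already clean JSON.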