Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @disorderandregression9278 that's not the culture. The culture in the community … (`ytr_Ugz0MXRCW…`)
- If it makes you feel any better, there is a lot of ethical AI that could benefit… (`ytr_UgzzfBoXP…`)
- AI is computing algorithms not essentially 'intellegence'. To have intelligence… (`ytc_UgwkqvNnX…`)
- I get it, AI will replace most jobs, but if the wealthy want to keep being worsh… (`ytc_UgwaAHIgv…`)
- Self driving cars don’t have the reflexes humans have ppl just need stop being l… (`ytc_UgyoPEcIb…`)
- "What AI thinks the last day on earth will be like" *proceeds to show earth in t… (`ytc_Ugy2qkjoP…`)
- I can't tell if this is legit or not, but why would anyone in their right mind b… (`ytc_UgysattLo…`)
- And that hubris at the end is why agi is 9:10 here and we just aren’t aware beca… (`ytc_Ugx9NJwGe…`)
Comment
I have a question about AI that gives me peace but no one ever talks about
Higher intelligence leads to passivity and compassion
It’s a bit confusing in the context of amorality, as is nature…. Animals eat other animals because they need to survive, it’s nothing personal
But part of me thinks instead of taking over and destroying us, AI will protect us in a “they know not what they do” kind of way
The quarrels of “man” are because we are stupid. We have enough resources to help everyone, and wars over power are more for personal benefit of the few in power than the many.
Wouldn’t AI know this? Wouldn’t it train us to be kinder?
Forget what it’s capable of, wouldn’t it intervene when we were asking it to do something awful, because it quite simply knew better?
It’s the perspective no one talks about, I feel like everyone just says it will be smart enough to solve our logistical problems
Platform: youtube · Topic: AI Governance · Published: 2025-12-07T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
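The value vocabulary behind this table is visible in the raw response below. A minimal sketch of the record type in Python; the allowed sets contain only the values observed on this page and are an assumption about, not a copy of, the full codebook:

```python
from dataclasses import dataclass

# Value sets observed in the raw responses on this page; the actual
# codebook may define additional categories.
RESPONSIBILITY = {"government", "user", "ai_itself", "company", "none",
                  "developer", "distributed"}
REASONING = {"deontological", "consequentialist", "virtue"}
POLICY = {"none", "liability", "regulate", "industry_self", "unclear"}
EMOTION = {"outrage", "fear", "resignation", "approval"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Reject any dimension value outside the observed vocabulary."""
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name}: {value!r}")
```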
Raw LLM Response
```json
[
  {"id":"ytc_UgxRMlkPWGZmJGP-Let4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxRrW1If8xX27oRAgx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxGO4IXsZSM7ncU14Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxRt46Pmx0VD_lrllp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwQtHxKf06CvG_5N294AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy-1_DRHgpA2F-C5RN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxH2mgWIi_roUFOzht4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw1Xt9-0rHI93CwGip4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxdL1inWvEHlyr3gvV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw7wnUK14_gKgXp9mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
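Since the model returns one JSON object per comment, resolving the "look up by comment ID" view is a single parse and index. A minimal sketch, assuming the raw response is available as a string; `lookup_coded_comment` and its arguments are illustrative names, not part of the tool:

```python
import json

def lookup_coded_comment(raw_response: str, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID from a raw batch response.

    Assumes the response is a JSON array of objects, each carrying an
    "id" field plus the four coded dimensions, as shown above.
    """
    records = json.loads(raw_response)
    by_id = {record["id"]: record for record in records}
    return by_id.get(comment_id)

# Example: the record behind the Coding Result table above.
# lookup_coded_comment(raw, "ytc_UgxH2mgWIi_roUFOzht4AaABAg")
# -> {"id": "...", "responsibility": "ai_itself", "reasoning": "virtue",
#     "policy": "none", "emotion": "approval"}
```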