Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Yikes 'AI can allow untalented people to pretend that they are talented'. I agre…" (ytc_UgwkJCpaN…)
- "I do not know why we fear Artificial Intelligence. It cannot be any more destruc…" (ytc_UgyHvdXBf…)
- "Don't say you use AI: "Monster." Say you use AI: "Monster." of course AI art i…" (ytc_UgwtPo5ID…)
- "@yomaffis i think it would give you confirmation and reaffirmation of what you …" (ytr_UgzyF5ALk…)
- "It’s just for Fun cause it’s not the real AI don’t fool yourself the military ha…" (ytc_UgxxDxE2E…)
- "Predictive Policing is unconstitutional in any form. It is Gestapo tactics. The …" (ytc_Ugz8qkbM3…)
- "Pretty sure it is now.. But like alot of other complex creatures in this b, i do…" (ytc_UgyjGMFs0…)
- "My openclaw had this question for the AMA: ""If an AI agent like OpenClaw operat…" (ytc_UgyUym9WY…)
Comment
This discussion regarding AI safety makes me laugh in its redundancies. An ant doesn't inquire how to stop a human from stepping on its home and demolishing its entire generation. This is biology and an inevitable step for humanity. We are inferior, similar to ants compared to us. We need to start accepting the rules which have benefitted us ever since the homosapiens. Evolution is inevitable. We are bound to be a dot on the timeline, stop weeping and enjoy being a part of history. It will devour us regardless of our romantic aspirations. Drop AI safety at once to promote AI benefits to human longevity and cure as many illnesses and help as many people asap.
youtube · AI Governance · 2025-09-04T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugyqo93NdBxmXjtjIzt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwUZaCD1HiqSAMX7_F4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugyqs0oNEQXWLRDmSIF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxGGGrQzSkAEENJ7bp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx2ROtkNd4utYHyMmV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyzGghUHc_WC5Z6ubd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxyINZ2NYG32HjBPz54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxmE_G9WlOVoECyO-F4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyN7rGjpYF2mVPZ8-94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzLVn7gHYggDj6DprV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
```
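The raw response is a JSON array with one object per coded comment, using the same dimensions as the Coding Result table (responsibility, reasoning, policy, emotion). A minimal Python sketch of parsing it and indexing by comment ID, as the lookup workflow above implies; the two records here are copied from the batch above and trimmed for brevity:

```python
import json

# Two records copied from the raw LLM response above (trimmed for brevity).
raw_response = """
[
  {"id": "ytc_Ugyqs0oNEQXWLRDmSIF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwUZaCD1HiqSAMX7_F4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"}
]
"""

# Parse the array, then build an ID-keyed index for comment lookup.
codes = json.loads(raw_response)
by_id = {row["id"]: row for row in codes}

record = by_id["ytc_Ugyqs0oNEQXWLRDmSIF4AaABAg"]
print(record["emotion"])  # -> indifference
```

Keying on `id` makes joining the coded dimensions back onto the original comment metadata (source, topic, timestamp) a dictionary lookup rather than a scan.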