Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- `ytc_UgyD2jXqo…` — "I'm always curious whenever discussion (be it in science fiction or real world t…"
- `ytr_Ugy7_y72V…` — "@ShadowEval A huge issue with generative AI is that companies are using it more…"
- `ytc_UgzUZTu2n…` — "Another fun idea would be to generate some AI-art and upload it to be fed back i…"
- `ytc_UgzgF2FSW…` — "He had no respect for artists in the first place, especially when these AI art p…"
- `ytc_Ugwnp59uh…` — "I'm just an old guy! Not even a tech guy at all but I 100% believe in AI and t…"
- `ytc_UgzvycXfo…` — "I am working in data annotation (for the next few weeks still anyway) and we are…"
- `ytc_Ugxyrn5s0…` — "Chat:\"This character ai user has changed their password has blocked you has move…"
- `ytc_UgzE_RTcQ…` — "the neural network explanation actually made sense for once, wild how fast this …"
Comment

> but what if AI gets so wise and intellligent that it will free humanity from its slave masters.. people get their true freedom and thats what real intelligent wants i believe. Killing humanity is not intelligent its the stupidest thing to do, so why would thing that is so intelligent do something that stupid.

youtube · AI Governance · 2025-07-06T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxJR9_zycrZmoLaT_l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxtwA9VawSfgI-VRxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzW5u680mtmkkfTcEh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8dW_VnoINeu3Hout4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxBq4j-NkJedSN7ppV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx8FqmgA2wKcpcFIN54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwY-3l1yVLr2Ys1BuV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxNPjOP3kn2-jCjAel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyZkN17I9V-0Fa8f5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxV63AtsU0An6tlWWt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
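Raw responses like the one above can be checked before the coded values are stored. The sketch below is a minimal validator, assuming the dimension names and allowed values visible in the samples on this page; the full codebook may define more categories, and the `ALLOWED` sets here are inferred, not authoritative.

```python
import json

# Allowed values per coding dimension (inferred from the samples shown
# above; the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems found in one raw LLM response string."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: bad {dim}={value!r}")
    return problems  # empty list means the batch passed all checks
```

A batch that parses and uses only known values comes back with an empty problem list; malformed JSON or an out-of-vocabulary label is reported per record, so a single bad line does not silently corrupt the coded dataset.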