Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Hmmm an LLM isn’t artificial intelligence, it always requires a human to pull th…
ytc_Ugy15kxUw…
I have shared some information with Chatgpt and then erased its memory. Then ask…
ytc_UgwjDO1OB…
<sigh>. Let me get this straight: The fox is warning the chickens there's dang…
ytc_Ugxql0fd7…
I wholeheartedly agree with your sentiment about AI art, but I have to disagree …
ytc_UgzwZbD9u…
I’m pretty positive that phones that do facial recognition for unlocking have be…
ytc_Ugyi-pjWg…
Most AI art things only cost money if you do it like 5 times a day…
ytc_UgwZXwCYC…
10:36 Incorrect. The boy didn't tell the AI he was going to kill himself. He sai…
ytc_UgyQo-CE3…
I hope people can recognise that the more they engage in social media and relian…
ytc_UgyVPage6…
Comment
Maybe the AI itself is not dangerous, but the people who make it work and who use it are! Then robots work with AI, and AI probably is the robots, for example the ones built as a copy of people.
So...
robots nowadays look like people, and many people seem like robots, because they trust AI and don't think anymore.
youtube
AI Governance
2025-12-30T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgykSMXVgsNyxU5y2qJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzMrMa0y9txev3uT_d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyPuxMyyz1XW5sX6q94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0AtOa036HOsCBaGl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwhxlHR3g2aQnHo9iF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5UHgfSU4Z0qNjIWt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxMR5FS4Pe13oO7bqp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzUfridRRe5R0U52aF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbL0sxqU91ladfDz54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyHrjtZIpFEHIByZ7x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
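The raw response above is a JSON array of per-comment codings, one object per comment ID, each carrying the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for the "Look up by comment ID" view is below; the allowed category values are only those observed in this response, and the full codebook may contain more (an assumption).

```python
import json

# Category values observed in this page's raw response; the actual
# codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company",
                       "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear",
                "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if a row uses a value outside the observed codebook,
    which is how a malformed model output would be caught before display.
    """
    rows = json.loads(raw)
    index = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        # Store everything except the ID itself, keyed by ID.
        index[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return index
```

Looking up a coded comment then reduces to a dictionary access, e.g. `index_codings(raw)["ytc_UgyHrjtZIpFEHIByZ7x4AaABAg"]` would return the user/virtue/none/resignation row shown in the table above.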