Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I built an LLM embedded Magic Mirror with a conversational layer, hooked it with…" (ytc_UgzLE0oX9…)
- "Imagine someone bullied a robot and the robot knows how to know feeling and reve…" (ytc_Ugyzaf7A_…)
- "I can understand being polite to get a polite response, but that is hard when I …" (ytc_Ugw63wyBy…)
- "i read some personal injury demands that were written by an AI company that supp…" (ytc_UgwFAi2Bl…)
- "You also missed the fact that the AI art was already based on someone else's rea…" (ytc_UgxJDxn40…)
- ""if ai art has no soul, then what is this? 😏" *proceeds to post the most soulles…" (ytc_UgxoTsJZ5…)
- "AI only knows how to draw these things because it steals from real artists among…" (ytc_Ugwhei8K9…)
- "I don't need ai quite the way it is. I just want better tools to filter informat…" (ytc_UgxR-zo3B…)
Comment

> This guy just seems to push the burden of proof onto the developers to prove thats its not dangerous. But im not convinced and he hasnt given sufficient reason why i should think that IT IS dangerous? There is no reason in fact to think that AI will have a will or desire of any kind let alone one that is misaligned. There is at bottom NO ONE THERE it doesn't have personhood therefore it has no will. The only danger is this technology getting into the wrong set of human hands

youtube · AI Governance · 2025-09-05T14:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzSjRBb1Ya2UG1VSmZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyxUvLYiok8o7lEoxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugxl74JwGBX7E6S4dcN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyDZn29SjHU12Z6_FV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzRYCiH2RwMqBq1jxB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzRiJnd4ugcbfiUU1R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxkQKqlOuV3g5rBjut4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzOc2ati7UxjpL6ZiF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxk_AuU0Su3sia7p-94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdG4DR89y5Bg-Vm6R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
```
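A downstream consumer of a raw response like the one above would typically parse and validate it before storing the coded dimensions. The sketch below is a minimal Python illustration: the field names match the JSON shown here, but the allowed value sets are inferred only from the codes visible in this response, so the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this
# response; the actual codebook (an assumption here) may include more.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"mixed", "fear", "outrage", "approval", "indifference", "resignation"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept if its id has the ytc_ prefix and every dimension
    holds a value from the allowed set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch; rejected records could instead be logged and re-queued for recoding.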