Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
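The same lookup is easy to reproduce outside the dashboard. Below is a minimal sketch in Python, assuming the coded records have been exported as a JSON array; the file name coded_comments.json and the helper name are illustrative assumptions, not part of the tool.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record matching a comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of records, shaped like the raw response below
    return next((rec for rec in records if rec.get("id") == comment_id), None)

# Example: fetch the first record of the batch shown at the bottom of this page.
print(lookup_comment("ytc_UgxYuiy7i0R3X_BjpbF4AaABAg"))
```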
Random samples
- I don't expect you to be miserable, BUT I DON'T EXPECT YOU TO BE PARTYING AND CE… (ytr_UgwgDU1q6…)
- As someone that works in AI, she's not fully correct, she's oversimplifying a lo… (ytc_UgyLP5qrL…)
- To fix the flaw in AI you must simply ask one question: "Would the development o… (ytc_UgxZzZQP_…)
- The thing is that in today's world we are not valuable (our emotions, family, me… (ytc_UgyXzTyrr…)
- I'm always thrilled to see more artists speak out about AI. Toss my support in t… (ytc_Ugx_w0N-b…)
- Technical training should long have had more emphasis in schools and the curricu… (ytr_Ugy_5X-HW…)
- AI was never to work like companies were sold by the high priced consulting firm… (ytc_Ugx8KBwWT…)
- The songs aren’t AI generated. They’re written, produced, and performed by real … (rdc_jhcd3px)
Comment
NO!!! The blame for AI catastrophes WILL NOT be diverted to Super AI, the developers ARE and WILL be responsible!!! They have a perfect track record of greed over ethics and they’re still moving in the same direction! They are not taking responsibility or accountability, instead blaming the technology itself. It’s never “I am creating this danger and probable catastrophe” which they most certainly are!!!! Don’t try to condition the masses to divert the blame onto Super AI, the RESPONSIBILITY is in human hands.
Technological “advancements” that creates a world not worth bringing children into is destructive technology not innovation. Even if the safe guards are placed to control Super AI, it’s useless because humans can’t be controlled. The power hungry will take advantage of the technology and they ALREADY KNOW THIS!!!
Source: youtube · AI Moral Status · 2025-09-11T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
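The coding result is one record of the batch response below, shown per dimension; the Coded at timestamp does not appear in the model output, so it is presumably recorded by the pipeline at parse time. A minimal sketch of rendering such a record as the table above (the helper name is hypothetical):

```python
def render_result(rec: dict) -> str:
    """Render one coded record as the two-column markdown table shown above."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        rows.append(f"| {dim.capitalize()} | {rec[dim]} |")
    return "\n".join(rows)
```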
Raw LLM Response
[
{"id":"ytc_UgxYuiy7i0R3X_BjpbF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyF4f_G9Bt_FR6Z0NR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8HMmH9by6mznCzGd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgziL2rFYgXlItgR6DN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzAEogmuux6jObOru54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxZt3Y5m8h48iYlxGF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxSS5JtPLu9polT-Sd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyoD-FvqvRaOWJLV5J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiHsJuQNBUtbol9554AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx43_vB5ptC3vfKLSp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
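Each raw response is a single JSON array with one record per comment in the batch, so it is straightforward to check a run for out-of-codebook values before analysis. A minimal sketch; the allowed values below are inferred only from the responses shown on this page, not from an authoritative codebook:

```python
import json

# Values observed on this page; the real codebook may include more.
CODEBOOK = {
    "responsibility": {"developer", "company", "distributed", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "resignation"},
}

def check_batch(raw: str) -> list[dict]:
    """Parse a raw batch response and report any out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                print(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records
```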