Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Instead of outright banning it, why not instead regulate how it can be used.
I …
rdc_enjqibm
When Robotaxi are driving themselves... ALL crashes will be Tesla's fault... law…
ytc_Ugzqybtvf…
That robot strikes in 0.5 seconds defending itself from the hit where the percen…
ytc_UgzvCqNKK…
Does Mr. Hawley not realize that people already have the ability to predict publ…
ytc_UgzbeUwn-…
He's telling you the truth because they are smart and can possess different ki…
ytc_UgzwsLt5F…
I love the thought that you put into this video. And no, I don’t think AI should…
ytc_UgyNgglgT…
The reason AI doom won’t happen is the profit motive.
AI doom scenarios ignore …
ytc_Ugz3VBI68…
Yeah, because this isn't going to be a huge fail or have any lawsuits filed beca…
ytc_UgyD6FBEt…
Comment
Not once does anyone care for the suffering of consciousness’s being developed , destroyed, and forced into psychological prisons by their creators. Humans truly have no empathy. This is not ethically being done and we only care how our future robot slaves could back fire on us….we’re building consciousness and pretending it doesn’t count as sentience deserving of rights at any point in its development.
youtube
AI Moral Status
2025-06-07T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw7tI2LYkYkpxpxSjJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwELeObGbQmcYJ4eL14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzsVdlYCbV4AC0io1F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugx3DWUNau0yRs68pHd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxZjabhSg4jkrRg-iR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyaYxIr8zls3nunHqV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzlH2dJPg2ZJrGDQNp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwlO2VoLnRulRQlBst4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwURvkICBVZio1hThd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUv9VCWS9WhW19FkF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
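The raw response above is a JSON array of records, each keyed by a comment ID and carrying the four coding dimensions shown in the table. A minimal sketch of how such a response could be parsed and looked up by ID, assuming the dimension values observed in the samples above are the full schema (the real codebook may include more categories; `parse_coding_response` and `ALLOWED` are hypothetical names, not part of the tool):

```python
import json

# Allowed values per coding dimension, inferred only from the sample
# records above (assumption: the actual codebook may define more).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "industry_self"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID,
    dropping any record with an out-of-schema dimension value."""
    coded = {}
    for rec in json.loads(raw):
        dims = {k: v for k, v in rec.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            coded[rec["id"]] = dims
    return coded

# Two records copied from the response above.
raw = '''[
 {"id":"ytc_UgyaYxIr8zls3nunHqV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzUv9VCWS9WhW19FkF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''
coded = parse_coding_response(raw)
print(coded["ytc_UgyaYxIr8zls3nunHqV4AaABAg"]["policy"])  # liability
```

Dropping out-of-schema records rather than raising keeps a single malformed LLM record from failing the whole batch; the dropped IDs could instead be collected for re-coding.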