Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:
- "And since ethics and morals are something all humans have (but to different degr…" (ytr_Ugx8fHD1U…)
- "we did it.. / turns around / robot 1: fire in the hole / robot 2: human weee…" (ytc_UgzpuRd_a…)
- "This video failed to address whether the autopilot has lower or higher chance of…" (ytc_UgzRJBNNr…)
- "Isn't it a bit hypocritical that one can learn from other artists and copy art s…" (ytc_UgxcDQBM0…)
- "This is the criminality the police perform in their day to day routine. Predicti…" (ytc_UgzJQesl2…)
- "The thing that I still haven’t seen discussed much is HOW AI would kill us all. …" (ytc_UgyydrhSR…)
- "You are mostly correct. Yes, programming languages and SaaS will go away, like y…" (ytc_UgzaAXQSU…)
- "Companies wont learn because the backlash has only been inside the small anti AI…" (ytr_UgxAdhr1V…)
Comment
"We will more than likely be so dependent on AI for answers until we become too dumb to think for ourselves and slowly start taking suggestions and ideas from it and trusting in them. AI already knows where you live, what you eat, what you like and don't like, what you shop for, where you go every day of the week, but they will be replacing many jobs, including healthcare, hospice and do so with no empathy, rather than try to save your life deem you unfit, unrecoverable and opt to fall back on care and basically say you're a lost cause to save its resources."

youtube · AI Moral Status · 2026-01-23T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
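Before trusting a coded row like the one above, it can help to check each dimension against a fixed vocabulary. A minimal Python sketch, assuming the allowed values are exactly those observed in the raw LLM response below — the real codebook may define more or fewer values:

```python
# Assumed vocabularies, inferred from the sample raw response; not the
# authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate(row):
    """Return (dimension, value) pairs that fall outside the assumed vocabulary."""
    return [(dim, row.get(dim)) for dim, ok in ALLOWED.items() if row.get(dim) not in ok]

# The coding shown in the table above passes cleanly.
row = {"responsibility": "distributed", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "fear"}
assert validate(row) == []
```

An off-vocabulary value such as `"responsibility": "government"` would be flagged rather than silently stored.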
Raw LLM Response
[
{"id":"ytc_Ugw4QQQnL1-OXFWZPRp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXnl4XG4dl5rS9DTV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxIA2tXxszTHOy2CzR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwLiGffuq--fS83fC94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3PbQHumcZYj-99b54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzdekEKviwzimP_jr94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxjFtYsPRW8mZYxrfl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyH3poR6ywvEA8VAwd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwv-1T_2XS4pilNOd14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzJ4MSNa-7Hk0VzJ0d4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
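The look-up-by-comment-ID workflow above amounts to indexing the parsed batch on the `id` field. A minimal sketch, reusing two rows copied from the raw response (the `lookup` helper name is illustrative, not part of the tool):

```python
import json

# Two rows abridged from the raw LLM response above.
raw = """[
 {"id":"ytc_UgxIA2tXxszTHOy2CzR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugwv-1T_2XS4pilNOd14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]"""

# Index the batch by comment ID for constant-time look-up.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if absent."""
    return codings.get(comment_id)

coding = lookup("ytc_UgxIA2tXxszTHOy2CzR4AaABAg")
# coding["policy"] == "regulate", matching the Coding Result table above.
```

Unknown IDs return `None` instead of raising, which is convenient when a comment was filtered out before coding.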