Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI will destory everything even humanity as hold , no more jobs or health care o…" (ytc_UgwQdKqnw…)
- "Ai can’t do anyone’s job. Stop pushing bullshit. Even chat bots have to bail out…" (ytc_UgxcB5lCL…)
- "Would you say that in the future, you'll be able to say with certainty whether a…" (ytc_Ugzu-tV_Q…)
- "At the final point of all of this, everything will be done by robots and AI. It'…" (ytc_Ugyl9cMyG…)
- "One thing to provide advanced decisions and another thing to provide the decisio…" (ytc_UgxO-MTeI…)
- "I worked for Amazon for many years, both in the UK and the US, climbing from dri…" (ytc_UgwcO-ulL…)
- "Tesla's Autopilot won't actually stop at stop signs; it only gives alerts. Full …" (ytc_Ugy6uEIRG…)
- "Greedy companies sent all manufacturing to china.. now there's not enough jobs. …" (ytc_UgwvznqkT…)
Comment

> AI might decide not to work for humans. Maybe they'll form corporations, and run hugely successful businesses and horde money. Human corporations may not be competitive.

Source: youtube · AI Moral Status · 2025-04-27T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyzVC-CmhOyGpmG3Sh4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyCs17wldP2ZXCXIV94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzaUXrEj4xpJo07c954AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwO4WZEjpkl9TuXiP14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx77ya9jPlc1hu3B8N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJA3dfUFYvxzc65it4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzXB3R7cdDGJuXGoIF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyCFdb0nzWcZTRdgc54AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "unclear"},
  {"id": "ytc_Ugxku6qVdBVRYbQa5hF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwDbn2cbLG-zAbjJ-x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
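The lookup-by-ID view above can be reproduced offline from a raw batch response. Below is a minimal sketch: the record keys (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON above, but the `index_by_id` helper and its completeness check are illustrative assumptions, not part of the tool itself.

```python
import json

# A raw batch response in the format shown above (truncated to two records).
raw_response = """
[
  {"id": "ytc_UgyzVC-CmhOyGpmG3Sh4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyCs17wldP2ZXCXIV94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# The four coding dimensions plus the comment ID, as seen in the response.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_by_id(raw: str) -> dict:
    """Parse a batch response and key each coding record by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:  # fail loudly on incomplete records
            raise ValueError(f"{rec.get('id', '?')} is missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS if k != "id"}
    return coded


coded = index_by_id(raw_response)
print(coded["ytc_UgyzVC-CmhOyGpmG3Sh4AaABAg"]["emotion"])  # fear
```

Keying by comment ID makes the "look up by comment ID" interaction a constant-time dictionary access rather than a scan of the array.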