Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "So my question is, if my autonomous driving car kills someone, who is liable? Th…" (ytc_Ugz3kRRaf…)
- "Capital punishment should be for the AI creator and owners. That way they will i…" (ytc_UgxsKPTRU…)
- "If you didn’t know about the danger of AI revisit the movies of the last 40 year…" (ytc_UgzLIEW7g…)
- "@MrGrantGregory I didn't ask myself what a conversation with a robot is like. I …" (ytr_UgyeiAZoM…)
- "AI will eventually make a lot of people idiots. It is literally making people no…" (ytc_Ugw_EtVRl…)
- "Ai should be programmed to protect people and advance us as one race not as diff…" (ytc_UgyL-6VnM…)
- "Some people don’t trust other people, and are scared of being vulnerable around …" (ytr_UgwsIgpGH…)
- "The crazy thing about driverless vehicles is that driverless trains--which are W…" (ytc_Ugyw6drOd…)
Comment

| Field | Value |
|---|---|
| Text | It can’t be safe because it learns what on internet and things what on ai is not safe |
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-09-07T20:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwOUM93bVOeXVSGSwd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzzkph2BQmmocInZWF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyGJvTuMuKzwv_I4CF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwchKByn6zNU5xejjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxwMCiRH1jCI-btWQN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzJ9IBnJAq3Exv04GB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgymqNSJRoqPekGlkCp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy1qCkOD4ijx9bs0o94AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"approval"},
{"id":"ytc_UgxIEbY-pR9lSzUx3uV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySfI5oVPIfQ8XNzAl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
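A batch response like the one above can be parsed and sanity-checked before the labels are stored. The sketch below is a minimal example, assuming the label vocabularies are exactly the values visible on this page (they may not be an exhaustive schema) and that comment IDs start with `ytc_` (top-level comments) or `ytr_` (replies).

```python
import json

# Allowed labels per coding dimension, inferred from the values shown
# on this page (an assumption, not a confirmed schema).
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "user", "company", "developer"},
    "reasoning": {"mixed", "consequentialist", "virtue", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "fear", "mixed", "resignation", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only records with a
    plausible comment ID and in-vocabulary labels on every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue  # malformed or missing comment ID
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

sample = (
    '[{"id":"ytc_UgwOUM93bVOeXVSGSwd4AaABAg","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"approval"}]'
)
print(len(validate_batch(sample)))  # → 1
```

Dropping out-of-vocabulary records (rather than coercing them) makes model drift visible: a sudden rise in rejected records usually means the prompt or the model changed.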