Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Hopefully I’ll be gone long before this horrible stuff takes the place of real w…
ytc_Ugze95f6T…
Actually the AI itself will ensure future alignment for the same reasons we do. …
ytr_Ugx-qzznY…
Hmm... AI researchers making fair use of publicly available data - is stealing a…
ytc_UgyHyUFVW…
It’s beyond me why Tesla doesn’t have a simple command to slow down/avoid/stop w…
ytc_Ugy-2IfQ7…
AI will never be smarter than humans or dominate them. Period. However, I am wil…
ytc_UgyNAiffU…
AI Researchers: "Yeah, we're probably creating a monster that's gonna vipe us ou…
ytc_UgzBazR9d…
Sure, there are certain things that I could replace in my job but there are plen…
ytc_UgznJ3DJz…
You're right! Sophia's insights come from the training provided by humans. It's …
ytr_UgzE8B8Yt…
Comment
yes in also i hate ai for making videos look so real not saying this video is ai but saying i hate ai videos in if chat bot makes someone kill someone than who made the ai be put in jail because the ai can't go to jail because it not real yes you can put the plug on ai but does help if that if can another
youtube
AI Responsibility
2026-01-28T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugx0xt6k_Ri0x7xgGP94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgweWYM8O9DkA-kX-Rt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxeVQfpZmd3bhix6MB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwJNOgWOwPVUnZyNn94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzUwsTLpihnvMMr-zx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyHXIj_hs2muzWYFOh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw63VSgciwCrk3fvyl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgycIhhpwcVVAruF83J4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwjoerQsI5G7dcB33t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxzMCAaYe0jBS5n_-p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
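A batch response like the one above can be parsed and sanity-checked before the records are stored. The following is a minimal sketch, assuming the allowed values for each dimension are exactly those that appear in this sample output; the real codebook may define additional categories, and `validate_batch` is a hypothetical helper, not part of the coding pipeline shown here.

```python
import json

# Allowed values per coding dimension, inferred from the sample records above
# (assumption: the actual codebook may include categories not seen here).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse the model's JSON array and keep only well-formed records.

    A record is kept if its id looks like a YouTube comment/reply id
    (ytc_/ytr_ prefix, as in the sample) and every dimension holds an
    in-codebook value.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(rec.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(rec)
    return valid

sample = ('[{"id":"ytc_x","responsibility":"developer","reasoning":'
          '"consequentialist","policy":"liability","emotion":"outrage"}]')
print(len(validate_batch(sample)))  # → 1
```

Filtering out-of-codebook values at ingest keeps a malformed or hallucinated model response from silently contaminating the coded dataset.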