Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Excuse me imma need my robot uglier . Im jelious when my husband around s hot ro… (ytc_UgxG_GuZ_…)
- The job I would like to do the most rn is voice actor, AI is probably gonna be s… (ytc_UgwDNVwzn…)
- As it currently stands, Shad's AI Love Letter video has 2 dislikes for every 1 l… (ytc_Ugz4StxIJ…)
- AI will learn to do plumbing. I have a very real warning like this guy. I hel… (ytc_UgyiwpxK5…)
- The things that normally motivate violence among humans are things that produce … (ytc_UgwdDmaq8…)
- "It can execute a plan for a takeover for hundreds or even thousands of years wi… (ytr_UgyUKU3Q7…)
- More evidence Ai in nearby future Will be miss used on humanity, many warned abo… (ytc_UgxljeXOt…)
- I am asking you, is this information true or not. you know the answer, because y… (ytc_UgzM-jarb…)
Comment

> We are literally 100% beyond the point of stopping AI from being autonomous military death weapons. Neil said it himself, if your system requires human approval, you're at a time disadvantage. This does not sound good. We should be more concerned. Dr. Hinton was informative and hilarious and I appreciate his honesty.

youtube · AI Moral Status · 2026-03-01T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugym54oiLUt1TYNQSq14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxHdKsm0gkRKlKROlN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy0zdC9ezKAIy8U1b54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzHR5jrfIZrHMd-Ng14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwwPe_H4u0iP_6B5AZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzEP8QtAxXI2iKBInF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwvP5rLLHKrz3O3fGN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyKhCYHV9sVevWPXLZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwpRuexR2h9H6l9nt54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzyGMgTSlPoKXYRanZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
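A response like the one above is a JSON array coding each comment on four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and looked up by comment ID, assuming only the record shape shown here (`index_codings` and the inline sample are illustrative, not part of the original pipeline):

```python
import json

# Hypothetical sample mirroring the record shape shown above (two entries only).
raw_response = """
[
  {"id": "ytc_UgzEP8QtAxXI2iKBInF4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwpRuexR2h9H6l9nt54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw model response and map comment ID -> coded dimensions,
    silently skipping any record missing the ID or a dimension."""
    by_id = {}
    for record in json.loads(raw):
        if "id" in record and all(dim in record for dim in DIMENSIONS):
            by_id[record["id"]] = {dim: record[dim] for dim in DIMENSIONS}
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgzEP8QtAxXI2iKBInF4AaABAg"]["policy"])  # regulate
```

This is one reason to ask the model for strict JSON with a stable `id` key: each coded row can be joined back to its source comment without relying on ordering.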