Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> I'm stupid and really want to comment a lot about this problem but I cancel my intention because I'm lazy, the point is that in my personal opinion AI is not dangerous, the dangerous ones are humans, AI for me is just a tool, and the one who controls the tool is the one who is worrying, and for AI that can destroy humans is also not a worrying thing, because we humans have succeeded in making tools that can destroy ourselves, for example the atomic bomb, so now what we need to worry about is when will someone appear who is stupid enough to use it

youtube · AI Governance · 2025-10-14T18:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwAVqXMTV0ZnrSLws54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw0-qAZ5PLKcI3dmq14AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_Ugxia6tJM2rmd6whSx54AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwHtDi4lBo0gp-g3MF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw5F9qoMwKoTtdQYgh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwA-bz1NiiS30XS-il4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxk62yWkQ7pM1GhYd14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-kRgCaoekLf7lcRZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx4LIqXXbSYv-HmEtR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwbtz84vKD4bHPhCq14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
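A response in this shape can be checked mechanically before its records are accepted into the dataset. The following is a minimal sketch in Python; the `SCHEMA` mapping is inferred only from the category values visible in the sample output above (the actual codebook may define more categories), and `validate_batch` is an illustrative name, not part of the original pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The real codebook may permit additional categories.
SCHEMA = {
    "responsibility": {"user", "company", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw model response and keep only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    schema dimension holds one of the allowed category values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example: one record matching the coding result shown above.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(len(validate_batch(raw)))  # 1
```

Filtering rather than raising keeps one malformed record from discarding an otherwise usable batch; rejected IDs can then be re-queued for recoding.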