Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Anyone else feel concerned when this guy says ai can assist with 'war' why not have ai help with educating us about peace! This was shocking. Shows the maturity of the world, or at least certain leaders. And war being business, huge business. Crazy.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-11-02T10:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzeKFzktXRQUrXJ19Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzCkNEMh1YtNxkA0V14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxZVQUwMIIgIHYGSCF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw0dDjrKhv-ehVHh8h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-437MyP8xnxSZmN14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyNBub1m8HksaArBO94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwk3C-niINy0DZ8twd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz7iJOHwpKhBxsgiMR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz78MgPd0EYE-RlZlN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxslr7nIjiV-hd7l4Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
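The raw model response is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of how such a batch response can be parsed and looked up by comment ID (the parsing approach here is an assumption for illustration, not the pipeline's actual code; the array is truncated to two entries):

```python
import json

# Raw LLM output: a JSON array, one object per coded comment.
# Truncated to two entries from the response above for illustration.
raw_response = """
[
  {"id": "ytc_UgzeKFzktXRQUrXJ19Z4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz7iJOHwpKhBxsgiMR4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the coded rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding result for a single comment by its ID.
result = codes_by_id["ytc_Ugz7iJOHwpKhBxsgiMR4AaABAg"]
print(result["policy"], result["emotion"])  # regulate outrage
```

Indexing by ID this way also makes it easy to detect comments the model skipped: any requested ID missing from `codes_by_id` went uncoded in that batch.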