Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Why does Elon Musk want peace in Gaza if he has no moral compass? 37:10…" (ytc_Ugyr8dWhh…)
- "Morning Star is literally a socialist propaganda outlet and the fact that you're…" (rdc_f9csk54)
- "Not me just on character ai the. Switching to this tab and this video 😅 Totally …" (ytc_UgzQOxpLL…)
- "I said it before and I say it again, they are vessels. But then, I see the inver…" (ytc_Ugy_l4e8r…)
- "..but it worked? Most developing countries are really uncomfortable with having …" (rdc_irbg5b5)
- "I am offended by these AI vids and I am white.I have no worde to adequately desc…" (ytc_UgwdJQpMc…)
- "I like to drive I like the feeling of driving my own car. I don’t want some damn…" (ytc_Ugyykvjfz…)
- "I think you are really spreading fear instead of awareness. I have been a softwa…" (ytc_UgwfIaMuN…)
Comment
Very interesting, I listened carefully, and I am convinced that the professor delivered an accurate description of the future of super intelligence. But you only touch upon the seemingly infinite capabilities of AI. I would like to ask, referring to the example of atomic destruction: why would a super intelligence want to destroy the world including itself? BTW if this agent is omnipotent and omniscient, can it thus defy the laws of nature?
youtube · AI Governance · 2025-10-07T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgydKu9yuwr6L3oGhAR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzTVeNXca2gygEX9o94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0625gpRxwygf8_mF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx3l_uHaYwIO1cmcCN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzda9HyzQ3bPuxVCGl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyBzeN9JNkzvQfgHjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzMAFRQeipukKwnfzB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx-kUrs0CLM_-N4zMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxPDEY7-nXy9OLqoFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3wZpKggohAm8h-494AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
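The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a payload might be parsed and checked before use, assuming the dimension vocabularies are exactly those visible in this sample (e.g. `responsibility` ∈ {company, user, ai_itself, none}); the allowed values are inferred from the output shown here, not from a documented codebook, and the real scheme may include more categories:

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above (assumption: the full codebook may be larger).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip entries without a comment ID
        # keep the row only if every dimension holds a known value
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
print(validate_codings(raw))
```

Rows that fail validation are dropped rather than repaired here; in practice one might instead re-prompt the model for just the malformed IDs.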