Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "A writer using gen ai in any aspect of their books development causes me to ques…" — ytr_UgzTIBXpm…
- "Remember what mango Mussolini said in 2024, \"Vote for me this once / And you'll…\"" — ytc_UgwWzyuFo…
- "Governments of the world making weapons powered by AI and using them to cull the…" — ytc_Ugxsb1J-P…
- "chatgpt cant browse the internet, it only has memorized information. also, yes D…" — ytr_UgwnGOGHF…
- "if people can call themselves ai artists, i can call myself a dishwasher using a…" — ytc_UgxGbvntz…
- "Ai can just make art without any effort or time in it and humans put there heart…" — ytc_Ugy1hc6Sg…
- "@MelodicTurtleMetal you’re not even responding to the point she’s making. Here…" — ytr_UgxGpexOD…
- "The person here in the interview has one aim.. to raise awareness about corporat…" — ytr_UgwTTliKN…
Comment

> Another channel asked AI to tell a hypothetical story of how it takes over the world and it explained the same process of takeover. Then he asked "How do you stop that AI trying to take over the world?" AI answered with "Well, the only way is to design another AI to stop it." In other words, we would have to fight fire with fire, which are both *destructive* elements...

youtube · AI Governance · 2023-07-07T14:1… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxgOO0o8rYcCbHE_1l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy3FlxL_yyTpKA266J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzPoAx2MV81q9FH48J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyevAa5KtDnj8OU4b94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzyQiQqu6vPKYgtwAd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwUKbDSKtuqlgq2yrR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxTAGqXWdHBOrV1WlV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzVaAT2SgG4rJyoMxZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_BDnYNmQLiduU12l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
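A batch response like the one above is machine-checkable before its codes are stored. The sketch below parses such a JSON array and rejects records whose values fall outside the category vocabularies. Note the `CODEBOOK` sets are an assumption inferred only from the values visible in this batch; the real codebook may include additional categories.

```python
import json

# Assumed vocabularies, inferred from the batch shown above; the
# actual codebook used by the pipeline may be larger.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "distributed", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "indifference", "approval", "outrage", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and fail loudly on malformed records."""
    records = json.loads(raw)
    for rec in records:
        comment_id = rec.get("id", "")
        # IDs in this dump start with ytc_ (comments) or ytr_ (replies).
        if not comment_id.startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {comment_id!r}")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: {dim}={rec.get(dim)!r} not in codebook")
    return records

# Validate a one-record sample in the same shape as the response above.
sample = ('[{"id":"ytc_UgzPoAx2MV81q9FH48J4AaABAg","responsibility":"distributed",'
          '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
records = validate_batch(sample)
```

Failing fast here, rather than at table-rendering time, keeps out-of-vocabulary codes (a common LLM failure mode) from silently entering the coded dataset.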