Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Do you realize how much critical thinking ability you even needed to make suck a…" (rdc_n0loxgf)
- "I'm not against AI, esp after seeing how useful it is for science things, but I'…" (ytc_UgzCorWDq…)
- "It won't be a petty civil matter if op brings in a lawyer but whatevs.…" (rdc_dya68xt)
- "and to think, i thought that black mirror episode with the robot dog was totally…" (ytc_UgxzCAtI4…)
- "Elon Musk: I dont like AI, i think it's far more dangerous than nukes. Also El…" (ytc_UgxoG5t_r…)
- "It does need to be regulated. This video just shows how much the layman has no i…" (ytr_UgyYaYQA8…)
- "AI doesn't learn the same way we do and it doesn't generate art or text or music…" (ytc_UgwbopgPM…)
- "There are so many morons here that think alignment means “robot follow order of …" (rdc_m9iq88i)
Comment

> This world has been dualistic in it’s nature, who knows there could be world of good and bad AI systems created within themselves and a new game will begin between destructive AI Vs protective AI!

youtube · AI Governance · 2026-04-18T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx_hInz3Y2csNl-4JV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1xVeGY7FVfm052jZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxtV_bnlfHe94o3yht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9SZLECWvim_Fzq0x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxmdZS4B0ZIpyVNkc94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwSpyikwOaq5plRz054AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw4XSmhByLZXSY87Ul4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXIc1uOJj1CTTowfh4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyDVktZQMiPzGVfEl94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMCab6sKgYNHpVwAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
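The raw response is a JSON array with one object per coded comment, keyed by comment ID, with the four dimensions shown in the Coding Result table. A minimal sketch of how such a batch could be parsed and indexed by ID is below; the allowed label sets are inferred from the values visible in this response and are an assumption, not the tool's actual schema:

```python
import json

# Allowed labels per dimension (assumption: inferred from the sample
# response above; the real coding scheme may include more values).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index the coded dimensions by comment ID."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        # Reject any value outside the expected label set for its dimension.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim!r} value {row.get(dim)!r}")
        coded[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with one row from the response above:
raw = '[{"id":"ytc_UgwSpyikwOaq5plRz054AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]'
codes = parse_batch(raw)
print(codes["ytc_UgwSpyikwOaq5plRz054AaABAg"]["responsibility"])  # → ai_itself
```

Indexing by ID makes the "look up a comment" flow a plain dictionary access, and the validation step surfaces any off-schema label the model emits instead of silently storing it.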