Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Agreed; you can't get hired if no one gives you the entry level chance. You need… (ytr_Ugyuojpum…)
- I dont think reference to how LLMs work by itself is an argument against against… (ytc_UgxdXf3K_…)
- I'm not sure how useful they'd be. Stable Diffusion takes about 15 seconds to ge… (ytr_Ugw4LfIR3…)
- Isn't the majority of the "factual arguments" that people showed on that video a… (ytr_UgxQIGLkT…)
- Its a revolution so big, so BIG, we cant grasp it on our imagination. AI will be… (ytc_Ugz6rD_Sd…)
- Just wanted to chime in on this part of the conversation about digital worlds ge… (ytc_UgzEzCe2X…)
- Another thing I could only imagine AI is sabotaging itself in order for humans t… (ytc_UgxGlrw6h…)
- I use AI artistically and what I'm hearing here leads me to believe that any wor… (ytc_UgytlyqpA…)
Comment (youtube · AI Moral Status · 2025-12-14T18:0…)

> If we get to a point where the AI is actually harming humanity there is a 90% chance that there will be air burst nuclear weapons involved to create electromagnetic pulses destroy circuitry but it will also affect normal people because our world runs on technology people should learn how to do stuff for themselves without computers
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwsPy4wQ9FtglaVP3p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxqsLJLipvjSr4FaY54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwzPVRASMcYcCWtlPN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDunCX6lxr6shddAd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxKIqDEOAKIzZftWJZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxbmHGeo-oioj77vvV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwV6_2Vj1Hkccn5P714AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxHuw3tstFFF_o42Ad4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzAPwW47HTJVFiq_jV4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHdv_GGcPpidL44Vd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
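The raw response is a JSON array with one object per comment, keyed by `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response might be parsed and sanity-checked downstream; the allowed value sets below are inferred only from the values visible in this sample, not from the full codebook, and `parse_codes` is a hypothetical helper name:

```python
import json

# Dimension value sets observed in this sample; the real codebook may include more.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "resignation", "indifference", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values look valid."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in this dataset start with "ytc_" or "ytr_".
        if not row.get("id", "").startswith("yt"):
            continue
        # Drop rows where any dimension holds a value outside the known set.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid
```

Rows that fail validation are dropped rather than repaired, so a malformed model output surfaces as a shorter result list instead of silently corrupted codes.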