Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
okay you can't be talking about LLMs like they actually "think". they're predictive word generators. there's no thinking happening there. it's all just repeating patterns. if AI's doing sneaky shit, it's only because it's mimicking the sneaky shit of its own designers. also AI can't help you make a bioweapon. it's trained on human-generated information, and even humans have not figured out how to create bioweapons at scale. not a single country, no matter how many decades and no matter how high a budget were invested into the project, has succeeded in having a functional bioweapons program. anyone even remotely well-read on the topic can tell you as much. there is no literature on how to create bioweapons, and so AI can't train on nonexistent literature. it can't tell you information that doesn't exist and have it be true. it can tell you hallucinations, sure, and likely will do so because it's trained never to say "i have no idea", and it has no concept of truth. it'll just spit out words in an order that sounds vaguely plausible. and frankly, if you know anything about microbiology at all, you'd understand that even if you were handed the world's most detailed instruction guide on creating biological weapons, crafted by a real human being who did the thing, you still couldn't replicate it without massive amounts of personal expertise paired with a high budget and years of practice. there's a very good reason why bioweapons have never been achieved by any government or organization, and it's because them bitches FINICKY.
YouTube · AI Governance · 2026-03-17T14:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
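For orientation, here is a minimal sketch of how a single coded comment could be represented, assuming only the four dimensions shown in the table above plus the coding timestamp. The class name, field names, and example values are illustrative (taken from this page), not the pipeline's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical container for one coded comment; dimensions mirror the
# Coding Result table above (responsibility, reasoning, policy, emotion).
@dataclass
class CodingResult:
    comment_id: str      # the "ytc_..." id used in the raw LLM response
    responsibility: str  # e.g. "developer", "ai_itself", "government", "user", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "none", "unclear"
    emotion: str         # e.g. "indifference", "fear", "outrage", "disapproval"
    coded_at: datetime   # when the code was assigned

# The coding shown on this page, expressed in that structure.
example = CodingResult(
    comment_id="ytc_UgzHgx5LLEPGx3ulz-x4AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="unclear",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
```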
Raw LLM Response
[{"id":"ytc_UgxiyqmDpw0692lWdMV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxFmram0tdrPZZkhXR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzHgx5LLEPGx3ulz-x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx3LYWFlFWH96PvjyZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"disapproval"}, {"id":"ytc_UgwyoAiQ9L3VIl2NzxJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzfXqIy7IFntO98QPx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyPVAagMVHIt04LYvt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugxdb7d6Klf23WAABVZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx4TW_rtnN_Hd6z5yR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwpSsiwJJ5MLGWYULh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"disapproval"}]