Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a religious argument: "We don't understand this thing, but we can tell stories about it that explains it." This is the doomsday story, which exists in every religion. This is the one that has been consistently wrong every single time. This argument totally ignores the practical realities of the world, which I appreciate Ezra trying to expose this argument to. It presumes that the internet is the world, and that tools aren't fragile. Anyone who has worked on an engine or other complex mechanical thing will tell you how fragile tools are. Heck, even most computer engineers will probably tell you that software and hardware are relatively fragile and break all the time. This argument doesn't get around that fact that the real world is hard on its contents, which is how humans and other living things have become anti-fragile things in order to survive. We will not be wiped out by AI because tools are fragile: we're not.
youtube AI Governance 2025-10-15T18:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzWiIXAjDg0tjbF4tt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwHViP3OLpuQuFpVLd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxfzezCQNqU3fyS2E14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx7UkPa6VVfn63d0Hh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwrB660yzL3h86JoLt4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyaxYhHVSAwI_ufYIV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyBts37g2KceghGV5h4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwzzcNxydKDvw_X1ON4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyyn2jtDUoFS0ZsUyF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyyqDp3_-Mllx8n--t4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
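The raw response is a JSON array with one coding object per comment, each carrying the four dimensions shown in the table. A minimal Python sketch of extracting the codes for a single comment id (the `codes_by_id` helper is illustrative, not part of the pipeline; it assumes the model returned valid JSON):

```python
import json

# Abbreviated raw model output: one coding object per comment id.
raw = '''[
  {"id": "ytc_Ugx7UkPa6VVfn63d0Hh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_response: str) -> dict:
    """Parse the raw LLM response and index the coding rows by comment id,
    defaulting any missing dimension to "unclear"."""
    rows = json.loads(raw_response)
    return {row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
            for row in rows}

codes = codes_by_id(raw)
print(codes["ytc_Ugx7UkPa6VVfn63d0Hh4AaABAg"]["emotion"])  # approval
```

Looking up the id of the comment shown above reproduces the values in the Coding Result table.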