Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't fear AI. I fear people. The people who architect the AI, and the people who collectively make up the culture that consumes the AI. AI is, at its heart, just an algorithm that is trained to automate human processing based on training data from which it derives a heuristic to transform said data from one form to another (language-to-language, language-to-images, language-to-video, etc.). That is all it is. It is, by itself, nothing to fear. What we *should* be afraid of, has always been the same: People. The people who are building these systems in such a way that their particular heuristic is automated, with its bias and slant; and that is only a fear we should fear based on the population at large which delegates its agency to such systems. In my opinion, these conversations are slanted towards the wrong angle: Building these systems in a "safe" way? By whose standard? The government's? As if they, or the systems by which they arise to power, are somehow immune to the corrupting influences of human nature? I mean, are you *serious*? AI is just automated human nature. It is human nature we should fear.
Source: youtube · AI Governance · 2025-12-08T04:0… · ♥ 2
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwxcK7ICuLPqGOPRUp4AaABAg", "responsibility": "company",   "reasoning": "deontological",   "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugx9V2yKHkQGGVzuLYR4AaABAg", "responsibility": "user",      "reasoning": "virtue",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgwbOWSxNAxVe9qSFGh4AaABAg", "responsibility": "user",      "reasoning": "virtue",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzJR41fL25Fj9UoXAx4AaABAg", "responsibility": "company",   "reasoning": "deontological",   "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugw1uDYbSZOYsGTEJnJ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist","policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgyTfdKoVkKIbjM-bQd4AaABAg", "responsibility": "company",   "reasoning": "deontological",   "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw83qLGqfXQND9z6DN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist","policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyAe5VirSFmYqhmnz54AaABAg", "responsibility": "user",      "reasoning": "virtue",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgzhElowQoh3DW4wlJR4AaABAg", "responsibility": "none",      "reasoning": "mixed",           "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugw2zUzx3yQPK-eK4Kx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist","policy": "ban",      "emotion": "fear"}
]
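A raw response like the one above can be turned into per-comment coding results by parsing the JSON array and validating each record. The sketch below is a minimal, hypothetical illustration, not the pipeline's actual code: the allowed label sets are inferred only from the values visible in this sample (the real codebook may define more labels), and the helper name `parse_codes` is invented for this example.

```python
import json

# A small excerpt of the raw LLM response shown above (same field names).
raw = '''[
  {"id": "ytc_Ugw83qLGqfXQND9z6DN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw2zUzx3yQPK-eK4Kx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

# Assumption: label sets inferred from the sample output only; the real
# coding scheme may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_codes(text):
    """Parse the model output and reject records with unknown labels.

    Returns a dict mapping comment id -> coding record.
    """
    records = json.loads(text)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return {rec["id"]: rec for rec in records}

codes = parse_codes(raw)
# Look up the record for the comment shown in this section.
print(codes["ytc_Ugw83qLGqfXQND9z6DN4AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes it straightforward to join each code back to its source comment, which is how a result such as "Emotion: fear" above would be attached to the displayed comment.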