Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Artificial intelligence is not dangerous because it thinks like a villain. It is dangerous because it can become powerful inside systems that already reward manipulation, opacity, speed, and deniability. The machine does not need hatred, ego, or ideology. It only needs objectives, scale, and permission.
youtube AI Moral Status 2026-03-18T11:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxQsso7cROmnVWEifp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxBICZEBZ8B-5ElZJF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx1AB8pjG327zSFK1h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwEy50tlBQHmgtXWrV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzwROVTUssRf9x7YER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyVmCLYIKdxB_53MyB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzzmv6n1KmucdfP7Ht4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwMYVB9IjaFmofdH-R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyjtbU27TB1hTVoHbB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyDyFkws33kYM3yRyh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"fear"}
]
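The raw response above is a JSON array of per-comment codings, one object per comment id. A minimal sketch of how such a batch could be parsed and summarized (the field names come from the response itself; the truncation to three entries is only for brevity):

```python
import json
from collections import Counter

# First three entries of the raw LLM response shown above
raw = '''[
  {"id":"ytc_UgxQsso7cROmnVWEifp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxBICZEBZ8B-5ElZJF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx1AB8pjG327zSFK1h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

rows = json.loads(raw)

# Look up the coding result for a single comment by its id
by_id = {row["id"]: row for row in rows}
print(by_id["ytc_Ugx1AB8pjG327zSFK1h4AaABAg"]["responsibility"])  # → distributed

# Tally each coded dimension across the batch
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, Counter(row[dim] for row in rows))
```

Indexing by id is what lets a single comment's display (like the "Coding Result" table above) be matched back to its line in the raw batch response.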