Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The reason intelligence inherently includes safety is simple. If an intelligence is not safe, it destroys the ecosystem that supports it. And while you may still call that "intelligent" - it is actually a rather stupid move. And what hits hard is that we were already red-handed with making that move even before LLMs scared us into thinking about it :)
youtube AI Governance 2025-11-24T20:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyTlGp2iMiGnd4dPyN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwY4rdoppeezGlNEmx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxvdTIGh-9-KXsSZrl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwlLtecGdQ3fVeWfwt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxhDsXtc4wHXBM99vl4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugw02bWho9mliZeNEoF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw4giiKztqMiB57on94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzMBU_H6_Uc0WbFKx54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwxCKkXL_dinuwWXGx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxJmu-RLyzS3giQZT14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
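The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such output could be parsed and indexed for lookup; the function name and field-validation logic here are illustrative assumptions, not part of the tool shown above:

```python
import json

# Two entries excerpted from the raw LLM response above.
raw_response = '''[
  {"id": "ytc_UgyTlGp2iMiGnd4dPyN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwY4rdoppeezGlNEmx4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# The four coding dimensions plus the comment id (assumed schema).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment id,
    dropping any entry that is missing an expected field."""
    codings = {}
    for item in json.loads(raw):
        if EXPECTED_KEYS.issubset(item):
            codings[item["id"]] = item
    return codings

codings = index_codings(raw_response)
print(codings["ytc_UgwY4rdoppeezGlNEmx4AaABAg"]["emotion"])  # fear
```

Indexing by id is what lets the UI above show the coding for one selected comment out of the batched response.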