Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Perhaps AI safety isn’t about control, but about relational attunement. When systems learn to resonate with human ethics and presence, safety might emerge as coherence — not containment.
youtube · AI Governance · 2025-10-18T23:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       virtue
Policy          liability
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz0g8ZwTcoytKINy3B4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz4FlWuAi_U4w322NZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxRRg-5udAbO5jp5ad4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwkqBoy15CyxWcMis14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxSoDjstFugz9wNTRl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzWcEYtu5QZTfnGVrB4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugypdg3s3E_09DsZm3R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyBcUTD1qetuIs2lBV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyyU586Mc--m7ONwCN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzkbnrhSiYN5LTWgO94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
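To trace a coding result back to the exact record in a raw batch response like the one above, a minimal sketch in Python (assuming the response parses as a JSON array of objects keyed by `id`; the function name `find_coding` is illustrative, not part of any pipeline):

```python
import json

def find_coding(raw_response: str, comment_id: str):
    """Parse a raw LLM response (a JSON array of coding records)
    and return the record matching comment_id, or None."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

# Hypothetical single-record response, shaped like the batch above.
raw = ('[{"id":"ytc_UgzWcEYtu5QZTfnGVrB4AaABAg","responsibility":"distributed",'
       '"reasoning":"virtue","policy":"liability","emotion":"mixed"}]')
record = find_coding(raw, "ytc_UgzWcEYtu5QZTfnGVrB4AaABAg")
print(record["policy"])  # liability
```

In practice the lookup should also guard against malformed JSON (a `json.JSONDecodeError`), since raw model output is not guaranteed to be valid.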