Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think one fact should be clear. Given that AI's are set to continue to advance in intelligence (the rate doesn't matter for the purposes of this), humans are going to come into conflict with AI at some point (because of course we are). It is only a matter of time. A practical conversation that no one seems to be having is how are we going to deal with the conflict?
youtube AI Governance 2025-01-15T18:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyuKve0p8NacQZ3jl94AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxliOdOHICKNldjUnV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyRscm_LqcObDfLevF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw6hcLS-ig_2l0mKih4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzgNakNI2froR_VBJh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyvbW-3XGJXi6zQWkp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx82TcIE1NIjP5EFhd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzghbarzZgKxERfW_R4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgypFt-geNkrVXiCZKZ4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzgIK4NlPPKCCuuXFB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
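To inspect the model output for one coded comment, the raw response can be parsed as JSON and searched by comment id. The sketch below is a minimal example assuming the field names shown in the response above; `find_coding` is a hypothetical helper, not part of the coding tool, and the string is truncated to two entries for brevity.

```python
import json

# Raw LLM response as returned by the model (truncated to two entries;
# field names match the JSON shown in this section).
raw_response = '''[
  {"id": "ytc_UgyRscm_LqcObDfLevF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyvbW-3XGJXi6zQWkp4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

def find_coding(raw: str, comment_id: str):
    """Parse the raw LLM output and return the coding record for one comment id."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

# Look up the coding result shown above for this comment.
coding = find_coding(raw_response, "ytc_UgyvbW-3XGJXi6zQWkp4AaABAg")
print(coding["emotion"])  # fear
```

This makes it easy to cross-check a dimension in the "Coding Result" table against the exact value the model emitted.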