Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t know what’s worse… leveraging faulty and hallucination-prone “AI” for weapons targeting, or the potential that we might create a real AI and use it for the same thing
YouTube · 2026-03-07T01:5… · ♥ 1
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: none
Emotion: fear
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxSPUM20hj0-3tPYJ14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyC0aFbnUHgNVeXyIx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxzvPsvAc2VSICJneZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzre-7yHYO5Y6go9nh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxyic7g_Fig-IQnhTJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz1BFjn9chZB3vcr_t4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxChGkI8FGQVa1jxiF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzZEfxbU49oga6dBXp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwbCsRsq1Xp0C2tieV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzLswZ7kLrQmcuaoIZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
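A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the four dimension names come from the response itself, but the sets of allowed values are only inferred from the labels visible in this one batch, not from any official codebook.

```python
import json

# Allowed values per dimension, inferred from this sample batch only
# (a real codebook may define more categories).
ALLOWED = {
    "responsibility": {"developer", "user", "company", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response and index valid records by comment id."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {value!r}")
    return {rec["id"]: rec for rec in records}

raw = (
    '[{"id":"ytc_Ugxyic7g_Fig-IQnhTJ4AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)
codes = parse_coding(raw)
```

Indexing by `id` makes it easy to join each code back to the original comment, and the validation step catches the common failure mode of the model drifting outside the label set.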