Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Lol! This won't happen, ai cannot just decide for himself. That's not how coding works, theres always limitation from the code and the input. This whole thing of the "AI uprising" and "AI is gonna kill us all!!!" Sounds so stupid and unrealistic. Bruh cmon guys there is no ai uprising, theres not that much to be worried about...
youtube AI Governance 2026-03-18T10:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx9Uo6xGrA6Wj5p8CN4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgwD1QEdBx3wcbqGEOl4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzskbkhiKxO07LHJPh4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxfjVnI3o49UZNjFWx4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgwYInf0ag1DdNOh_oh4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugwi9X0ptTn05TSxr8d4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgySoDsqDDWpOr97Qrl4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxL-Y4g5I1F3Pyt9gV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgzHpjiROB1qnLI5x7x4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugyc63MHi5YwGcL_JQh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "outrage"}
]
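A raw response like the one above can be parsed into per-comment coding results. The following is a minimal sketch, not the tool's actual code: the dimension names ("id", "responsibility", "reasoning", "policy", "emotion") come from the response shown, but the value sets below are only the values observed in this example, not necessarily the full codebook, and the comment ids are hypothetical.

```python
import json

# Values observed in the example response above; the real codebook
# may permit more (this is an assumption, not the tool's schema).
OBSERVED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"outrage", "approval", "indifference", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codes)
    into {comment_id: {dimension: value}}, dropping malformed rows."""
    out = {}
    for row in json.loads(raw):
        cid = row.get("id")
        codes = {dim: row.get(dim) for dim in OBSERVED}
        # Keep only rows whose every value has been seen in the codebook.
        if cid and all(codes[d] in OBSERVED[d] for d in OBSERVED):
            out[cid] = codes
    return out

# Hypothetical two-row response in the same shape as the raw output above.
RAW = """[
  {"id": "ytc_example1", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_example2", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

codes = parse_codes(RAW)
print(codes["ytc_example2"]["emotion"])  # fear
```

Validating against a closed value set before storing is what makes a mismatch between the coding-result table and the raw response (e.g. a model emitting an unlisted emotion) detectable rather than silently recorded.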