Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My primary concern with AI is that it's still humans programming/training these models. All it would take is one malicious or incompetent actor to feed it something wrong and the program then continues to "evolve" with around that bad information. It could so easily go so sideways. Nevermind the sentient, SkyNet problems (potentially).
youtube AI Jobs 2026-02-05T18:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwRYJcT6ae8yYzKsmR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzhCxgDjCq6K4Sg0hV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz-e5SQjVQc5EfDYFV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw_ni13AELKZVX3OxN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwGgcyPfHCXo5YR3tZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyd58Tx1N121cAJm114AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxVgYOzp-XK7_8QNiZ4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyFb7bsH6RmU96E_bV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzffBID1YQc5g8gX9Z4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxJeCFQ60WRr0TLCBd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
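The coding result shown above is a single row of this batched response: the model returns one JSON object per comment, and the displayed table is the object whose id matches the comment being inspected. A minimal sketch of that lookup, assuming the raw response is available as a string (the helper and variable names below are illustrative, not from the tool itself; the id is copied from the response above):

```python
import json

# Hedged sketch: parse the batched LLM response and key each coded row by
# comment id, so one comment's codes (responsibility, reasoning, policy,
# emotion) can be recovered. Only one row is reproduced here for brevity.
raw_response = """[
  {"id": "ytc_Ugyd58Tx1N121cAJm114AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse the batched response and key each coded row by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(raw_response)
row = codes["ytc_Ugyd58Tx1N121cAJm114AaABAg"]
print(row["responsibility"], row["emotion"])  # prints: developer fear
```

In practice a model may return malformed JSON, so a real pipeline would wrap `json.loads` in error handling and validate that every expected comment id appears in the parsed array.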