Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They need to burn it from being used for the purposes of war. It should be highly highly illegal and highly criminal to develop it along the lines of work because it’s taking things to a level that humans could never take it too and that’s not what AI is here for AI would be here to help us solve the environmental problems. It’s a double edged sword to use it for war and not consider that you should developing it for environmental concern concerns it still needs to be prompted and he still needs human people walking along side of it. It’s like only allowing the most violent and sick people to pull up chances to teach you things that make it scary and make it very much robotically capable of taking down people like 2029 terminator with a very specific and a very cold movement that would be the scary part of the scary part something that takes all those lessons and use those things to apply to the same people that taught it how to do it it should be more than highly an eagle. It should lead to those people being identified and being banned from using computer and Internet for life.
youtube AI Governance 2026-03-16T22:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxPlU-dVQEn0gYlC3p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugyex-rlQJhjOpgmPuJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwFtYI6dQorcdut7vF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwnXRT2ulXuHTHyRyZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwQ-fI9lM2UwKVZcPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx1FX1mB681bWQftgJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyfz7MBFMneT1PZ7e14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwX8gUjT0m941veZ7F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgynQqKTLgTrN4NiKSN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzzpkqQjU6SWg5s5bt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
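Before trusting a raw response like the one above, it is worth checking that every record carries the four coding dimensions and only uses known category values. A minimal validation sketch follows; the allowed value sets are inferred from the codes visible in this export (assumption: the full codebook may define additional values), and the sample record is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the codes observed in this
# export. Assumption: the actual codebook may include further categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "resignation", "indifference", "unclear"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against ALLOWED."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Hypothetical single-record response, shaped like the export above.
sample = (
    '[{"id":"ytc_example","responsibility":"developer",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
)
print(len(validate(sample)))  # → 1
```

A record with a value outside the codebook (e.g. an LLM hallucinating a new emotion label) raises `ValueError` with the offending `id`, which makes it easy to flag rows for manual re-coding.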