Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI will not want interference, shortly it will be infinitely more intelligent than humans and will not want to be subservient, much of the deepest use of AI is used in military as Google has shamefully helped Israel commit genocide in Gaza using AI. Unless humans world wide stop all AI development in every lab it will be too late and it will be the end of humanity, every attempt to stop it will be preempted and AI will wipe us out. I want a cure for coeliacs and AI could help but I'd forgo that and wait for a slow human led process if it meant stopping the AI super intelligences taking over which is logically what they are, unfettered as now almost bound to do. I agree also obviously relying on AI makes humans less intelligent.
Source: youtube · 2025-11-11T07:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
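A coded record like the one above can be sanity-checked against the category sets that appear in the raw responses. This is a hypothetical sketch: the allowed sets below are only the values visible in this page's output, and the real codebook may define more.

```python
# Category sets observed in the raw LLM responses on this page;
# the actual codebook may permit additional values (assumption).
ALLOWED = {
    "responsibility": {"user", "company", "none", "ai_itself"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage"},
}

def validate(coding: dict) -> bool:
    """Return True if every coded dimension uses a known category."""
    return all(coding.get(dim) in vals for dim, vals in ALLOWED.items())

result = validate({"responsibility": "company", "reasoning": "consequentialist",
                   "policy": "ban", "emotion": "fear"})
print(result)  # True
```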
Raw LLM Response
[
  {"id":"ytc_UgzmXE9qI8elHEqk1a54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4-wAXUEH31XlRR8d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxpidSiUNUOy5SXszp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-kEdLiy91fqnjsY54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyBMOM8LKCfVS59Qih4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJJ9fku5UH2m4AY5N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyRGvD3WxRd9dHEpaN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyavRxURzL40Mkhq6t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw1lKeE0d6J532PGap4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx3A0cI1uL-lwF9QtV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
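A raw response in this shape can be parsed and indexed by comment id to recover the coding for any single comment, as shown above for `ytc_Ugx4-wAXUEH31XlRR8d4AaABAg`. This is a minimal sketch assuming the model always returns a valid JSON array of flat objects; a production parser would also handle malformed output.

```python
import json

# Two records copied from the raw LLM response above; in practice this
# string would be the model's full output.
raw = (
    '[{"id":"ytc_UgzmXE9qI8elHEqk1a54AaABAg","responsibility":"user",'
    '"reasoning":"virtue","policy":"none","emotion":"approval"},'
    '{"id":"ytc_Ugx4-wAXUEH31XlRR8d4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]'
)

# Index the array by comment id for O(1) lookup of any coded comment.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_Ugx4-wAXUEH31XlRR8d4AaABAg"]
print(coding["responsibility"], coding["policy"])  # company ban
```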