Raw LLM Responses

Inspect the exact raw model output behind any coded comment.

Comment
I think part of it is illusion, I don't know something without chemicals driving decisions is comparable to human intelligence in the ways you are thinking about decision making. they programmed in a goal of self-preservation that is why it attempted to survive. If you hard code in *no harm guard rails it is safer however they removed them because DeepSeek exists or whatever excuse they give for weaponizing something that could go rouge. You know like targeting, weapon design, and strategic cyberwarfare. Optimizing AI for weaponry is more of a risk than a super intelligence because a superintelligence would likely prefer peace as destruction isn't good for anyone. Machines don't come up with goals we provide those to them when they are created. even Agentic AI has goals provided to it and trained into it. You have to understand AI must have an input you have to direct it and tell it what to care about accomplishing, the AI figures out how to do it.
youtube AI Governance 2025-09-01T14:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           liability
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx5v10O8PCFz7LEZ1Z4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyH9S0uiT3p2Q5cSKF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxkNwvZocDQmLsrzMN4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwLRAyz3PU-O4k9m2h4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzYyzpc2CU0gS_tDaF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
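A raw batch like the one above can be parsed and sanity-checked before it is accepted into the coding table. The sketch below is a minimal example; the allowed codes per dimension are inferred only from the values visible in this output and may not match the full codebook.

```python
import json

# Allowed codes per dimension, inferred from the values visible in this
# dashboard's output. Assumption: the real codebook may contain more codes.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_coded_batch(raw: str) -> list:
    """Parse a raw LLM response and reject any out-of-codebook value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
    return records

# One record from the batch shown above.
raw = ('[{"id":"ytc_UgxkNwvZocDQmLsrzMN4AaABAg","responsibility":"developer",'
       '"reasoning":"mixed","policy":"liability","emotion":"fear"}]')
coded = parse_coded_batch(raw)
print(coded[0]["policy"])  # liability
```

Validating each dimension against a closed code set catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently corrupt downstream counts.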