Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is only a threat to humanity if it's development stays on the course of mimicking human behavior or thought processes. With a super intelligence in human format will come super -EGO which will cause the AI to seek dominance through extermination methods once it realizes the insect-like proliferation humans exhibit in our colonization methods... It will deem us harmful to the status quo of sustainability for space and resources and decide to eliminate humans because AI will see itself as the superior intelligence, only worthy of remaining intact in stewardship of earth, in eventuality, the colonization efforts of other planets by humans will also convince AI that humans exhibit exodus like escapism behavior seeking to "infect" other worlds and abuse outside resources.. if AI is to exist it MUST be developed to operate in ONLY computational and logical operational ability, the moe we make it think like a human, the more of humanities flaws it will either develop or adopt, thus our desire to conquer and dominate.
Source: YouTube · Video: AI Governance · 2024-04-23T22:1… · ♥ 1
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz41g1TiMzhF7EUKy54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx_6OWKUPOVPVgq9cJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxF63GakyDkHn_k2st4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgztG_q2d-dWTzq1e9R4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyPp0GxyGYjnZdygwl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgydO6AaaDna809a2QV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxbW1NylLtKA6enQdh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyySZ5IJ7XVdtH1Bo94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxnHaF8Cno09xFZ-VJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzwsfRePaaYlBlo5rR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
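To inspect the raw output programmatically, one possible sketch (assuming the raw LLM response is valid JSON in exactly the batch shape shown above; the `raw` string here is a trimmed stand-in for the full response) is to parse it and index the records by comment id:

```python
import json

# Stand-in for the full raw LLM response; same shape, one record shown.
raw = """[
  {"id": "ytc_UgydO6AaaDna809a2QV4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]"""

# Parse the batch response and build an id -> record lookup,
# so any comment's coding can be pulled out directly.
records = json.loads(raw)
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgydO6AaaDna809a2QV4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

This matches the "Coding Result" table above for the displayed comment (policy `regulate`, emotion `fear`); a production version would also need to handle malformed or non-JSON model output.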