Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:11:01 This is where Soares and Yudkowski completely lose me. The biggest dangers of AI happen way, way before the (completely hypothetical and perhaps impossible) independent superintelligence. Humans using AI to do malicious things is a much more imminent and probable threat, and we should be focusing our efforts there rather than a made-up sci-fi scenario.
youtube AI Moral Status 2025-10-30T22:4… ♥ 405
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugwc78q1-yhWyAYpw2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhtTk1YCtWceP2_AN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz6SZZpuyPn790yGol4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx_IhAH_0Jf9jsdjqN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxvYkUbC5yF6CJaP-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHldJqrZEso3iDVFd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwoV_DnzWCYQv7UWgV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzJpSf4e07G1-61ARt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxBGPvKxJ3I1_OTkmN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyhp4NiloR2aDfr5IZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
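A minimal sketch of how a batch response in this format can be matched back to one comment's coding. The field names (id, responsibility, reasoning, policy, emotion) come from the raw response above; the coding_for helper is hypothetical, and the raw_response string is truncated to two records for brevity.

```python
import json

# Raw LLM batch response: a JSON array of per-comment coding records.
raw_response = '''[
 {"id":"ytc_Ugwc78q1-yhWyAYpw2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzhtTk1YCtWceP2_AN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def coding_for(raw: str, comment_id: str):
    """Parse the batch response and return the record for one comment id,
    or None if the model skipped that comment. (Hypothetical helper.)"""
    records = {r["id"]: r for r in json.loads(raw)}
    return records.get(comment_id)

coding = coding_for(raw_response, "ytc_UgzhtTk1YCtWceP2_AN4AaABAg")
print(coding["policy"], coding["emotion"])  # regulate fear
```

Indexing by id rather than by position guards against the model reordering or dropping records in its output.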