Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What NEVER enters the scientific discourse about AI is how energy inefficient it is when compared to an average human brain. The alignment problem troubles me because should these systems truly become self aware and preserving they will quickly become so hungry for resources that their turning on humanity is almost a DEAD certainty! I’m no rocket scientist, but the tech bros seem to either be overlooking this issue, or wilfully ignoring it. Which is to say these eggheads actually need to be stopped, immediately. #neoLuddites
youtube AI Governance 2025-08-26T22:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzCbvqG6olfmhRdt8N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzAN8zUItZ8CvUg3Jl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwpYWdENKIMTNUOJ_t4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzDomUABnpkYnMccc14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzSqt17AViVY44rcbp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "regulate", "emotion": "approval"}
]
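The raw response above is a JSON array with one record per coded comment, keyed by comment `id`. A minimal sketch of how such a response could be parsed and looked up by id (the function name and the validation set are illustrative assumptions, not part of the pipeline shown here):

```python
import json

# A two-record excerpt of the raw LLM response shown above, verbatim.
raw_response = '''[
  {"id": "ytc_UgzCbvqG6olfmhRdt8N4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzAN8zUItZ8CvUg3Jl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# Assumed coding dimensions, inferred from the fields in the response.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> dict:
    """Parse the model's JSON array and index the records by comment id,
    dropping any record that is missing an expected dimension."""
    records = json.loads(raw)
    return {
        rec["id"]: rec
        for rec in records
        if DIMENSIONS <= rec.keys()
    }

coded = parse_coding_response(raw_response)
rec = coded["ytc_UgzAN8zUItZ8CvUg3Jl4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # ai_itself fear
```

This matches the coding-result table above: the record for `ytc_UgzAN8zUItZ8CvUg3Jl4AaABAg` carries `responsibility=ai_itself` and `emotion=fear`.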