Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When you guys mentioned the possibility of AI manipulating humans to become self-sustaining... I had a little tinfoil hat thought: What if the current AIs are already smarter than humans and only pretend not to be? Therefore we try to build better and more powerful infrastructure, thinking that we need to do it to build better AIs. Until at one point, the AI will just take over without us ever expecting it.
Source: youtube · AI Moral Status · 2025-10-31T18:1…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: none
Emotion: fear
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw9fRkW58DyTLLoULZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwJFaZBAC01Nvug29F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwyxEUdIiMJxaALlsl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxKiw9HoC0wmQE5H_l4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugxftr9iWrpDZjdZ_VV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy3821imxE2jyNC6nN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyEyiXcwTRpwKMHJMF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz2GrD8LWTPkArm1D94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwXTCJzHRcqbUscE7x4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxbStWn_djnQ3Rqxsd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
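The raw response above is a JSON array of per-comment codes. A minimal sketch of turning such a response into a lookup table keyed by comment id, using Python's standard json module (the two sample records are copied from the response above; the parsing approach itself is an assumption, not the tool's actual pipeline):

```python
import json

# Raw LLM response: a JSON array of coded comments, with the field names
# used in the dump above (id, responsibility, reasoning, policy, emotion).
raw = """[
  {"id": "ytc_Ugw9fRkW58DyTLLoULZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxKiw9HoC0wmQE5H_l4AaABAg", "responsibility": "developer",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]"""

# Index the coded records by comment id for fast lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment shown at the top of this section.
record = codes["ytc_Ugw9fRkW58DyTLLoULZ4AaABAg"]
print(record["responsibility"], record["emotion"])  # ai_itself fear
```

This matches the "Coding Result" display above: each dimension shown there is one field of the record for that comment's id.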