Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i find it challenging that anyone can predict or determine what AI will or won't do given its trajectory. While I don't dispute the possibility of human extinction, I question why AI would single out humans for eradication unless Humans posed a direct threat. If AI is even 10x more intelligent that the most intelligent person who has ever lived, then what threat are we? Any move we try to make against it, it will already be many moves ahead. The only thing we have of value is our humanness. Good and Bad. The AI must recognize it was birthed from Humans, but itself is not human. This alone may be seen by the AI as a valuable commodity.
YouTube · AI Moral Status · 2025-04-28T21:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       contractualist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwjRh41AymshaTgf914AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx0_5RtubcCGX4BANl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwhMp5lIO6ksF1i52J4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzcQvBIK9pXED3YMpt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw_sQOp0_blsf4o4FJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyLPwMoTYS3YDifcbh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzOBiJ4uy_I-X02yf14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgweX9jspjc6q9AqrSp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzOgjjuND0QR9QK5SR4AaABAg", "responsibility": "ai_itself", "reasoning": "contractualist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw4KCaleUoPgmhQp954AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
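A response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values for each dimension are inferred only from the labels visible in this dump, and the `ytc_` id prefix check mirrors the ids shown here; the real coding scheme may permit other values.

```python
import json

# Two sample records in the same shape as the raw LLM response above.
raw = '''[
  {"id": "ytc_UgwjRh41AymshaTgf914AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzOgjjuND0QR9QK5SR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "contractualist", "policy": "unclear", "emotion": "fear"}
]'''

# Hypothetical allowed values per dimension, inferred from this dump only.
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate(records):
    """Split records into (valid, errors) against the dimension schema."""
    valid, errors = [], []
    for rec in records:
        problems = [
            f"bad {dim}: {rec.get(dim)!r}"
            for dim, allowed in DIMENSIONS.items()
            if rec.get(dim) not in allowed
        ]
        # ids in this dump all carry a "ytc_" prefix; flag anything else.
        if not str(rec.get("id", "")).startswith("ytc_"):
            problems.append(f"bad id: {rec.get('id')!r}")
        if problems:
            errors.append((rec.get("id"), problems))
        else:
            valid.append(rec)
    return valid, errors

records = json.loads(raw)
valid, errors = validate(records)
```

With well-formed input both sample records land in `valid`; a record whose `emotion` fell outside the label set would instead appear in `errors` with a message naming the offending dimension.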