Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I only have one thing to say, i am a software engineer. If AI was able to be really smart, at the point of out smarting humans, the first thing it would do is convince humans that humans are smater. Also a smart robot would not tell you its plans to end human race. That would not be part of the plan.
youtube AI Governance 2024-03-25T17:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxuHi5IfSc1a7G5gNB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx9UloyUaOoo2sG9Kd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw1PXT-f2LEf0JUoKd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyhHxg1dL7sZyCrkFF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzfTIQO2CoSpkOz2Ap4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyRY7JHzBJnI4grgs14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyNnakAdVRdNp_FtJ14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxygVxJUIo4VY-2m_F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw62-PupJDw1wM1bmZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwVM7ZQRLZTIrFAEft4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"}
]
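Because the model codes a batch of comments in one JSON array, recovering the coding for a single comment means parsing the response and indexing by `id`. A minimal sketch of that lookup, using one record from the response above (the comment shown here, `ytc_UgyhHxg1dL7sZyCrkFF4AaABAg`) as sample data — the helper and variable names are illustrative, not part of any particular pipeline:

```python
import json

# Sample: one record excerpted from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgyhHxg1dL7sZyCrkFF4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# Parse the batched response and build an id -> codes index.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the coding for the comment displayed on this page.
codes = by_id["ytc_UgyhHxg1dL7sZyCrkFF4AaABAg"]
print(codes["responsibility"], codes["emotion"])  # ai_itself mixed
```

In practice the raw string would come from the model client rather than a literal, and a malformed response would raise `json.JSONDecodeError`, which is worth catching so one bad batch does not halt coding.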