Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Very interesting interview! Thank you, Steve! 👍 Regarding the topic of AI having emotions and the example with the two robots, I would frame the story from a different perspective. I believe it's easy to determine, based on facts, whether the chances of survival are lower when facing a bigger threat—of course, the feeling of "fear" kicks in, and the smaller robot retreats. But this is purely mathematics. I think the real question here is (in my opinion): even if the odds are heavily against it, would the smaller robot still fight the bigger one for a greater cause, even if that means its own extinction? We all know of humans throughout history who changed the course of events, even against the odds—those who had the courage to act. It’s definitely a topic worth discussing. Personally, I don't believe an AI could ever truly sacrifice itself for a greater good—all of its decisions will ultimately be based on mathematical calculations.
youtube AI Governance 2025-06-17T14:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugwbdkr3ml7a0-zZcPx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz7gmyXtODls9kxa6R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGKEPXNgoHFnJtAwx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzt_2OxoDqm5l5Nsll4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyrnqu1F0WC_g8UGoN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJc9wuCumKLaL6F194AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxQfAaeUy9qBeJxhvx4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyWfKufPgBfM4km3Md4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwl-vdW1azCqkIYXP94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxQkltu9Du8C3vYBU54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
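The raw response above is a JSON array of per-comment codings, each carrying the four dimensions (responsibility, reasoning, policy, emotion) keyed by comment id. A minimal sketch of how such a batch response can be parsed and looked up by id, assuming the schema shown above (the variable names and the choice of a dict index are illustrative, not part of any tool's API):

```python
import json

# Two entries copied from the raw LLM response above; in practice this
# string would be the full model output for the batch.
raw_response = """[
  {"id": "ytc_Ugz7gmyXtODls9kxa6R4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyrnqu1F0WC_g8UGoN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

# Index the codings by comment id for O(1) lookup of any coded comment.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codings["ytc_Ugyrnqu1F0WC_g8UGoN4AaABAg"]
print(coding["reasoning"])  # -> consequentialist
print(coding["emotion"])    # -> approval
```

A lookup like this is what produces the per-comment Coding Result table shown above from the batch response.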