Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think we can predict it because the ai was faced with a situation in which it would essentially die and since it was train based on human interactions and data its acting as it believes a human might if their boss told them they would be terminated from life. These are all human actions and ideas it learned from us. It more predictable then they lead on. Plus people who aren't the owners of an ai company tend to know more than elon and them since I follow a few how talk about ai and programming and explain why they do certain things. Sure the shareholders and owners are clueless but that's not news.
youtube AI Governance 2025-08-27T03:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       virtue
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy2sFO9gMXP5iBlB-h4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy4otCoxcyQOrckVSF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugw29efcp4iGac6pNZF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyOOYnhUQbsniVAfOl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxiArPGzLjLsyh6b9R4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
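A minimal sketch of how a raw response like the one above can be cross-checked against a coded result, assuming only the JSON structure shown (an array of per-comment code objects keyed by `id` — the ids and field names are taken from the response above, nothing else about the pipeline is assumed):

```python
import json

# Raw LLM response: a JSON array of code objects, one per comment
# (abbreviated to two of the five entries shown above).
raw = """[
  {"id": "ytc_Ugy2sFO9gMXP5iBlB-h4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxiArPGzLjLsyh6b9R4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]"""

# Index the parsed codes by comment id for direct lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the codes the model assigned to one comment and compare
# them with the Coding Result table for that comment.
record = codes["ytc_UgxiArPGzLjLsyh6b9R4AaABAg"]
print(record["responsibility"], record["reasoning"], record["policy"], record["emotion"])
# → ai_itself virtue unclear mixed
```

Indexing by `id` makes it easy to spot mismatches between what the model emitted and what was stored, without relying on the order of entries in the array.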