Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@PJZhjbblkbbjkkk It is a very good video about AI. But I do want people to understand that AI isn't like a human. It doesn't actually think. Like when Grok went rogue. From my understanding, Grok was asked about whether liberal or conservative policies were better and Grok said liberal policies were better. Elon said it went woke and told Grok to act more like Elon and that's why it started acting that way. So in the end, it is still an unthinking unfeeling code. And as a long time gamer, remember, "computers always cheat." So yes, if we give unfettered access to do whatever it wants, it will follow every AI movie ever and decide we're bad for it and the planet and ourselves and "solve" the problem. Not because it cares but because that's the pattern we've fed it. Also, it's vastly overstating the actual capability of current AI. Taco Bell tried putting AI in a drive thru and a guy ordered 18,000 waters out of spite. Another person said he ordered a Mountain Dew and it was stuck in a loop asking him what drink he wanted. Military AI was defeated by a guy hiding in a box like in Metal Gear Solid. Right now, it's just tech bro hype creating another bubble that's going to pop.
youtube AI Governance 2025-09-08T01:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgxeBuTjScK3x3GO3_d4AaABAg.AP3SkYPearkAPXvQOso5fz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgyLO8XPBKRKjxL0T894AaABAg.AOpJjUwu_ChAPHlA_L2UWv", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgwSFYYBCTq85YQO7Ql4AaABAg.AOR_4u6rZYWAORdwF_As_p", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxesSiaqLnuMEk1Z9R4AaABAg.AOLk3H491VBAPJHQOLoyB-", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugw0PlQ4ulaNSie6PTV4AaABAg.ANozknJx8PGANrRCr9mO7q", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugxvu2_TP9HrSBgGUuB4AaABAg.AMvsK-Nc2hOAMwg9WgqNbB", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgwYo-cobBEPhqyXMSx4AaABAg.AMtQBuvPcMIAMwgV4ai46z", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugwxixq3fCtW7_3jkq54AaABAg.AMqsSnWMCfgAMs7swUirwx", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgzQ5Y-KHCUXbXQqoNd4AaABAg.AMmWbr_gtOWAMmwIknwLBX", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzQ5Y-KHCUXbXQqoNd4AaABAg.AMmWbr_gtOWAMnBwtgxliW", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
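The raw response is a JSON array of per-comment coding objects, where each object carries the comment `id` plus the four coded dimensions. A minimal sketch of how such an output can be parsed and indexed by comment id (the field names come from the JSON above; the raw string is truncated to two entries here for brevity, and the lookup id is taken from the last entry):

```python
import json

# Two entries copied from the raw model output above; field names
# (id, responsibility, reasoning, policy, emotion) match that JSON.
raw = '''[
  {"id": "ytr_UgxeBuTjScK3x3GO3_d4AaABAg.AP3SkYPearkAPXvQOso5fz",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgzQ5Y-KHCUXbXQqoNd4AaABAg.AMmWbr_gtOWAMnBwtgxliW",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment shown on this page.
code = codings["ytr_UgzQ5Y-KHCUXbXQqoNd4AaABAg.AMmWbr_gtOWAMnBwtgxliW"]
print(code["responsibility"], code["reasoning"], code["emotion"])
# → developer deontological indifference
```

The last object in the array matches the coded dimensions shown in the table above (developer / deontological / none / indifference), which is how the per-comment view is populated.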