Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ehhhhh I dunno about this guy. He says "oh yeah AI can totally tell what is and isn't true, we just can't get it to express that to us" and then moves on without providing any evidence whatsoever. I dunno dude, if it lies constantly and we can't get it to recognize a lie and stop, it seems like our default conclusion should be that it doesn't know what truth is. Ten minutes later he says, without any apparent self-reflection, "we have no idea what's going on inside AIs. And because we can't see what's going on inside there, we can imagine that it's whatever we want." He also mentions the classic "it's really hard to convince someone of something when their salary depends on them not understanding it." How about someone whose book residuals and podcast appearances depend on them not understanding something?
youtube AI Moral Status 2025-11-03T17:4… ♥ 2
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxs4e2UNdweIXZcscJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz1a7i9Y0bJagEdERZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz2uMrP8Bmv3J1qRBR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwsL5oUEYvqk1uyj4R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzq0KOoim73dCntkdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzsafBd5FfFmH9EZSB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxjJ8DZEUtc3q1DQ-B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4IGU5QzByFqxVLt14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxaO3DME9TsmKePwPV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzFY7J9QxhTDcPdPX14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"skepticism"}
]
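The raw response above is a JSON array of per-comment records, keyed by comment id. A minimal sketch of how one record could be pulled out of such a batch — the parsing approach and variable names here are illustrative, not the tool's actual pipeline; the field names and the one sample id are taken verbatim from the response above:

```python
import json

# One record copied from the raw LLM response above; in practice the
# whole batched array would be loaded instead of a single entry.
raw = """[
  {"id": "ytc_UgxaO3DME9TsmKePwPV4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "outrage"}
]"""

# Index the batch by comment id so a single comment's codes can be
# looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

record = codes["ytc_UgxaO3DME9TsmKePwPV4AaABAg"]
print(record["responsibility"], record["emotion"])  # developer outrage
```

This record's four dimension values match the Coding Result table above, which is what "inspect the exact model output" amounts to: confirming the displayed codes against the raw batch.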