Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And this is why I am *already* so skeptical of AI and I don’t trust people to be able to use it. The output of AIs is sycophantic and arrogantly confident. I already don’t believe people have the ability to be skeptical enough about the output to tell what might be hallucinations or simply better matching of the ends of sentences vs true statements. Edit: now further in the episode. I hate it even more. I’m usually so keen on technology (as I expect are many of Hank’s followers) but I’m so far beyond scared because the people driving it appear to be ignoring the risks. So it’s reducing my ability to find it cool. Also AI psychosis is terrifying and people (particularly those who have a tendency towards mental health issues) need to be educated about the risks. It’s like we just created something cool but really dangerous and then let the world have it without a manual or any warnings. 51:06 - this. All of this.
Source: YouTube, "AI Moral Status", 2025-10-31T08:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxqeZPWCijSy8vLmfV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRunnBJ6JZkIyL7Rl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyXvv2Mh9QHyvRqQIl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugwqt9QWbFbNyhP3k5Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyQ6cX3vzGK0IYWCip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzsZXVqHuryCnOFNR54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyeD4KB3mZTSgAfyTt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxdrjBu_20OJFahPuV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugzgpt1tdS4toFzLxIZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz957vNq8JtwrGAZ3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
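The raw response above is a JSON array where each object codes one comment along the same four dimensions shown in the result table. A minimal sketch of how one might look up a single comment's coding from such a batch response, assuming the response is valid JSON with exactly these fields (the `index_codings` helper is illustrative, not part of any actual pipeline, and the sample here is truncated to two entries):

```python
import json

# Truncated sample mirroring the shape of the raw LLM response above.
raw_response = '''[
  {"id": "ytc_Ugwqt9QWbFbNyhP3k5Z4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyXvv2Mh9QHyvRqQIl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]'''

# Every coded row must carry the comment id plus the four coding dimensions.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index rows by comment id,
    silently skipping any entries missing required fields."""
    rows = json.loads(raw)
    return {
        row["id"]: row
        for row in rows
        if isinstance(row, dict) and REQUIRED_FIELDS <= row.keys()
    }

codings = index_codings(raw_response)
row = codings["ytc_Ugwqt9QWbFbNyhP3k5Z4AaABAg"]
print(row["responsibility"], row["policy"])  # developer liability
```

Indexing by `id` makes it cheap to cross-check the per-comment table (e.g. the `developer` / `liability` / `outrage` row above) against the exact values the model emitted.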