Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Dave, thank you for lending your voice to this issue. Sometimes, I think it is all too easy for smart people to assume that worries about existential A.I. risk are just bunk. Perhaps it just sounds silly to most people. If we want to face these risks seriously, we need to make more people aware of the evidence and the arguments that there is likely very serious danger in developing A.I. recklessly.
youtube · AI Governance · 2025-08-28T00:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx25xeB6t19H3bQWMh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzCnstxeruAPVghe354AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzYeUnAboZRsNgkSZh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyhNGiZslMKFSbzMTx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwDbU8lPxNeVPKchRJ4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
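The raw response is a JSON array of per-comment codings, one object per comment id. A minimal Python sketch of how such a batch can be parsed and a single comment's coding looked up (the variable names and the two-entry sample are illustrative, not part of any documented pipeline):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coding objects,
# each keyed by the YouTube comment id it applies to.
raw = """[
  {"id": "ytc_Ugx25xeB6t19H3bQWMh4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzCnstxeruAPVghe354AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

records = json.loads(raw)

# Index the batch by comment id so one comment's coding can be retrieved directly.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_Ugx25xeB6t19H3bQWMh4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

Indexing by id mirrors how the dashboard pairs each "Coding Result" panel with its entry in the raw batch response.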