Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The last part of the video seems like bullcrap to me. Why would scientists and developers leave when they realize AI is getting uncontrollably smart. Are they all cowards then? When something like that is discoved wouldn't it make sense to STAY to help control it or put in safety measures? I mean that's what I would do anyway. Guess the world is full of cowards now huh?
Source: youtube · AI Governance · 2026-03-17T07:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxzliC4-bSNUTua6NZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwz4dpRM-fj1nM957F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzIrDzQrvtZkBN7kiN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyTT2NMa4Pz6JKWhYt4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxIyTADNv8rzF8V2Nt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzwhsfLlF-OPtGIsEp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugxdpt4Jg3yM6hTN7W94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxrhDJ4moujbqgqoB94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzo033WliRcqhiryCp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwsvg4k9EUjw_12CZl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
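The raw response is a JSON array with one coding object per comment, so the per-comment result shown above can be recovered by matching on `id`. A minimal sketch of that lookup, assuming the model returned valid JSON (the function and constant names here are illustrative, not part of the tool):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def extract_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a batch coding response and return the row for one comment id."""
    rows = json.loads(raw_response)          # array of per-comment objects
    by_id = {row["id"]: row for row in rows}  # index by comment id
    row = by_id[comment_id]
    missing = REQUIRED_KEYS - row.keys()      # guard against partial codings
    if missing:
        raise ValueError(f"coding for {comment_id} is missing: {sorted(missing)}")
    return row

# The entry matching the displayed comment (id taken from the response above).
raw = ('[{"id":"ytc_UgxIyTADNv8rzF8V2Nt4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"unclear","emotion":"outrage"}]')
coding = extract_coding(raw, "ytc_UgxIyTADNv8rzF8V2Nt4AaABAg")
print(coding["responsibility"])  # -> developer
```

A dictionary keyed by `id` makes the lookup robust to the model reordering comments within the batch, which is a common failure mode when batching.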