Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Everyone here except Nate Soares sounds mind numbingly myopic, their arguments countering Nate, bordering on wishful thinking. The majority of experts in the field including the Godfathers of AI have warned there is at minimum a 5-10% existential risk concerning a fast approaching AGI/ASI. Forget 10 or even 5... just imagine if the next flight you were taking had a 1% chance of crashing, how many would board that flight?
YouTube · AI Governance · 2026-03-23T10:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
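
The four coding dimensions above appear to come from a fixed codebook. Below is a minimal sketch of that schema in Python, inferring the value sets only from the labels visible on this page (the actual codebook may define additional categories):

```python
from dataclasses import dataclass
from typing import Literal

# Value sets inferred from the labels shown on this page; the real
# codebook may include more categories than are observed here.
Responsibility = Literal["none", "developer", "ai_itself", "distributed", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "contractualist", "mixed", "unclear"]
Policy = Literal["none", "regulate", "unclear"]
Emotion = Literal["fear", "outrage", "approval", "indifference", "mixed", "unclear"]

@dataclass
class CodingResult:
    id: str  # comment ID, e.g. "ytc_UgxRHj_GqoTuKUuo8z54AaABAg"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```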
Raw LLM Response
[ {"id":"ytc_UgxRHj_GqoTuKUuo8z54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxKkolzCmNiXNpum1F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgximKBdniY8witwtEp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgzbIo26YunXGXwSagR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw0w9lGkc22srY7CX54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwrWF_VuGcSgrSOyqt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyaAcgmkYhN03Aei0x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgxSeaQIdDAAFYvWuOt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzHj2EQ7AGsA9en_854AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgznswjF1WAiIvs34pl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]