Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sam Altman is just power hungry and money crazed - and even though I like Trump, I am absolutely misaligned with his proposals to de-regulate AI. This isn't just a new technology, its a new form of 'life' (so to speak). The idea that AI could end humanity is absolutely real, if not, make 99% of us jobless and without worth. OpenAI (and others) only seek to make profit and NOT to benefit humanity - Open's definition of AGI is a product that makes them $100 billion. That tells you ALL you need to know. 90% of AI experts are worried about this, And I find it ironic that the guy who doesn't believe that AI is a threat is a phycologist, NOT an AI expert. If there is one thing to remeber its this - If 'Contingency 1' (or a China-US AI) is created, it will wipe us out, not because it hates us, but because we will be in the way of progress. Remember that.
youtube · AI Governance · 2025-08-12T09:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwbrhswIIZ9lYzYU0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4dUoxsQQrT1vSsv54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGULoxnOV0OXjVLZd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxMIWTSUOCr5kJlmxl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhLiQovEnzF6Uqzm14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwgzaPeSPAD0TMgPlt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxJqgH_Gwl_M2hFH5d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwgxzBT2JuOw6JVz0V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyac_FUzKwZiVvMS-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgznZSBbffqJp_a8-QV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
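A raw response like the one above can be parsed into per-comment codings and sanity-checked before ingestion. The sketch below is a minimal illustration, not the pipeline's actual code: the dimension names come from the response, but the allowed value sets are inferred only from the values observed here, so the real codebook may contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the observed response
# (assumption: the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "resignation", "outrage", "fear", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, validating values."""
    codings = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {item.get(dim)!r}")
        codings[cid] = {dim: item[dim] for dim in ALLOWED}
    return codings

# Example with one entry from the response above:
raw = ('[{"id":"ytc_UgwGULoxnOV0OXjVLZd4AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"outrage"}]')
print(parse_codings(raw)["ytc_UgwGULoxnOV0OXjVLZd4AaABAg"]["policy"])  # regulate
```

Validating against a fixed value set catches the common failure mode where the model invents an off-codebook label, which would otherwise silently pollute downstream tallies.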