Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's a question I'd like to hear raised and answered. Where we already have general intelligence, ie, well educated and trained human intelligence, why make a synthetic one. My expectation is because that can be exploited by a small team or company to a degree that human intelligence can't. So why do the rest of us want that? how does that work economically at scale? I'm not categorically for or against automation, I exploit a lot of automation in my work, it's great, we don't need to turn soil with our hands anymore, but like anything there is a point where something that has seemed great, becomes inappropriate and destructive, and this huge push to exploit AI to death and develop AGI is being driven too fast just to harvest investor money before people have enough info to really decide whether it's a good idea. I fell like everybody involved in AI right now have lost the plot, and many of us should have learned enough to see it by now. Wtf is going on.
youtube Cross-Cultural 2025-07-07T03:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxo3FIgzMePZHMVlER4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwLasqhlfZsAzYw_uF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxMm15i0GZHFRHom0B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwLm_QnY4MiAIQkr-p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzew2f_O2aXAfI42694AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugz1Vd1FFENJMroqnNV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwWHAuiQpmN2GT22YJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwSGwWEX9BVrynQHdF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxwSrA_hjqPtBS4tDp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "ban", "emotion": "mixed"},
  {"id": "ytc_Ugydr7rKyxpgmgNKi_54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
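The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such an output could be parsed and matched back to a single comment (the id and values below are taken from the raw response above; the lookup-by-id approach itself is an assumption, not part of the original pipeline):

```python
import json

# A fragment of the raw LLM response shown above: a JSON array of coded records.
raw_response = """[
  {"id": "ytc_Ugzew2f_O2aXAfI42694AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugxo3FIgzMePZHMVlER4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

records = json.loads(raw_response)

# Index coded records by comment id so each coding result can be joined
# back onto its source comment.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_Ugzew2f_O2aXAfI42694AaABAg"]
print(coding["responsibility"])  # → company
print(coding["policy"])          # → liability
```

This reproduces the Coding Result table above (company / consequentialist / liability / mixed) for the displayed comment; malformed model output would raise `json.JSONDecodeError` at `json.loads`, which is the natural place to add validation.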