Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Intelligence is the ability to acquire and apply knowledge and skills. The idea that current AI architectures will surpass human intelligence seems incorrect to me. AI is not truly intelligent. It is an algorithm—a sophisticated probabilistic one, but in the end it remains merely an algorithm. Think of a chess bot: is it better at playing chess than any human? Sure it is. It has insane crystallized knowledge (having played billions of games of chess, if not more). However, does it have any subjective experience of playing chess? Or can said program adapt to new rules of chess—for instance, if pawns now attacked the next square and moved diagonally? No, it cannot; its architecture simply isn’t built for that. Thus it completely lacks the “acquire” part of the definition. It has almost zero active learning (except in-context learning, which is very limited), and that’s what most of these videos ignore. As such, it can never create anything genuinely novel beyond the patterns it was trained on. Its “intelligence” is therefore purely instrumental, not creative or understanding-based. That said, there will undoubtedly be strong pressure on the workforce nevertheless, since humans in the workplace are usually cogs in a big capitalist machine.
YouTube · Viral AI Reaction · 2025-11-27T16:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwLW50Igp1EvFne9yN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugzpjw7u_swLSSX8_S54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzvTyNOib_XaT5tpGJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx0qbx5qQtLGHXM6AZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgwDXc7Ctb77r9Wntil4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyyc-YRJXM6Y4u3SRx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxRsnI1EvUhP06JM0N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwJw2DzWyVsSQlQdVV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzWYa8JtnKBvAop2K14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzWGlOWULVeRUA2QTt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
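A minimal sketch of how a raw response like the one above could be parsed into per-comment codes. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the helper name `parse_codes` and the allowed-value sets are assumptions inferred from the values seen here, not an authoritative codebook:

```python
import json

# Allowed values per dimension, inferred from the response above (assumption,
# not the tool's official codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: codes},
    rejecting records with unexpected dimension values."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        out[cid] = codes
    return out

# One record from the response above, used as a small example input.
raw = ('[{"id":"ytc_UgwJw2DzWyVsSQlQdVV4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"unclear","emotion":"indifference"}]')
codes = parse_codes(raw)
```

Validating against a fixed value set catches the most common failure mode of coding runs like this: the model inventing a label outside the scheme, which would otherwise silently pollute downstream tallies.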