Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's the problem with LLMs. They are not really AI-centric; they are word-prediction-centric with AI aspects layered on top. LLMs are not "intelligence engines" built from first principles of reasoning. They are essentially statistical sequence models trained on huge datasets to predict the next token. The "intelligence" we observe is an emergent property. When the model accurately correlates billions of complex linguistic patterns, the resulting coherence and synthesis often mimics human logic and reasoning. The core debate in AI centers on whether this potent mimicry is sufficient. They need to rebuild AI from scratch. LLMs cannot be used as the core of AI models; they can only be add-on ancillary functions. LLMs are a clever hack. Scaling up text prediction gave us something that looks like reasoning, but they aren't designed as grounded intelligence systems. The consensus is that pure statistical correlation is insufficient for achieving genuine artificial general intelligence.
youtube AI Responsibility 2025-10-01T15:1… ♥ 15
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxMe43FzP66TdPrYVx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx3ZCioQOPBCemRVzZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwcW2aXCRk6wSYJZp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzCqS-xK3HTsAhl7994AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz78xlpT6JwaGxVKvR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx5MIj2ulqkUsuuZMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw_aChV5LfMkpKO0FJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzSVoK2QmXVM3NfaDh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKfl7sMwmRh21c7F14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx_RQr0CdouoZmO5UJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
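A raw response like the one above is only usable downstream if it parses as JSON and every record stays inside the codebook. The following is a minimal sketch of such a validation step, in Python; the allowed values per dimension are inferred from the codes visible in this dump (the real codebook may define more categories), and the function name `parse_codes` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the codes that
# appear in the raw response above; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"company", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "unclear", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of per-comment codes) and
    reject any record with an unknown value in a coding dimension."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={value!r}"
                )
    return records

# Example with one record shaped like those in the dump.
sample = (
    '[{"id":"ytc_example","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none",'
    '"emotion":"indifference"}]'
)
codes = parse_codes(sample)
```

Rejecting out-of-vocabulary codes at parse time, rather than at analysis time, keeps a single malformed model response from silently skewing the coded dataset.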