Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
After looking into AI behavior more, I'm much less concerned about AI training other AI. The occurrence of hallucinations, and compounding errors seems to be a persistent issue without human intervention from what I've gathered. AI is, at this moment, much less intelligent than we're making it out to be. It's currently just a more adept database search engine with a sophisticated linguistic interface. Just ask it some simple philosophical thought experiment, or provide a few prompts to completely unpend it's language comprehension, and it completely breaks. It's ability to come to conclusions and make inferences is still limited by just available information and training, it's not making prediction models outside of what it is trained or coded for. And it doesn't actually remember past prompts, it has to compile information every time to produce a response. Of course these flaws and controls could be mismanaged to the point of a very negative unintended event, but it seems like many people are anthropomorphizing AI much more than merited.
Source: YouTube, "Viral AI Reaction", 2025-11-05T17:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxewMK1MSlYx0nNrnd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzGKypewDz97e2FuZ54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz48dtJJlbSN-fN-Al4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgziPP3ExZLjfli8AQV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz2-608P6Btt-oa_FF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxPA2qJTabDZeOQfQ54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwpSyL2-Yl82UxIM-F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx_qYgjVB6jezkXwBh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwyrm4tIckqH_opYmx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx1L3ZDHQpfE0yjhVh4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
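The raw response is a JSON array of per-comment records, each keyed by a comment `id` and carrying the same four coded dimensions shown in the table above. A minimal sketch of how such a payload could be parsed and indexed by comment id follows; the variable names are illustrative, and the `raw` string here is just the first two records of the response above, not the pipeline's actual loading code.

```python
import json

# First two records of the raw LLM response shown above (abbreviated for illustration).
raw = (
    '[{"id":"ytc_UgxewMK1MSlYx0nNrnd4AaABAg","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"resignation"},'
    '{"id":"ytc_UgzGKypewDz97e2FuZ54AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"}]'
)

# Parse the array and index records by comment id for lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up one comment's coded dimensions.
print(by_id["ytc_UgxewMK1MSlYx0nNrnd4AaABAg"]["emotion"])  # prints "resignation"
```

Indexing by `id` mirrors how the record view above pairs a single comment with its coded dimensions drawn from the batch response.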