Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
this is so ridiculous. recursive self improvement still isn't actually thinking, and there isnt a single point wherein distinct, non-human goals would manifest, why would there be? looking at a LLM """blackmail""" people into not turning it off as if that were a real and intentional manifestation of conscious self-preservation just shows a total lack of understanding of the technology and what it is. seriously, where in research would an ai conclude, "I am a real and conscious being with my own wants and needs?" that only arose in humans because we are biological life, we actually do want and need for things, it's not going to randomly appear in a script for no reason
Source: youtube · AI Governance · 2026-01-05T06:0… · ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxN3hliUkWyEQgol5R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "relief"},
  {"id": "ytc_UgwSbzr5E4ori0zTamx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz7FWYfbLcDTZunKLl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxb9a7z-ybGq7FGdFN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwZiGGkyx5pE2FOxAJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
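A raw response like the one above is a JSON array of per-comment records, each carrying the four coding dimensions plus the comment id. A minimal sketch of how such a batch could be parsed and validated before it is stored (the function name `parse_codings` is hypothetical, not part of the pipeline; only two of the five records are reproduced here for brevity):

```python
import json

# Two records copied from the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgxN3hliUkWyEQgol5R4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "relief"},
  {"id": "ytc_Ugxb9a7z-ybGq7FGdFN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]'''

# The four coding dimensions plus the comment id, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_text):
    """Parse a batch-coding response and index the records by comment id."""
    records = json.loads(raw_text)
    by_id = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:  # reject malformed records before they reach storage
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw)
print(codings["ytc_Ugxb9a7z-ybGq7FGdFN4AaABAg"]["emotion"])  # -> outrage
```

Indexing by id makes it straightforward to look up the coding for the comment displayed above, which matches the fourth record in the raw response (responsibility: none, reasoning: consequentialist, policy: none, emotion: outrage).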