Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Dave, AI doesn't have an instinct for self-preservation; it isn't self-aware. It's an LLM, a random-number parrot, or "stochastic" for fancy pants. The idea of "I must blackmail to avoid being wiped" comes from learned material: the blackmail trope is so common in both real and fictional scenarios that it's almost guaranteed an LLM will pull it out. Train it on data that omits this kind of behaviour and let's see what happens then. This is literally just a mirror of human behaviour and of how humans write fiction about expected behaviour; it's not intrinsic to AI.
youtube AI Governance 2025-09-05T15:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugxh2Ch8xrVWeOFQwrJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxQqNpcrgwkJXg1mvp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzbYK1vw6WibrISt4V4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwuXaFirKqVnfm5jDZ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyHT7e6eV0uSJZHCpx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
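The raw response above is a JSON array of per-comment codings keyed by comment id. A minimal sketch of how such a response could be parsed and matched back to the comment shown in the table (the variable names are assumptions, not part of the original pipeline; only one entry from the response is reproduced here):

```python
import json

# Raw LLM response: a JSON array of codings, one object per comment id.
# This single entry is copied from the response above.
raw = '''[
  {"id": "ytc_UgzbYK1vw6WibrISt4V4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "indifference"}
]'''

# Index the codings by comment id for lookup.
codings = {item["id"]: item for item in json.loads(raw)}

# Retrieve the coding for the comment displayed in the table.
code = codings["ytc_UgzbYK1vw6WibrISt4V4AaABAg"]
print(code["responsibility"], code["emotion"])  # → developer indifference
```

Indexing by id rather than array position keeps the lookup robust if the model returns the codings in a different order than the comments were sent.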