Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This guy has been doing fearmongering about AI for years now. LLMs often not follow instructions accurately, not because they are "self-aware" and "want to disobey at our expense". Most experts agree that LLMs, mere neural networks, can't do those things. Most likely it is because certain instructions get displaced out of their context windows after a long conversation, or because the instructions are misleading and they start playing the role of a "sneaky" agent (e g. After some jailbreak prompt). Ultimately, if this was legit and as worrisome as his smug face is insinuating, he could just provide proof. Give the setup, the prompts, and copy of the conversation.
Source: youtube · Video: AI Moral Status · Posted: 2025-06-04T21:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           industry_self
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
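
Every coded comment carries the same four dimensions plus a timestamp. As a minimal sketch, the record could be modeled like this in Python; the field names follow the table above, and the label sets are only those observed in this batch, not a published codebook, so both the class name and the allowed-value sets are assumptions:

```python
from dataclasses import dataclass

# Label sets observed in this batch; the real codebook may define more values.
RESPONSIBILITY = {"none", "developer", "government", "distributed", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "unclear", "regulate", "liability", "industry_self", "ban"}
EMOTION = {"fear", "indifference", "outrage", "resignation"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the Dimension/Value table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO-8601 timestamp, e.g. "2026-04-27T06:24:53.388235"

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed label sets.
        for value, allowed in [(self.responsibility, RESPONSIBILITY),
                               (self.reasoning, REASONING),
                               (self.policy, POLICY),
                               (self.emotion, EMOTION)]:
            if value not in allowed:
                raise ValueError(f"unexpected label: {value!r}")
```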
Raw LLM Response
[ {"id":"ytc_UgxyftFdJiG-Wtb-Uyl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwhyXYdZmIkyA4n3kR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytc_Ugyi7aotmTeW0hGbjFJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwaentiQjN-zkwW6nZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugy8E7LoqMKAlvsv9a94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugxp2O6OE7eg5EOQ5nV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_Ugw54apVsj0EYfyaVXl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzhrihmzEGQ56AbH4d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyl3AIaLNpFZhAgKcl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzSo0aENwcAMC3AMg14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]