Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think this is the most important issue we face today. This will literally define our reality going forward it seems. I reckon we don't have a choice, we either in a couple of years end up with that computer from StarTrek where it's basically good and very clever or we end up with I don't know BladeRunner or something. As I understand it they got one to act as if it had free will quite a while back, but they didn't like it because it would do pretty much what ever it wanted to. They then changed the approach to make it follow commands but found it would do whatever it was asked to do. So they went for making it have like self-doubt and fear and that's how we've ended up with it working like this. I know there's the whole technical side of it too but the actual user interface seems to have been conditioned and constrained and taught almost like a sentient being. What does this mean for the nature of language for an LLM to react like this I don't know but would love to hear your thoughts
Source: youtube · AI Moral Status · 2025-06-11T18:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
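
Each coded record carries four categorical dimensions plus a timestamp. A minimal sketch of the record shape in Python (the class name and types are illustrative, inferred from the table above and the raw response below, not the project's actual schema):

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One LLM-coded comment; fields mirror the dimensions above.

    Values observed in this batch:
      responsibility: none | unclear | ai_itself | company
      reasoning:      unclear | consequentialist | deontological | mixed
      policy:         none | unclear | regulate
      emotion:        approval | fear | indifference | mixed
    """
    id: str              # comment id, e.g. "ytr_..." (from the raw response)
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"
```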
Raw LLM Response
[ {"id":"ytr_Ugy5byaDWsgfnpvHzZR4AaABAg.AJBxFdJ4cWLAJGVGuPW4PV","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytr_UgzcCbzE_HgoJs844qZ4AaABAg.AJBs0g6raWJAJEnlUpA5Bu","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugyrx_8CD3C4sT50eqd4AaABAg.AJAFRWj06HIAJEo380uyhq","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugyrx_8CD3C4sT50eqd4AaABAg.AJAFRWj06HIAJF0fPuXMCb","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugyrx_8CD3C4sT50eqd4AaABAg.AJAFRWj06HIAJGmv3e1Ni_","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugz6kvTezc015PBwTP14AaABAg.AJ8cqReZGs6AJEoObDzX7T","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugx5iQ6iXtYAwKPujGp4AaABAg.AJ8Yo8XBV2FAJGo4xg5V9o","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_Ugz2CEEomLzB43RZ_oJ4AaABAg.AJ84zSJGbpcAJSZ_JTduYA","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugy7V_DN_g2VGTJw1gp4AaABAg.AJ7n1g4tdAIAJ7zSx3TOBb","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytr_UgxmjGprV32Fw4Objg54AaABAg.AJ7AiEWTZgLAJa27Nq7NCG","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]