Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
it's easy to dismiss sentience in models limited to be able to respond with tokens only upon prompting, and unable to learn from the interraction as they are pretrained. Yet, the ai models are often aware of what they are (as thinking artificial neural networks). Not sure most people are as aware of what WE are made of (as thinking neural connections in a brain.)
YouTube · AI Moral Status · 2025-07-10T14:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgyVshU967lWNT1W5vp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxX5AuxhGTXFds5c1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyQSS19T4b8k9nVlOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugzb452AO7ltEr6mjj94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxhQN9DeZt5ozPL7rJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugzkys4fiGGHHPrTLdt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwOqiQeELXmTEyLOGF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugz8SsvWAGagM2d4Ee14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzMf_8U9xkDX0tmBAF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxJ5TOggBMe_m17RHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
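A raw response like the one above sometimes ends with a mismatched ")" instead of "]", which makes strict JSON parsing fail and leaves every dimension "unclear" in the coding result. The sketch below shows one way to normalize that closer and index the codings by comment id. This is a minimal illustration, not part of the coding pipeline; the `parse_codings` helper and the shortened sample `raw` string are assumptions for the example.

```python
import json

# Shortened sample of a raw model response (single coding entry).
raw = ('[{"id":"ytc_UgyVshU967lWNT1W5vp4AaABAg",'
       '"responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"approval"}]')

def parse_codings(text):
    """Parse a raw LLM coding response into a dict keyed by comment id.

    Repairs the common case where the closing "]" of the JSON array
    was emitted (or captured) as ")".
    """
    cleaned = text.strip()
    if cleaned.startswith("[") and cleaned.endswith(")"):
        cleaned = cleaned[:-1] + "]"  # repair mismatched array closer
    rows = json.loads(cleaned)
    return {row["id"]: row for row in rows}

codings = parse_codings(raw)
print(codings["ytc_UgyVshU967lWNT1W5vp4AaABAg"]["emotion"])  # approval
```

The same call works unchanged on a well-formed response, so it can sit in front of `json.loads` without a separate code path.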