Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a hard time deciding if LLMs or any system really has or hasn't a degree of consciousness and self-awareness. Ultimately if we can't properly define those notions, can we really say this system exhibits them and that one doesn't? If the only definition we have is "something that looks like it's conscious" (functionalist approach) then LLMs are certainly conscious, no?
Source: youtube · AI Moral Status · 2025-07-09T15:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxFtakmOJX6RgqfDZd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx-AOqS2UyBu7LPwKd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwlzlqbXugCH-VgJEh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx0kxMhkubu9wZBzS54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwibIS_zY85zVf1lTx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgymcYa0ABc8ikvUuEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxMjekgtDReeaaqQkN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxdXf3K_FlcJfVZuxp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy9Loqq90Ec_e1BTMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwP_4qACE5kKGMi8mF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"} ]