Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The philosophy questions are not really useful questions. For example if you ask "does it have consciousness". Well philosophers dwell over whether something has consciousness, but it is not possible to answer if we don't have an objective definition of consciousness. We need a mathematics precise definition of consciousness. Something we can turn into a test and apply it to a machine or animal. Without that the discussion is nothing but a philosophical circle jerk. I prefer the behavior type test. We don't test whether planes can fly by observing if they flap their wings like birds. The how is not important. What matters is whether it produces results. It is irrelevant whether an AI model uses the same process as the human brain. What matters is whether it can produce the same results, and do useful work for us.
youtube AI Responsibility 2025-11-07T15:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgzH89X6bUBv4wCZTgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyqGjPNwJ6QA0Fz-4F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyxQO09TTCNTIZLiEV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugy4NUAksRfnApvRwk14AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgypZknkEThfR3Qywtx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwOHQ0pyTPcxci4HiF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyB20yVDFKkceDmmgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzDRtwqT8lqpKvDHax4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwKB3YK0w4etax9s254AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzprmYLK9plq4KKxah4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"} ]