Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A good test of consciousness would be to give the AI ability to reply and ask it's own random questions of the world. Allow it to search the internet and find area that it is interested in. Watching it observe the world and comment. It's the obvious agency of something that shows consciousness. Creating it's own goals. Currently these chat programs just predict next best word in a sentence to satify a question. So it is totally based on user input with no freedom in behaviour.
youtube AI Moral Status 2025-01-28T17:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzR_mdbjCmU12TBmVp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw1zfMgp5NYRy_8kSZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwAddGBB8duUhwgURh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwpkYYeNKSdpDmjPF14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw-4n_jcSid-o5Goal4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxVDVxVMHdBZJtA73x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyWjtpVKluBk1PuvNZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwK3i0Uo0JcN48-okl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzuBPDWIxzIw4ZSikB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwsE67HyUhO63d9kBJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
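To relate the raw response above to the per-comment coding result, one can parse the JSON array and look up the record for a given comment id. The sketch below is a minimal illustration of that lookup (the `codes_by_id` helper is hypothetical, not part of the tool); the id and field values are taken from the response shown above, truncated to a single record for brevity.

```python
import json

# One record from the raw LLM response above (the full response is a
# JSON array of such objects, one per coded comment).
raw_response = (
    '[{"id": "ytc_UgzuBPDWIxzIw4ZSikB4AaABAg", "responsibility": "none", '
    '"reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}]'
)

def codes_by_id(raw: str) -> dict:
    """Parse the model's JSON array and index each coding record by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = codes_by_id(raw_response)
record = codes["ytc_UgzuBPDWIxzIw4ZSikB4AaABAg"]
print(record["reasoning"], record["emotion"])  # consequentialist approval
```

Indexing by `id` first makes it straightforward to inspect the exact model output for any single coded comment, which is what this view is for.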