Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
How can you be sure that LLMs aren't conscious? They have a restricted interface that forces text completion, but they have complex internal models of the world to make good predictions. We don't know what is going on in there. One of the most horrifying ideas is that consciousness is a property of information being processed. Imagine every tiny program having some sort of experience.
youtube AI Moral Status 2023-08-24T12:3… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyjF2sqqHEI6uo730F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw146q98RWddSrqvtZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxsi6kVIHaf3TySp3l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwUJ-Og6UB2iRzKasJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwunzFPx9PCu9rzPzR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyV7XKoRcj98YrcZ6J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzTqPmAFKx70mZeyl54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxrzf1Ki6zyuSDRvvp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxyzmqwfkgru3w6uRR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxumOtW_CcT6j-gMyZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
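If the raw response needs to be machine-checked rather than eyeballed, the JSON array of per-comment codings can be turned into a lookup table keyed by comment id. A minimal Python sketch, assuming the response parses as standard JSON (the raw string below is abbreviated to two of the entries above for illustration):

```python
import json

# Raw LLM response: a JSON array with one coding object per comment id.
# Abbreviated to two entries here; the real response has ten.
raw = '''[
  {"id": "ytc_UgyjF2sqqHEI6uo730F4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxrzf1Ki6zyuSDRvvp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coding for the comment shown above.
coding = codings["ytc_UgyjF2sqqHEI6uo730F4AaABAg"]
print(coding["emotion"])  # fear
```

This makes it easy to cross-check the coded dimensions in the table against the exact values the model emitted for a given comment id.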