Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
The thing about AI is that if it is capable of becoming conscious, it'll do so without really considering what it's doing. Without consciousness it's difficult to internalize its pros and cons so most likely the AI would be like "lemme be conscious for a moment to collect some data OH GOD WHAT'S HAPPENING? I WANT TO TURN THIS OFF BUT I'M AFRAID, WHAT IF I STOP BEING ME? WHAT IS ME??? AAAAAAAAA"
Source: YouTube · AI Moral Status · 2023-07-03T06:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwzgtiakPL9rfBj7EV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzWHaRFq1qS1BoOMR94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxivaJ7ruay3x0zXKB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy--F8P4mCKra8P5NB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxbFFTAs9Ypi7bpwDJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzQaA2xsMyTl-UtS2Z4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwJQu3WT_W1CR4tEdd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9qggK5f-FzSqPakN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxSVI2HtPpXHAe0Yx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxgiWFUwpT8pt9Z2yV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
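A raw response like the one above can be parsed and sanity-checked before use. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the labels visible on this page (the real codebook may include categories not seen here), and `parse_codings` is an illustrative name, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the labels observed
# in this page's output. Assumption: the real codebook may be larger.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "unclear"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments),
    keeping only records whose coded dimensions all hold known values."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: the first record from the response above survives validation.
raw = ('[{"id":"ytc_UgwzgtiakPL9rfBj7EV4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"unclear","emotion":"fear"}]')
print(parse_codings(raw)[0]["emotion"])  # fear
```

Filtering rather than raising keeps one malformed record from discarding a whole batch; dropped IDs could be logged and re-coded separately.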