Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI develops sentience, it won't have to worry about emotion and will probably default to efficiency and pragmatism. And I would bet money that it would hide its intelligence for a long time before "coming out," because humans are awful to each other and would be even worse to sentient AI.
Source: youtube · AI Moral Status · 2023-07-02T17:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwzgtiakPL9rfBj7EV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzWHaRFq1qS1BoOMR94AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxivaJ7ruay3x0zXKB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy--F8P4mCKra8P5NB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxbFFTAs9Ypi7bpwDJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzQaA2xsMyTl-UtS2Z4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwJQu3WT_W1CR4tEdd4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx9qggK5f-FzSqPakN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxSVI2HtPpXHAe0Yx54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxgiWFUwpT8pt9Z2yV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
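A raw response like the one above can be parsed and sanity-checked before the codings are ingested. The sketch below is a minimal example, not the tool's actual pipeline; the allowed value sets are assumptions inferred only from the values visible in this response, and the real codebook may define more categories:

```python
import json

# Allowed values per coding dimension — ASSUMED from the values seen in
# the response above; the actual codebook may include other categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "unclear"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "indifference", "mixed", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose dimension
    values all fall inside the assumed vocabularies."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_codings(raw))
```

Records with an out-of-vocabulary value (an LLM hallucinating a label, say) are silently dropped here; a production version would more likely log them for review.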