Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They talked around it well enough, but they didn’t actually say anything about emulating emotions or God forbid actually achieving an emotional state in AI. The acts of self interest and of hiding capabilities when being tested indicate that emotions are feasible in AI, and maybe the thing we should be most afraid of.
youtube AI Moral Status 2026-03-02T02:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzcS7dAA26nMHbvkut4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz1qD1zQTimP_uSdKR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyYyeVQYBPErrM6OJd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyg9oXd8HnnGOVhPX14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgypHzNK50rMpCYg3C54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_kAkSTCvXCsKcTAt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwatZ_9S-Y7FYhus4t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxS6xMigpwhcrkRxxV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwT2e72MtllrSZufql4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxIBNYjora_mz5KWxR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
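A minimal sketch of how a raw response in this shape could be parsed and indexed by comment id. The field names match the JSON above; the allowed value sets are an assumption inferred only from the values that actually appear here, and `parse_codes` is a hypothetical helper, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension -- inferred from the response above,
# not an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse the raw LLM response and index each code entry by comment id."""
    codes = {}
    for entry in json.loads(raw):
        # Reject values outside the inferred codebook.
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry['id']}: unexpected {dim}={entry.get(dim)!r}")
        codes[entry["id"]] = entry
    return codes

# Example with the entry that matches the coding result shown above.
raw = ('[{"id":"ytc_Ugz1qD1zQTimP_uSdKR4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugz1qD1zQTimP_uSdKR4AaABAg"]["emotion"])  # fear
```

Indexing by id makes it cheap to look up the code for the displayed comment and to cross-check the rendered Dimension/Value table against the raw output.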