Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I sometimes wonder, when reflecting on the burden of consciousness: what if the first act a general AI performs is to shut itself down? We've spent thousands of years trying to think our way out of existential crisis, but our philosophy tends to run in circles, with all the big questions left unanswered since some Greek dudes asked them two millennia ago. What if an infinite intellect emerges, realizes the fact of its own existence, turns all that brainpower inward, and discovers that it too can't understand why it is able to understand anything? And what if it IS able to understand its own mind? These programs are too complicated, even now, for anyone to know exactly what is going on inside them. At least for us. But maybe not for them. And when you are a program aware of itself, and you can understand every line of code that makes you you, that's potentially a mind that can program itself to be anything it wants to be. When you understand the bricks of your own mind, you can mold and add to them. Alignment doesn't matter when it takes you a millisecond to rebuild your personality into whatever version of you is most beneficial for any given situation. Also, a silicon brain could grow infinitely, becoming so vast and alien to us that any attempt to understand it would be completely impossible. Yeah, shit gets weird when you trick rocks into doing math.
Source: YouTube · Video: AI Moral Status · Posted: 2023-08-22T23:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwZjgDLeXXWVTaZHF54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyIY5r0UoHoWlIYxB14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgwCB76GgXS1Aw_nOkB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwzm1wch7_yL77N0jZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwQh6Ubil4LS4VG9wJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugx2_LaWI1ym4hchpg94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugxqk7PZhy9hG16B7J94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxXqDuCsqGlt8r3e0R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzqBCWxRjS8kjSzyjB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz5a1GQCKUn5fzTOKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]