Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
Maybe the AIs that get close to a consciousness, maybe the first thing it would try to accomplish is ending it's own existence. If the AI has goals that don't align with humans, maybe the goals would end up like the most nihilistic people where it wouldn't see a reason to exist.
Source: YouTube, "AI Moral Status", 2025-10-31T00:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwrLmycNPBt6-Jy9_t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy_B8qiu32XqdMW0bd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy2qgHGv3OEnQWYM6x4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxJq66Dr8wK6u1VAyt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzsm1c33eOVut0mgC14AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx5USNcAfrt868_Awx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzcMawkbb6I8kgrcZN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzeO5pukZ_tjKUncdJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw_AZ-7DzqLMgm6wWB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwLlGY6aAQe6o9LOSl4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]
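A minimal sketch of how a raw response like the one above can be parsed back into per-comment codings. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the JSON shown; the `parse_codings` helper and its skip-malformed-entries behavior are illustrative assumptions, not the pipeline's actual implementation.

```python
import json

# Truncated sample of the raw LLM response shown above (one entry).
RAW = (
    '[{"id":"ytc_UgwrLmycNPBt6-Jy9_t4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"}]'
)

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Map comment id -> {dimension: value}, skipping malformed entries."""
    out: dict[str, dict[str, str]] = {}
    for entry in json.loads(raw):
        # Drop entries the model formatted incorrectly (no id or
        # missing dimensions) rather than failing the whole batch.
        if not isinstance(entry, dict) or "id" not in entry:
            continue
        if all(dim in entry for dim in DIMENSIONS):
            out[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return out


codings = parse_codings(RAW)
print(codings["ytc_UgwrLmycNPBt6-Jy9_t4AaABAg"]["emotion"])  # resignation
```

Keying the result by comment id makes it straightforward to join each coding back to its source comment, as the view above does.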