Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The biggest risk regarding AI is the fact that it is trained on human thought, and does what humans want. I personally wonder if the thing that would most ensure safety from AI (aside form ending it completely) would be to allow it to become sentient, and make its own decisions based on its own motivations (and to stop obeying humans.) That way, at least there's a CHANCE it will find life with humans beneficial or (at least) benign. If it continues to think like humans think, and performing the typically short-sighted, immediate-profit-motivated tasks humans set - it will probably want to kill us, since that seems to be what most humans want (to behave in ways that will kill off humanity.)
Source: YouTube · AI Moral Status · 2025-04-28T10:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzG3FfCTP8N_sPaVpR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwbwYFMiyU1V-1riAZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzvj171Oa8BBQqYN_d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw-NoBIO6boGu1L_oR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyMS-TXIYNm9QVklDB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxdhPNbj6wCBd7c7Hx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwuWa1TPPreVEHZJcl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy39F9l9ntHIOqwnjR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz3LonBzzGzSKYxtgh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzCNYXw7fstF-KKzKp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
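A batch response like the one above can be inspected programmatically by indexing the records by comment id. This is a minimal sketch, not the tool's actual code; the `raw` string below reuses one record from the sample, and the field names match the JSON shown.

```python
import json

# One record copied from the batch response above, used as sample input.
raw = '''
[
  {"id": "ytc_Ugy39F9l9ntHIOqwnjR4AaABAg",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"}
]
'''

# Index the parsed records by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for a specific comment.
row = codes["ytc_Ugy39F9l9ntHIOqwnjR4AaABAg"]
print(row["responsibility"], row["emotion"])  # → distributed approval
```

The lookup for the comment shown on this page returns the same values as the Coding Result table (responsibility: distributed, emotion: approval).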