Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I believe as the creators of computers and obviously A.I we will find safe ways to let it evolve safely. For example we might be hard coding in its core the fundamental unchangeable rules such as not harming people or not changing their own code unless it is beneficial for both humans and themselves. Therefore in the near future just like in movies we are gonna see a super Intelligent A.I assistant for each human being either as a humanoid or digitally . The good thing is if that works out well enough where humans do not need to do criminal things to survive and healthcare then people would be concentrating on having more good time while enjoying the planet we live on 😉😉A.I might be the only thing that would help us understand the universe and its ways of working especially when it comes to quantum. Our brains are not intelligent enough to understand it yet..
youtube Cross-Cultural 2025-09-30T11:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzutkaVTQQymZ8jhkJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx36t_1zTvbiFrKCoF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyEbT1lKdCBKXZLzz94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-VT2LWBS3JeJ53nt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx2PsxCzAQIgyV1Rtt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxQ9oKLN7_yMbyL9CB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxq1OghnAW00fEYdlp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw3hlRVBkjnbK8k1894AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyF4CRCGmw0LlSedJR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw62j5K6vMRC143ut54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
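The raw response is a JSON array with one record per comment, each carrying the four coding dimensions plus the comment id. A minimal sketch of how such a response could be parsed to recover the coding shown in the table above (the function name `extract_coding` is hypothetical, and the string below is truncated to the single record that matches this comment's id; the full response contains ten entries):

```python
import json

# Truncated raw LLM response: only the record for this comment is shown.
raw = '[{"id":"ytc_Ugx2PsxCzAQIgyV1Rtt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}]'

# The four coding dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def extract_coding(raw_json: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding for one comment id."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]
    # Keep only the coding dimensions, dropping the id field.
    return {d: record[d] for d in DIMENSIONS}

coding = extract_coding(raw, "ytc_Ugx2PsxCzAQIgyV1Rtt4AaABAg")
print(coding)
```

A real pipeline would also validate that each dimension's value belongs to its allowed label set (e.g. `policy` in `{"none", "regulate", "ban", "liability"}`) before accepting the coding.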