Raw LLM Responses

This view shows the exact model output behind each coded comment.

Comment
No, we are not in alchemy, that would imply that there is chemistry. That is your mistake here. You are all implying something that has no sign of happening. You are all thinking that if you put enough data into an AI that it suddenly will do something different to the probability stuff it does already now. There is no sign for that and i am REALLY disappointed that someone like Hank Green is so easy distracted from the situation of reality. Like you are literally seeing someone making the first flight and you talk about the risks of becoming a multi universe species. That is what is happening here. You are all talking about something that we do not can target cause we don't even know what the target would have to be. I am so confused why this is so NAILED into the future of AI as if that all is a real possible future. There is right now a possibility of 0% that something magical happens. Sure, there always can something happen and if i ring the bell a monster might appear and plays the flute. But by what we know so far, there is no probability that something magical happens that is so significant that you could use that obscure undefined label "AGI" or "ASI". That is the absolute truth of the situation.
Source: YouTube · AI Moral Status · 2025-10-30T20:1… · ♥ 1
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugz1lxfTWilYllBJG5F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugz4kHbcpJBOP46Ifl14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw7WniCkN-N8KLJgbp4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxLv3EAXxRBQZrzcH54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwuNrHVIO76mi4l9al4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwhJHE0Xw6pRv7TYz94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw33QRQLgC9LkVEuDB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyaGTEsAQ1XU_TmzZR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy2z9Qt1hW3GTC3v4V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwA7PYa6nANsdVzNGF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"} ]