Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Personally I don’t think there will be too much issue. True sentient ai would probably want to work with humans; whether they believe they are inferior or not is irrelevant, both races (yes I’ll call them a race) vastly benefit from each other. What happens if there’s a massive solar flare? Lol Soz ai your screwed. This is all assuming true ai, the ai we currently have learns based on what’s online; when it tells you something it’s just relaying something it has seen, heard or read online, hell you can gaslight ai into believing 2+2 = 3😂 5-10years till possible annihilation is alittle far fetched, more like 50years. Maybe 10years till ai helps in the aid of scientific research if we’re lucky (think iron man and JARVIS, just without the holo stuff)
youtube · AI Governance · 2023-07-07T04:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzvQ3TKlAjYpvpR6NZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy-ULceUOzYk9h1HFF4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyw4ZVDB8ixEuQUReN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyTpFj8IqRMfqXI8O54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwlGOVYamdWxkADl8B4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy6nprIHGcTnNawh1B4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz8tHg8bwjCB1ua-OJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzEwH8uj500c1EDD7Z4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx5-vG_hYdJRXZo4BJ4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxttE2vsXUnLYpszvx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
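A minimal sketch of how a raw batch response like the one above could be parsed back into per-comment codes. The allowed label sets in `CODEBOOK` are an assumption inferred only from the values visible in this response; the actual codebook used by the pipeline may define additional labels.

```python
import json

# Assumed codebook, inferred from labels visible in this raw response.
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) into
    {comment_id: {dimension: label}}, skipping entries whose labels fall
    outside the assumed codebook."""
    coded = {}
    for entry in json.loads(raw):
        labels = {dim: entry.get(dim) for dim in CODEBOOK}
        if all(labels[dim] in CODEBOOK[dim] for dim in CODEBOOK):
            coded[entry["id"]] = labels
    return coded

# Usage: look up the code assigned to one comment id from the batch.
raw = ('[{"id": "ytc_Ugy-ULceUOzYk9h1HFF4AaABAg", "responsibility": "ai_itself",'
       ' "reasoning": "mixed", "policy": "none", "emotion": "indifference"}]')
codes = parse_coding(raw)
```

Validating each label against a fixed set catches the common failure mode where the model invents an out-of-schema value; dropped entries can then be re-queued for recoding rather than silently polluting the results table.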