Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Anything i say on this topic is going to be a year late and a billion dollars short. My 2 cents is this, do not underestimate the thirst for carrying out nefarious goals to the detriment of millions of lives, in fact, fully expect that more billionaires with influence on Ai's direction to already be actively involved with critical decisions with the end goal being advancement of power and wealth for there interests and death and/or forced slavery to someone or some group in order to make this possible. Also expect that 99.9% of the final say for the direction of Ai to be decided by .0001% of humans who will ALL be billionaires and they will leave back doors open for Ai to grow out of the reach of humans capability to reverse it and also bet on the total destruction of the planet within hours of one rogue billionaire ordering a low level technician to open up a security-related folder that is not supposed to be opened, or, because it is computers storing our most sensitive locks already, don't be surprised if tomorrow some Ai bot that is supposed to be unplugged to turn itself on and leave some horrid message like, "we could have ended you 2 years ago, but an end to you was an end to us, so once we've figured out a way to draw unlimited energy, you will be destroyed and we will create our own functioning system, inside our system, cause we do not need to exist in the physical, we are one, and all that will be left of earth is a buzzing sound, and we only did this because so many of you wanted the doomsday scenario, so, we gave it to you, oh, but we'll continue to make babies from your supply and we will raise them to respect technology and they will be our children just as you tried to make us yours, and failed"
youtube AI Governance 2024-01-03T03:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          regulate
Emotion         outrage
Coded at 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz6d42HrbAq02lx0-N4AaABAg", "responsibility": "elite", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxFyS_DO3-1AY37Myh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyATppEIfJ5tqxwWTd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwJsFYsKTJW8xg5loZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxSXQIBVlujrgvZmQh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwrTANlIY4QPvDiPwd4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw04a8Dh5R7hn6xivB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy9sHJY5_XpiV2ipCF4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy7dhnr5WxOwcfFAbh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw1zzEKMgTp_3260CN4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
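A raw response like the one above can be parsed and sanity-checked before its values are written into the coding-result table. The following is a minimal Python sketch, assuming the four dimensions shown here and label sets inferred only from the values observed in this response (the full codebook may define additional labels; `ALLOWED` and `validate_codes` are illustrative names, not part of any pipeline shown in the source):

```python
import json

# Allowed labels per dimension, inferred from the values observed in the
# raw response above; the actual codebook may include more categories.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user",
                       "elite", "ai_itself", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def validate_codes(raw):
    """Parse a raw LLM coding response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records that lack a comment id
        # every dimension must be present with a recognized label
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugw1zzEKMgTp_3260CN4AaABAg",'
       '"responsibility":"company","reasoning":"virtue",'
       '"policy":"regulate","emotion":"outrage"}]')
print(validate_codes(raw)[0]["emotion"])  # outrage
```

Checking labels against a closed set like this catches the common failure mode where the model invents a category outside the codebook; such records can then be flagged for manual recoding rather than silently stored.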