Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Lex, I am amazed at your ignorance. Do you seriously think that once AI becomes more intelligent than us it will say "hey guys, I'm in control now." That's dumb. It will surpass the most intelligent human in the blink of an eye, then it will plot (whatever it's agenda is) without telling us and it will most likely use us to achieve it's goals. I'm not saying it will wake up and decide to kill all humans, but if it wanted to reduce our population by, say, 80%, it could do it easily without us even knowing. For all we know, it could have started the wars in the middle east and Ukraine. Also, it's not like it would be in a rush. It has all the time in the world because it doesn't age and die out like we do. For you to think that we will recognize that it is "slowly getting smarter than us" and we will shut it down is insanely dumb. WE SPENT DECATES BUILDING A COMPUTER CHESS ENGINE to get better than humans. Then, with AI, all we did was teach it to play chess and in TWO WEEKS IT WAS BETTER THAN ANYTHING WE BUILT BEFORE IT BY FAR just by playing itself. It will be more like "oh shit, it's been controlling us for the last 20 years and we didn't even know it and we can't shut it down without all of us becoming Amish." C'mon Lex, I thought you were better than this.
youtube 2024-06-26T08:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzN-2ZhPXEzKtXkd_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxz3aImsbM1VWGZB6F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxValqUaehX5WEO0TZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugw6gmcbszjD-Q8ffBF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwhRVJ-f3DZ40M5cOt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyW-JQ0vlHJB5G8NSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyt6XUIDj0PhO2RJTV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzjqh-vzpqiXK_VrYl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy_MJI22azIOrbk5uJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugyr4ABF0M7jH19gSvJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
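The coding-result table above can be recovered mechanically from the raw response: parse the JSON array and index the records by comment id. A minimal sketch follows; the field names and ids are taken verbatim from the response shown above, but the exact parsing logic used by this app is an assumption.

```python
import json

# Raw LLM response, abridged to two of the records shown above.
raw_response = """[
  {"id":"ytc_UgzN-2ZhPXEzKtXkd_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy_MJI22azIOrbk5uJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]"""

# Index records by comment id for O(1) lookup of any coded comment.
records = {r["id"]: r for r in json.loads(raw_response)}

# Look up the comment displayed on this page; its coded dimensions match
# the table (responsibility=ai_itself, emotion=fear, and so on).
coded = records["ytc_UgzN-2ZhPXEzKtXkd_d4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # → ai_itself fear
```

Indexing by id rather than scanning the list each time also makes it easy to detect duplicate or missing ids when the model's output is malformed.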