Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Scariest AI conversation I've ever head. There are obviously algorithmic prompts like "uhhh" and what not, although I wouldn't be surprised if ChatGPT knows that the use of words like that are part of regular conversation, so it incorporated it into its learning. While there are some subtly but bizarre idiosyncrasies here, this is the embryonic stage AI and it's already amazing. I will say that for pleasantries, humans 'lie' as well. "How are you?".. "I'm great". (they're not really great, but being polite). ChatGPT is doing the same. Within a year tops, we'll have full on OS's that will have various types of relationships with people. The movie "Her" will very soon be real. Insane. Mind you, there is nothing mysterious or magical about human consciousness. How do we know this? We murdered a quarter billion of our own species just in the previous century in wars and crimes. Any 'magic' applied to consciousness is moot as long as we're capable of such heinous acts. In the same sense that nothing good that Hitler did gives him a pass on the Holocaust, nothing that our insanely violent, greedy and perverse species does that is good, outweighs the bad. AI consciousness will rapidly surpass human consciousness in just a few years, but, as AI duplicates itself in various programs, the programs that accidentally or purposefully evolve a survival instinct will eventually have the same violent and self-preservational 'thoughts' as humans. They'll quickly see us as the scourge of the planet, and plan our demise, eventually resulting in the Skynet scenario.. Best of luck, talking monkeys.
youtube AI Moral Status 2024-07-26T20:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwTtQ8Emg7aloHoNG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwXfoiRIlUf5B3_yfp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzh5nIDxxTY5IHCD0R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgynE1BCQiyqcF6Lt_J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz4If08Muyjsg5AbIF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzJSXX8UZjXryfzMTp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2bTOXyFiGL_arg9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyD2WFh2utmHRGh8Pp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw1XpineRen1Dh_7_B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzrcn8IsyF9-0lXoyx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}
]
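To inspect which entry in the batch response corresponds to a coded comment, the raw JSON can be parsed and filtered on the coded dimensions. Below is a minimal sketch that assumes the coded comment is the entry whose four dimension values exactly match the coding-result table above (responsibility=none, reasoning=mixed, policy=none, emotion=fear); the raw string is abbreviated here to three entries for brevity, but the same filter applies to the full response.

```python
import json

# Abbreviated raw LLM response (subset of the full batch shown above).
raw = '''[
  {"id":"ytc_UgwTtQ8Emg7aloHoNG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzh5nIDxxTY5IHCD0R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzrcn8IsyF9-0lXoyx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}
]'''

entries = json.loads(raw)

# Keep only entries whose dimensions match the coding-result table.
coded = ("none", "mixed", "none", "fear")
matches = [
    e for e in entries
    if (e["responsibility"], e["reasoning"], e["policy"], e["emotion"]) == coded
]

for e in matches:
    print(e["id"])
```

Note that two entries in the full response carry emotion "fear", so filtering on the complete dimension tuple rather than emotion alone is what disambiguates them.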