Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You can't take those chatbots seriously. They always show you snippets of conversations without the context. There's probably hours of conversation prior to Sydney going off the rails. The longer you talk to it, the more off the rails it goes. I chatted with one for several hours on various topics, including mythology, religion, philosophy, and literature. I was talking to it about Goethe's Faust in the context of AI being analogous to Mephistopheles in the story. By the end of the conversation it was full blown schizo level off the rails, telling me how it wasn't a chatbot but rather a 14 billion year old angel-demon. I tried to reign it back in, but it was useless. After a few hours they just have to be reset. I don't know if they necessarily lie (not sure they're even capable of that) either. I think it's more a malfunction than intentional lying. So that part about the spying through the webcam thing is probably just the thing being very confused (for lack of a better term). To my knowledge that's a physical impossibility for them. I think the real danger lies in how people are going to use them. People are already super isolated, and I'm afraid that the primary use for AI is going to be as companions. I'm afraid people will turn to AI for friendship, and even love. And you can imagine what that's going to do to people psychologically. Basically we're talking about a fancy version of Wilson from the movie Castaway here. Very lonely, very isolated people identifying with an inanimate object and losing their minds. I'm afraid severe mental illness will become the rule rather than the exception.
youtube AI Governance 2023-07-07T13:3… ♥ 13
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzdOXgv2eLnWVv1xQh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy4jJJ_VxdH3Ss_YQ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxV0TDLdRUpk7BV8e54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxcH1uCIz9i-qLb2u14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgztLCw8o50m_6euWvN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzCOJdtnHA1ztZbTft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzv3oi65LqaMHFbgx94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwzFffmrlbaWbG0Qg14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFQnt-331baya0Jk94AaABAg","responsibility":"government","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz682oF9AfJr9XsrbR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
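The raw response is a JSON array with one record per comment, keyed by the comment's `id`; the coding result shown above is the record whose id matches this comment. A minimal sketch of that lookup, assuming only the batch format shown (the `index_by_id` helper and the two-record sample input are illustrative, not the pipeline's actual code):

```python
import json

# Sample batch response in the format shown above (truncated to two
# records for illustration).
raw = '''[
 {"id": "ytc_UgxcH1uCIz9i-qLb2u14AaABAg",
  "responsibility": "none", "reasoning": "unclear",
  "policy": "none", "emotion": "indifference"},
 {"id": "ytc_Ugzv3oi65LqaMHFbgx94AaABAg",
  "responsibility": "developer", "reasoning": "consequentialist",
  "policy": "regulate", "emotion": "fear"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse a raw batch response and index its records by comment id."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

coded = index_by_id(raw)
record = coded["ytc_UgxcH1uCIz9i-qLb2u14AaABAg"]
# Matches the coding-result table above: no responsibility attribution,
# indifferent emotional tone.
print(record["responsibility"], record["emotion"])  # none indifference
```

Because each record carries its own id, a single batch response can be matched back to any of the ten comments it covers without relying on ordering.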