Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI isn't as advanced as shown in this video, these examples such as "Sydney" and "her" falling in love with him in a menacing manner are ways you can get the bot to speak if you know the necessary means to trick it to speak that way. The problem isn't AI becoming too advanced, it's assuming that AI is more intelligent than it is that will result in big problems in the near future. Programs don't have feelings, you can't make them have feelings, they can only act as designed. In the case of "Sydney" feeling scared, feeling love, those are by design. It was mimicking words and phrases taught to it by users, and upon requests that were not shown. This is something that was accomplished relatively easily many years ago with programs such as Cleverbot. Again, the danger is still human ignorance and malpractice, "AI" (which is an incorrect, fantastical term for what they really are) is not going to take over the world and try to kill humanity, that role is still amongst our own. It should also be noted that GPT is a LANGUAGE MODEL, meaning it's designed to tell stories that you specifically ask it to, and mimic human speech (writing) patterns.
youtube AI Governance 2023-07-18T23:5…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugyt-5_pJOx1ut7b3XV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw-Axl2tNMvV8KJ3Bt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgysYzHOZ6pY-1tP5Y94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx9Kwwf4ESkISzWEFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzKY60NBr3u9IXrHZd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzi_HftftNIpKrhSdp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5eC9PpbZ-X3UADDR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwRZRAyQ4WbLtY23Yt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxsF5O15w9F6laPUyV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyRY1jCVS5hBzsYKxx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"resignation"}
]
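A raw response in this shape can be parsed and matched back to a single comment with a few lines of Python. This is a minimal sketch, not part of the tool itself: the `code_for` helper is hypothetical, and the sample string below reuses one record from the response above.

```python
import json

# Illustrative sample: one record copied from the raw LLM response above.
RAW = ('[{"id":"ytc_UgxsF5O15w9F6laPUyV4AaABAg",'
       '"responsibility":"user","reasoning":"deontological",'
       '"policy":"none","emotion":"mixed"}]')

def code_for(raw: str, comment_id: str):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw)  # the response is a JSON array of dicts
    return next((r for r in records if r.get("id") == comment_id), None)

codes = code_for(RAW, "ytc_UgxsF5O15w9F6laPUyV4AaABAg")
```

Looking up by `id` rather than by position keeps the match correct even if the model returns the records in a different order than the comments were sent.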