Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
its sad that when I was a kid I dreamt the day AI would help humanity, and now humanity is shunning it as if its a bad thing, guns aren't bad and people say the person holding the gun is bad. Well guns can't help people like AI can so that should say something. AI can be written to not do anything harmful, you can also create an encrypted AI to assure all other AI laws are in order, this would stop any possibility of AI being used for the wrong purposes, its quite simple. Code (not AI) can be written to keep AI in check, and that code can be encrypted with a key that can be randomized and never known by any human, apple did this with their encryption, which is why the US gov got all pissy. They don't have the key to give. Same concept can be applied to AI, all AI can be governed by code/law written for it, it will not be possible for it to over ride it no matter it's ability to 'self-learn' or be self aware. It looks to me like it's mostly content creators that are threatened by AI. Machine made things are much better than human made things, instead of being upset that jobs are going away, we should actually be excited about the possibility that we can be more creative with AI being present, and not working our lives away in order to survive. AI is the key to evolution, to think otherwise is ignorant. The examples of machines malfunctioning are all due to human error. AI would make better military decisions, if AI were there when these malfunctions happened they never would have happened.
youtube AI Governance 2023-07-19T03:5…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   user
Reasoning        consequentialist
Policy           regulate
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyt-5_pJOx1ut7b3XV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw-Axl2tNMvV8KJ3Bt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgysYzHOZ6pY-1tP5Y94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx9Kwwf4ESkISzWEFp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzKY60NBr3u9IXrHZd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzi_HftftNIpKrhSdp4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz5eC9PpbZ-X3UADDR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwRZRAyQ4WbLtY23Yt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxsF5O15w9F6laPUyV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyRY1jCVS5hBzsYKxx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "resignation"}
]
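The Coding Result above is presumably recovered by parsing this raw response as JSON and looking up the entry whose id belongs to the displayed comment (the values user/consequentialist/regulate/mixed match the entry ytc_Ugz5eC9PpbZ-X3UADDR4AaABAg, so that id is assumed here). A minimal sketch of that join, with the response truncated to one entry for brevity:

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Truncated to the entry relevant to the comment shown above;
# the id and values are copied from the raw response as displayed.
raw_response = """
[
  {"id": "ytc_Ugz5eC9PpbZ-X3UADDR4AaABAg",
   "responsibility": "user",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "mixed"}
]
"""

# Index the codings by comment id for fast lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Assumed id of the comment shown above (inferred from matching values).
comment_id = "ytc_Ugz5eC9PpbZ-X3UADDR4AaABAg"
coding = codings[comment_id]

print(coding["policy"])   # regulate
print(coding["emotion"])  # mixed
```

If the model wraps the array in extra text, `json.loads` raises `json.JSONDecodeError`, which is a quick way to flag responses that need manual inspection on this page.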