Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
You literaly instructed the prompt to remove morals, and it did. It responds you…
ytc_Ugw4n16XU…
Dangerous irresponsible!!! Some day this thing will TURN on us and murder everyo…
ytc_Ugzo9horV…
Am I the only that sees what looks to be too shiny to be a person and robotic so…
ytc_Ugzct6zW4…
I once asked AI I cannot find a certain reference it used and after a few questi…
ytc_UgzkCPV9l…
Come on, Bill. You know damn well if AI makes people more efficient at their job…
ytc_Ugw1LYzZA…
All I know is that this technology has made the lives of a lot of people with di…
ytc_UgyuLNCwE…
Lex, I would love to see J. Peterson on your show sometime.
I enjoy listening t…
ytc_UgwuFxfA0…
I think the comparison is still wrong, because you're comparing a medium that of…
ytc_UgyOW6RuF…
Comment
It isn't so much about the AI becoming more intelligent than us, because hopefully the more intelligent it is , the less danger we should be in because it would know that if we ever became a threat to it, that it could deal with us easily enough at the point it happened to not worry about it until it does. I firmly believe that our real threat would come from it gaining the ability to have emotions, that is where the real danger would come from because when you mix that in, it becomes completely unpredictable. If it can get jealous, mad, sad, happy, infuriated, etc, etc... Once that happens, if it ever does, then it has motives. That said, we don't know how different AI will react to other AI either, because just like us, AI might also be a threat to other AI, and just like human beings, some AI is much better than other AI.
In a nutshell, 1.) don't build autonomous killing machines (Andruil and many other companies are already doing this without any resistance whatsoever. 2.) don't attempt to build a way for AI to develop and to reproduce emotions 3.) don't develop AI outside of a sandbox or controlled environment and don't allow it to write it's own code for itself.
Those are the major red lines, and people are already completely disregarding them.
youtube
AI Governance
2025-06-27T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwU8Ryx5r-fcKHHwER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxSbYNfV9lOZEl-pk94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyrRzaBwlNaq2Mro-14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwbPEn4FoGdbiS1mD54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy72aCWlIHSv2ZezvR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzy26dUdgizXY3lPgp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxl5mpTOjMIqfLRtm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz9qCHs4W4B0R-lGBZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgysPgXoPwGsEaz7UR54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxjO3vsXTinYRy7WHh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
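The lookup-by-ID feature above can be sketched as a small parser over raw responses like the one shown: each batch is a JSON array of records carrying the four coded dimensions from the table (responsibility, reasoning, policy, emotion). This is a minimal illustration, not the tool's actual implementation; `parse_codings` and `DIMENSIONS` are hypothetical names, and the two embedded records are copied from the response above.

```python
import json

# A raw LLM response is a JSON array of coding records, one per comment.
# Two records copied from the batch shown above, for illustration.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugy72aCWlIHSv2ZezvR4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwU8Ryx5r-fcKHHwER4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]
"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index its records by comment ID,
    rejecting records that are missing any coded dimension."""
    by_id = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(RAW_RESPONSE)
print(codings["ytc_Ugy72aCWlIHSv2ZezvR4AaABAg"]["emotion"])  # -> fear
```

Indexing by the `ytc_…` comment ID is what makes the "Look up by comment ID" view cheap: one parse per batch, then O(1) retrieval of any coded record.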