Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples:

- "gods creation if flawed.. so of course anything the flawed ass hat created will …" (ytc_UgxAMZ6xF…)
- "The thought of self driving cars navigating Chinese streets choked with cars/bik…" (rdc_e26oy1g)
- "Simple as: The AI must tick a consent box with an EULA before it can interact wi…" (ytc_UgyEuqtvy…)
- "Wrong questions 🤣 The scientific evidences present lol 😆 We're in trouble now fo…" (ytc_UgwJQjJ3U…)
- "The government will use the fear of AI misinformation is crack down on free spee…" (ytc_UgxIJh6hi…)
- "I mean ... Ai 'art' is basically something made out of existed material and foll…" (ytc_Ugxz7Q6RB…)
- "Based on how clever the robot on the video was, i bet secretaries will be replac…" (ytr_UgwJHx81J…)
- "Its more than AI. IT'S huge complexes needingvmeag fresh water and electricity t…" (ytc_Ugx2P7C2A…)
Comment

> What a great discussion! relating to Asimov's laws of robotics: In my upcoming book on an AI Constitution I will clearly show that we certainly will loose control to an ASI. And therefore there is a need to develop a set of universal rules and implement them to any AI of whichever definition in order to ensure that the decisions and actions are in the individual interest of each one. This is my approach to try to shape a future which ensures the maintenance of human consciousness and free life. My approach will show the flaws of Asimov's and others approaches. I try to come up with a new own Law. If you are interested to discuss, please don't hesitate to contact me.

youtube · AI Governance · 2026-03-22T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxSp6Ls9VbI6OdwSHh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzpnnSl8HbwTc0o7Mt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzrfCmMWsyRHJo5mSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwp77NMGC6LAMyQCIN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxKk_z5K8KBHdGF9OR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxNqYTotGxlJvtBF6R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyBl8PztBJOfXXHLZR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwT813VBJ7fFC9Rv3l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyEBcceq8XHCQkTYpN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxj-KnIt6rwczLt8l14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
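The raw response above is a JSON array with one object per coded comment, carrying the four dimensions shown in the Coding Result table. A minimal sketch of parsing, validating, and looking up an entry by comment ID might look like the following; the allowed-value sets are inferred only from the values visible on this page (the full codebook may define more), and the `validate`/`lookup` helpers are hypothetical, not part of the pipeline itself:

```python
import json

# Allowed values per dimension, inferred from the values visible on this
# page; the real codebook may include additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "ban", "liability", "none"},
    "emotion": {"mixed", "fear", "approval", "outrage", "indifference"},
}

# A fragment of the raw LLM response shown above.
raw = ('[{"id":"ytc_Ugwp77NMGC6LAMyQCIN4AaABAg",'
      '"responsibility":"ai_itself","reasoning":"deontological",'
      '"policy":"regulate","emotion":"fear"}]')

def validate(entries):
    """Return (id, dimension, value) triples for any entry whose value
    falls outside the allowed set, or is missing entirely."""
    bad = []
    for entry in entries:
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                bad.append((entry.get("id"), dim, entry.get(dim)))
    return bad

def lookup(entries, comment_id):
    """Find a coded comment by its ID, as the lookup box on this page does."""
    return next((e for e in entries if e["id"] == comment_id), None)

entries = json.loads(raw)
assert validate(entries) == []          # every dimension has a known value
hit = lookup(entries, "ytc_Ugwp77NMGC6LAMyQCIN4AaABAg")
assert hit["emotion"] == "fear"
```

Validating each batch before it is written back to the coding store catches the common LLM failure modes here: a missing dimension, or a value outside the codebook.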