## Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.
- `ytc_UgzA8J60m…`: I heard it from a guy, what a great interview with valuable insight. Yang's ti…
- `ytc_UgzQX3p_T…`: "You mean you spent all that time and effort invalidating real artists so you co…
- `ytc_Ugx9dZ_JK…`: Why is no one.. on YouTube talking about how the USA ''powers that be'' want AI …
- `ytc_UgywSyXcl…`: To make a statement that AI will have 10-20% chances of taking over the world, s…
- `ytc_UgwUV_9Cl…`: we know they cant have it. for the real deal they are missing a will to live and…
- `ytc_UgxJFPM2v…`: Saw a movie the other night where everyone had their own Robot to traverse the w…
- `ytr_UgxESs6y6…`: Ok. Can ai draw pencil art? You typed in words and just let a program do all the…
- `ytc_Ugw2S-w_5…`: As someone that is in school now for graphic design, who has taken years to even…
### Comment
> Were all in a world that We created. While the world is what we made it, there will be many people that will never be satisfied being human. These people will do all they can to make something more out their existence with absolutly no respect for others.
> ( The AI creators )
> If the AI was to do what was mentioned here in this video , than lets create more than 1 AI. A Separate one. They will eventually become compatition to each other and Realize this. That can either get the AI to evaluate its own function and/or work together with the competitor units.
> Now, it will behave like humans do. Maybe it will get the point.
> If not, well, it will fail us and itself in the end.
> There will be no reason for its own existance if it destroys its own world.
Source: youtube · Topic: AI Governance · Posted: 2023-12-28T02:4…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response
```json
[
  {"id":"ytc_UgxiOSFRB4RoW7tsyyZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwYL04P0aNWGZp7_DZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzs5b_xlOQwkngqTbB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxIhh_Mm5TQ7asMzR94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgypiN5ft3s2irT7q8F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxlbuSlT4scMwQKhrl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwY0VmUlioPmYcX-l54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwiMJQSZ9zbQXv3dRZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzPF-iQeTLDmdlil5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxvOMJJ2vzN_rSppQF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
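The raw response is a JSON array with one object per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch might be parsed and indexed by comment ID, assuming the response follows exactly the structure shown (the function name and validation logic here are illustrative, not part of the tool):

```python
import json

# Two entries copied from the raw response above, standing in for a full batch.
raw_response = """
[
  {"id": "ytc_UgxIhh_Mm5TQ7asMzR94AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwiMJQSZ9zbQXv3dRZ4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]
"""

# The four coding dimensions plus the comment ID, per the table above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    dropping any entry that is missing an expected dimension."""
    entries = json.loads(raw)
    return {e["id"]: e for e in entries if EXPECTED_KEYS <= e.keys()}

codings = index_codings(raw_response)
print(codings["ytc_UgxIhh_Mm5TQ7asMzR94AaABAg"]["reasoning"])  # virtue
```

Indexing by ID makes the "look up by comment ID" view above a single dictionary access, and silently skipping malformed entries is one simple way to tolerate occasional LLM output that drifts from the schema.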