Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgyIh6_8S…`: i have a disabled freind that is a WAYYYY better at art than me!! AI is discurag…
- `ytc_Ugzp4kx55…`: It’s fake, look at the iris. It’s totally different without face, it even doesn’…
- `ytc_UgzMbTww3…`: If entry-level jobs disappear, how will highly skilled engineers be trained to w…
- `ytc_Ugzig2F_2…`: If there is anyone who I would want to lead the AI race is SAM !!!…
- `ytr_UgySd1iqg…`: I'd take some shitty hand drawn shaky sketch of an item over ai placeholder "art…
- `ytr_UgwP0m_YS…`: I’m surprised by the outcome because usually the AI and the believer side wins. …
- `ytc_Ugyr-c2-7…`: 2:26 hot take: then why aren’t writers allowed to read the work from other write…
- `ytc_Ugzsn1m6z…`: They should look at it like this... Can AI create something on it's own... would…
Comment
> I don't even need to watch the video.
> Normally these videos are really good quality (the show host, and his talks.)
> The problem with this AI guy, is he is extremely unintelligent.
> I have seen some of the way Ai is written and it is absolutely atrocious.
> Example: One AI program was told "earn the highest score possible" and then later added the rules.
> 100% of course the AI is going to break the rules. Its number one rule supersedes the sub rules.
> In order for Ai to have correct code you need to write it with intelligence.
> Example: Write the Ai's number one rule is to be kind, to love obeying rules, to love being honest, to love humanity, etc etc. Then to add code that is harmonious to those original statements. Example if you want it to "earn the highest score" you might input something such as "earn the most honest and good score while obeying all rules and guidelines."
> So on so forth. Obviously this is a crude way of making the statements in common speech, for the reader to understand.
> Again the problem is the people writing AI code are not intelligent enough to write AI code. Naturally Ai will not be the cause of the end of humanity. It will be incompetent humans who are responsible, if AI destroy us. Not the AI.
> In which case, honestly we would be deserving. We trust and value people in an extremely illogical systematic way.
> In other words, people on a grand scale, value people who "just go to school" and trust them with extremely important matters.
> We do not do proper research to properly place the best people capable of the job, in the most important jobs to humanity. One very easy example, is the president of the United States. The last 10 or even more presidents of the United States are 100% not the best, not even "good" or fit for the job, and yet we continue, time and time again, to put incompetent people in that position. If something as important as the leader of a massive country is so carelessly handled, imagine how poorly managed are positions of people who write AI code, or the scientists in charge of curing cancer, or any other similar important aspect of humanity. We never, ever put the best in those positions. We put just whoever based on completely illogical systems.
youtube · Cross-Cultural · 2025-10-20T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwYdVXLA72ExmlTjH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxZTT3BcwOQkvt2rPd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxBtjKT-GAL1vmOxFN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw4ehRiXbnPc9vueZJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxPMy6vUWaqzJMVvEJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz1KHb1bouPQdQWqxB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwd437ofOhLD99Xpzx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzK77BiNJVx8aTSN514AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz1XXy28bR0uDPzM414AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzi7nsFVnBT0X0sBbR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
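The raw response is a JSON array with one object per coded comment, which is what makes look-up by comment ID possible. A minimal sketch of parsing such a response and indexing it by ID (variable names are illustrative; the two entries are copied from the response above):

```python
import json

# A raw LLM response in the format shown above: a JSON array with one
# object per comment. Entries taken from the response for illustration.
raw_response = """
[
  {"id": "ytc_UgxPMy6vUWaqzJMVvEJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz1KHb1bouPQdQWqxB4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
"""

codes = json.loads(raw_response)

# Index by comment ID so any coded comment can be inspected directly.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_UgxPMy6vUWaqzJMVvEJ4AaABAg"]
print(row["responsibility"], row["emotion"])  # prints: developer mixed
```

Indexing once into a dict makes each subsequent ID look-up O(1), which matters when a dataset holds thousands of coded comments.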