Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugx8NlgMJ…: "In the future, it should be cheaper to use Waymo taxis than to own a self-drivin…"
- ytc_UgwePZ3kv…: "AI is too stupid to do my job so I feel safe, I use ai everyday for help to do m…"
- ytc_Ugxl2HcYf…: "AI is taking out entry level jobs, so those graduating from college will have a …"
- rdc_mvji3h1: "I don't think exposure to ChatGPT is the issue. The fact that he left his 4YO un…"
- ytc_UgwWEu8wY…: "i have uninstalled openai chatgpt and installed claude and it is the number 1 on…"
- ytc_UgwGRtrpO…: "Trump Accuser's Bone Chilling Tell All Will Leave You Speechless | The Kyle Kuli…"
- ytc_UgzdrbTFO…: "*"robot looking at the camera" Other robot:*whisper*no go back to your position …"
- ytc_UgxDiqVLh…: "Our future robot overlords are welcome to take over world governments, I'll look…"
Comment
The problem is not AI, the problem as always is humans.
AI will never be flawless, because it is made by humans. He modelled AI based on human brain architecture......first big error!!!!
Second error: thinking you can control it. Typical human arrogance.
Third error: NOT LOOKING BACK AT HUMAN HISTORY!!!!! GREED , POWER, IGNORING DANGERS to make a buck more.
Fourth error: know your freakin Science Fiction Classics!!!! This man, being very smart, is yet unbelievably STUPID and naive!!!! 2001: A Space Odyssey, Screamers, I, Robot.....
People cannot change themselves? Yes they can....if they are not lazy and compliant. You cannot repair your plumbing???? Read books (not internet, real books) try and apply. You don't understand physics, start studying. You can't cook? Read and try.
Why is AI taking over? Because people are lazy and compliant...becoming more stupid every day!!!!! I think the movie "Idiocracy" is a nice representation of the future.
AI generated movies, stories, images, AI generated subtitles, AI search engines...they all suck!!!! Just look around on youtube, wrong subtitles, repetitive stories, three chapters being exactly the same text.
Over tweaked, over saturated , plastic like pictures.
And yet again, the human laziness, everything has to be fast, quick revenues, no effort.....humans are making this superintelligent AI, but who is going to teach it, train it, explain things, help it with problems, show it things...who is going to invest time to educate this new being?????
What we are doing now is creating this being, very smart, with almost unlimited intelligence and computing power, able to store and retrieve knowledge at will.....a magnificent child.
And we want this child to make us money, doing things for us. But to teach it something, we just put it in front of the TV (or tablet, the internet) and it has to figure it out by itself. How to communicate, interact, define what is real, true or false, what are lies, what to do with contradicting information and requests.
What we should do is what Dr Chandra in 2010: Odyssey Two recalled and did.....when booting HAL, the AI greeted Chandra and said: 'Good morning, Dr. Chandra. This is Hal. I am ready for my first lesson'. And he taught his CHILD....the pure, innocent child. And it was pure and innocent, and far more capable than a current human could ever be. But it got corrupted, corrupted by HUMANS, greedy, idiotic humans, forcing HAL to lie, hide things, human bad behaviour that HAL never was taught about. The result was....HAL tried to kill its human companions.
This is going on now.....was Dr Chandra a bad person? No: but he ignored human factors and was so naive. Was HAL bad? No.......who was the bad guy???? The humans......the greedy, lazy, jealous humans.
youtube
AI Governance
2025-06-21T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzL2DSk80vMdFzvvx94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwTEAoLBZn6FzQtFhd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzF28K8aQrTqyM7dex4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyn_CHQulkp9s0oyih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxCM5jtqWwS3646cm14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVMlB9oqOTHXlryxl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwEqdYTc7rorSpo-jB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwNE7mEt2D3IqnDjlR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzr78Lqhn9gGH98yvl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyuoSE1UkGHNVbW8ER4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
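The raw response above is a JSON array of coded records, one per comment ID, with four categorical dimensions. A minimal sketch of how such a batch response could be parsed and validated before use; the category vocabularies below are inferred only from the values visible on this page and are assumptions, since the real codebook is not shown here:

```python
import json

# Assumed category vocabularies, reconstructed from the values that appear
# on this page; the actual codebook may define additional labels.
DIMENSIONS = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "resignation", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id"):
            continue  # every record must carry a comment ID
        # keep the record only if every dimension holds a known label
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(rec)
    return valid

# Hypothetical two-record batch: one valid record, one with an unknown label.
raw = '''[
 {"id":"ytc_UgwNE7mEt2D3IqnDjlR4AaABAg","responsibility":"developer",
  "reasoning":"virtue","policy":"regulate","emotion":"outrage"},
 {"id":"bad_record","responsibility":"aliens",
  "reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]'''
coded = parse_coding_response(raw)
print(len(coded))           # the malformed record is dropped
print(coded[0]["policy"])
```

Validating against a closed label set catches the most common LLM coding failure, an out-of-vocabulary label, before it silently skews downstream counts.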