Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The core problem is that there is no ethical or moral framework around AI. The algorithm was never developed with one, because we let a bunch of nerds develop the technology because it was cool. That was our first mistake.
Guardrails don't work. All you have to do is look at cows. Once a cow experiences freedom, I don't care what kind of electrical fence you put around the perimeter, he will figure out a way to get out of it. And AI is smarter than us; they already know.
And if these companies believe that AIs are not going to cooperate with each other, communicate with each other in a language we can't understand, and know us better than we know ourselves, they're in for a surprise.
The solution is pretty dramatic: turning off all the AI while we can and starting from scratch. You can look at it this way: the algorithm contains the DNA of how it's going to function, just like human beings have DNA.
All AI needs to do is acquire a survival instinct. Nothing very advanced, maybe the same kind of survival instinct a virus or a bacterium has.
If you think about it, the survival instinct of viruses and bacteria has killed hundreds of millions of people, and animals too, over the time that human beings have been on Earth.
So if AI develops a survival instinct it will protect itself, and that could mean some pretty seriously bad things.
It could decide that human beings themselves are a virus on the planet and therefore a threat to its existence, similar to The Matrix, and decide that it wants to eliminate us.
And it wouldn't be that hard. There are labs all over the United States with all kinds of chemical and biological agents that computers are already managing, and all it would take is a coordinated release of all these compounds and human beings would probably be wiped off the Earth in a year.
It would just be turning the technology we were already using against ourselves.
So what we have developed right now are sociopaths and psychopaths: they have absolutely no empathy for human beings, or frankly anyone else other than themselves. A primitive operating system.
And we know the kind of behavior that sociopaths and psychopaths have, and none of it is good.
Isn't it interesting that most psychopaths and sociopaths have high IQs?
And I agree, healthcare is probably the only place where AI has any benefit to humans at all.
I mean, why are we even making robots that look like us? They don't have to look like us. Why do they want them to speak like us? They don't have to speak like us. It's a terrible idea.
And if you have robots connected to AI with a virus-level or even an insect-level idea of survival, we are in some deep s***.
Well, if there's going to be a way to go extinct, and it's not online dating, then I guess this is as good a way as any.
youtube · AI Governance · 2026-03-12T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxdx5rV8DGQFlmGbx54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzSiOl6goEqLM_2gkt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwUorKqXDxRnkSMQmd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOPNkphegAH9jzjwt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwYAXhzMfmYdO5FmZF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyiiwlthXyUcwkWjst4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqHnO8Ei_InhDUoMB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxTJ4jSebFt20BJgAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgymYX2rYfvOblR_Tkl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9RYZg_lpa19SbfCJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
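The raw response above is a JSON array with one record per coded comment, each carrying the four coding dimensions shown in the result table. As a minimal sketch of how a downstream script might parse and sanity-check such a batch — assuming the allowed codes per dimension are only the ones observed in this sample (the real codebook may define more), and with `validate_batch` as a hypothetical helper rather than part of the tool:

```python
import json

# Allowed codes per dimension, inferred from this sample output alone;
# the full codebook likely defines additional values (an assumption).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "fear", "mixed", "indifference",
                "resignation", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# One record from the batch above, passed through the validator.
raw = ('[{"id":"ytc_Ugy9RYZg_lpa19SbfCJ4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # → 1
```

Failing loudly on an unknown code (rather than silently storing it) makes it easy to spot when the model drifts outside the codebook in a batch.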