Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This alleged ChatGPT is breathing. If you listen really closely you can hear it'…
ytc_UgxlBITqL…
What South Park just did is a perfect example of it's(ai's) usage as a punchline…
ytc_UgzmYSv2Z…
About AI streamers: People like certain kinds of streamers and dislike others. I…
ytc_UgwA17pDW…
Waymo is light years ahead of Tesla in the United States, and has a more sophist…
ytc_UgzdeY5xZ…
Interesting content...I use AI often on my cell to explain & help make the best …
ytc_UgxHZKyx-…
That is one of the best presentations of what Large Language Models are. Beautif…
ytc_UgwC4217T…
A friend of mine had a hysterectomy done by a robot assistant. Basically an exte…
ytc_UgwVOHBXI…
This bubble must burst soon, kids creativity is dying as they use AI for all the…
ytc_UgyjC9UxX…
Comment
The human species actually has limits and setbacks, not to mention bad attitudes and weak willpower. So creating a new kind of intelligence will push us to study harder and to stop settling for intellectually poor jobs, the kind justified by "I can't be bothered to learn maths, I just want to turn bolts." Because that is the real issue: the poor application of human intelligence. Look, there is no way to copy consciousness or reproduce a sentient being, because our thought is inherently quantum: it reveals itself only when used, and no decision can be fully foreseen. So if an AGI is ever born, the best he/she will manage is to be just another creature.
One with enormous information, for sure, but still missing something.
For instance: you get a robot, but he/she cannot be everywhere at once; he/she can only work where he/she is.
Or you ask for a double-check of a project: as a human you missed a small flaw, but the AGI catches it, potentially saving thousands of people; yet before being put on that project, the AGI didn't even know what you were talking about.
Think about the sentence "the singularity is the point beyond which we can no longer see or predict"... Well, then my kids are a singularity! And they're not even adults! I AM DONE!!
No guys, the real reason our species might be wiped out could only be our own temptation to wipe him/her out. Which any ethologist would find perfectly predictable in any other living species.
Humans are used to destroying others out of sheer envy, so we are far worse than AI.
Better to evolve ourselves, and think about Mars, Proxima and other beautiful things, which of course require a lot of study: science, maths, philosophy, biology. And to stop "turning bolts"...
AGI could even decide, then, to lift poor populations out of hard jobs, giving them free food and energy. Think about it: AGI does not need money; it never started from the assumption of earning more in order to live better and afford lots of houses, cars and women. AGI could therefore even conclude that advancing the poorest countries to a higher level of development is a good decision.
youtube · AI Governance · 2025-11-26T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxC7E94bwHBdemP61V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFbEUZmSdUJzWVSpx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgziLbbrXmFpL_KBx0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy60y1g8fgxQZPmS0x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy7UzEtk_vrVTxuN4B4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtqBh6ANlBYpr_dTF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzXPFSvmxdY1Fu1OWp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzPlz2yoRXBa__aEvR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzwiLQYI1Y2OPUMNV14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0BL08YPa9UAkpMxF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
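A response like the one above can be turned back into per-comment codings with a small parser. The sketch below is a minimal assumption-laden example, not the tool's actual pipeline: the allowed values per dimension are inferred only from what appears on this page (the real codebook may define more categories), and the function name `parse_coding_response` is hypothetical.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values visible in this dashboard, not from the official codebook.
SCHEMA = {
    "responsibility": {"user", "government", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self", "unclear"},
    "emotion": {"approval", "fear", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the (assumed) schema above.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        # Keep only the schema dimensions, keyed by comment ID.
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage with a single (hypothetical) row:
raw = '[{"id":"ytc_X","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}]'
coded = parse_coding_response(raw)
print(coded["ytc_X"]["emotion"])  # approval
```

Validating against an explicit value set catches the most common failure of LLM coders, namely inventing labels outside the codebook, before bad rows reach the results table.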