Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
My question is: Why do 99% of viewers of this video who comment, automatically a…
ytc_UgxzYq1cU…
Ai art is like nuclear weapons (obviously not as extreme) but just because it ca…
ytc_UgzJ8vD-A…
0:22 Actually, here in Belarus, only the companies itself block the country. And…
ytc_UgwYMlTPT…
i had a vision of a James Bond type villain held up in a super fortress letting …
ytc_UgxSX3NJ7…
been using GPTHuman AI for tech content and it’s been working great. helps bypas…
ytc_UgxhqWTpy…
Robot who has trigger waring: U SON OF A B*** MAKING A FU**** MESS I DONT EVEN…
ytc_UgwlSQVhN…
The real difference? Real artists have their own style. AI "artists" all have th…
ytc_UgwMHk51_…
I wish AI was smart enough to do what the OSes did at the end of Her.…
ytc_Ugx8AgqFM…
Comment
I recently heard a story about chipset design and manufacturing.
Engineers who normally take several weeks to build a chipset watched an AI build one in 6 hours, with better performance, lower energy consumption, and better results across the 15 technical requirements/constraints engineers have to meet when building these chipsets.
They were stunned by the job the AI did. The problem is they just don't understand why or how the AI achieves such performance.
And when you compare the chipset the AI made with a classic one already on the market that performs alright, the AI chipset looks absolutely weird.
But it definitely works significantly better than what human engineers usually produce…
The short-term problem is trying to understand the logic and reasoning of the AI that made it, through some kind of reverse-engineering process.
Because, as amazing as it looks, the chipset the AI made is totally unusable and unsellable as it is, and simply useless outside a pure R&D perspective.
In the long term there is a lot of promise for sure, with the job of chipset engineering drastically evolving into supervising what the AI does for humans: giving it directions and directives to follow, checking the results afterwards, etc.
But altogether there is, already right now(!), a gap forming between human knowledge and understanding of what the proper way must be, and an AI that is just in its infancy at pushing past human limits in certain advanced domains of society…
I sincerely think we should use AI, but only for advanced things linked with science and technology.
Not to improve basic stuff (putting an AI in a toothbrush is completely silly and unnecessary) or to take the basic day-to-day jobs humans can and must do to provide for their families… especially considering we will be 10 to 11 billion people by 2060-2070; all these people also need a source of income, and only a small portion can reasonably become engineers or any other high-tech specialist modern society needs…
AI should remain a tip-of-the-spear asset, not something influencing every aspect of society for no good reason when society worked perfectly fine without it for millennia before AI was invented.
As always with humanity's great new conquests or economic booms, it's sweet anarchy for a good long while, and regulation only starts later.
But with AI, that could seriously lead to humans losing control of their own society and of their own lives.
And the hot-headed mentality, like at OpenAI, where capitalism and profit are the only metrics by which they evaluate their progress/performance, could seriously come back to bite us all in the a** as a consequence of their reckless management of AI development.
And that's before even speaking of what AI built by autocratic and corrupt governments like China's could come up with…
At some point it's going to create some serious problems; it's inevitable, imo.
The only question is how big a danger it will be as soon as it arrives, and how we could manage it or recover from it…
Mankind does not need natural catastrophes to put us all in danger; we are very talented at creating these threats ourselves, that's for sure… 😩
youtube · Cross-Cultural · 2025-09-30T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzwyDUN5DoJL3spcYZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzmv05tqjPJPY1NWbF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyr-VAJKaMLKzp5BzV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz1rxWJs5nqaquyfKh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLOSnWKdx5p9ffjBp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzuXImXVkMbhI-amYN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzS6nTiy6DjDudnw8l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx9pnFE3jj6XLZg8Sx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxSe_HzBO_0NbdiQDB4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugwzg4q99vfIuAi8NN94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
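The raw response above is a JSON array of per-comment codings keyed by `id`. A minimal sketch of the "look up by comment ID" step might parse that array and index it by ID; only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, while the variable names, the helper function, and the two-row sample payload here are illustrative:

```python
import json

# Two rows copied from the raw LLM response above; a real lookup would
# load the full array instead of this illustrative sample.
raw_response = """
[
 {"id": "ytc_UgzwyDUN5DoJL3spcYZ4AaABAg", "responsibility": "none",
  "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_Ugzmv05tqjPJPY1NWbF4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return codings[comment_id]

print(lookup("ytc_Ugzmv05tqjPJPY1NWbF4AaABAg")["emotion"])  # prints: outrage
```

Indexing once up front is the natural design here: the inspection view needs repeated lookups by ID, and a dict avoids rescanning the array for each one.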