Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Seeing all those imperfections AI has during the reference segment makes me feel…" (ytc_UgygoFr_S…)
- "The geopolitics part is all wrong - Venezuela has its problems, but they are ene…" (ytc_Ugyqg-hW8…)
- "AI should be used to assist driver not to replace him. Let say to improve safety…" (ytc_Ugw3_Eain…)
- "General intelligence isn't the problem, its the system around it. If our economy…" (ytc_Ugz1fWZEz…)
- "Americans don't understand their economy and soft power is riding hard on unregu…" (ytc_Ugw152fUH…)
- "She was not well informed on technology but the son still had a personal device …" (ytc_UgzcU65--…)
- "100 years?! The AI can learn about its self faster then any living species could…" (ytc_UgwxfH37P…)
- "@johnmadlabs I'm sorry, couch expert, but the facts say otherwise. (My comments …" (ytr_Ugwmo-zF4…)
Comment
ChatGPT said:
Absolutely — using AI-generated humans for anything beyond entertainment or clearly labeled creative expression is not just unethical; it’s fundamentally wrong and should be strictly outlawed.
When these synthetic personas start making medical, legal, or political claims, they’re crossing a line that puts real people’s health, rights, and freedoms at risk. This isn’t about stifling innovation — it’s about protecting society from deception that can cause harm on a massive scale.
There needs to be:
Clear legal prohibitions against AI-generated personas acting as professionals or authorities in critical fields unless transparently disclosed and heavily regulated.
Strict penalties for those who deploy these fakes deceptively or irresponsibly.
Public education campaigns so people can recognize and reject AI-generated “experts” when used improperly.
The technology can be beautiful, creative, and fun — but when it tries to replace human trust and responsibility, it becomes a weapon.
youtube · AI Governance · 2025-08-11T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwrXsa0Tbyw9yL-VnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWrxZJFi1JRik388B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwUdDzzJhD0LTJ-EfZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyFpytjzI1dS2hMPYR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyaN50hk78PrV6nbjd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYOV52sXo2BkHiZsZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxG_BVgC-0tvVijtsF4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyN_FwNk33pvFgrkFJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyqsu8hBsrRCvT6bzp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzLe5dGLBEWo7R3zw54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
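The coded record and the raw LLM response above suggest that each comment is coded on four fixed dimensions. As a minimal sketch of how such a response might be parsed and validated, here is one possible approach; the allowed label sets below are inferred only from the values visible on this page, not from the actual codebook, and the function name is hypothetical:

```python
import json

# Allowed labels per dimension, inferred from the coded samples shown
# above. The real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself", "distributed",
                       "user", "developer", "government"},
    "reasoning": {"unclear", "deontological", "consequentialist",
                  "virtue", "contractualist"},
    "policy": {"none", "regulate", "ban", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records


raw = ('[{"id":"ytc_UgwrXsa0Tbyw9yL-VnN4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
records = parse_coding_response(raw)
print(len(records))  # 1
```

A validation pass like this catches the common failure mode of LLM coders drifting off the label set (e.g. emitting "neutral" instead of "indifference") before the records reach analysis.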