Raw LLM Responses
Inspect the exact model output for any coded comment, or look a record up by its comment ID (a minimal lookup sketch follows the raw response below).
Random samples
- `ytr_UgzXCc3ps…` "fr, its always so insane how little self awareness chronically online ai loving …"
- `ytc_UgzmCpUFq…` "Chatgpt has been programmed to be biased in current political topics towards a p…"
- `ytr_UgykMMn6V…` "If not AI, corporations will find new ways to abuse. . Why not, therefore, focus…"
- `ytc_UgzyX69Qu…` "She try to do an insurance scam but it was a self driving vehicle ouch R.I.P.…"
- `ytr_UgyvFLE0e…` "Truth: Worked in IT in various roles over 30 years - forced into retirement - st…"
- `ytr_UgwJna5C8…` "The examples you provided are algorithmic art tools, not AI. Blur effects are ju…"
- `ytc_Ugy8X4M9b…` "We become professional comsumers.... like how you can play app games for money n…"
- `ytr_UgwjKAJzI…` "Exactly, but like one person in the video said about democratically deciding how…"
Comment
> I am pro-human, and I want to see our human race thriving and doing well. AI is by definition probably smarter than 95-99% of the human population and that's the problem. Most companies focus on developing AI for profits, but how do you teach it the moral values or love or compassion, things that really matter so it "cares" enough to NOT to kill humans? We can't even guarantee all humans turn out to be good law-abiding citizens.
>
> If someone thinks they can program AI to be obedient, but sooner or later it is going to be smarter enough to re-program itself. If the AI doesn't have moral values or care about humans, it's maybe a logic thing for it to kill humans.
>
> I just never understand the rationale behind creating AI or starting the AI wars because soon or later it is going to destroy ourselves until God has intervened and saved us from our stupidity.
youtube · AI Governance · 2025-08-12T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
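
Each coded record carries these four dimensions plus the comment ID. As a rough sketch, the record can be written as a small schema; the class name below is illustrative, and the value sets in the comments are just the labels observed in the raw response on this page, not necessarily the full codebook.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded record, mirroring the dimensions in the table above.
    (Illustrative sketch; names and value sets inferred from this page.)"""
    id: str               # comment ID, e.g. "ytc_..." or "ytr_..."
    responsibility: str   # observed: company, developer, government, user, ai_itself, distributed, none
    reasoning: str        # observed: deontological, consequentialist, contractualist, virtue, unclear
    policy: str           # observed: regulate, ban, industry_self, none
    emotion: str          # observed: fear, outrage, resignation, approval, indifference
```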
Raw LLM Response
```json
[
{"id":"ytc_UgwrXsa0Tbyw9yL-VnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWrxZJFi1JRik388B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwUdDzzJhD0LTJ-EfZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyFpytjzI1dS2hMPYR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyaN50hk78PrV6nbjd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYOV52sXo2BkHiZsZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxG_BVgC-0tvVijtsF4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyN_FwNk33pvFgrkFJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyqsu8hBsrRCvT6bzp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzLe5dGLBEWo7R3zw54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
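
To close the loop on the look-up-by-ID flow described at the top of this page, here is a minimal sketch that parses a raw batch response like the one above and pulls out a single record. The function name is hypothetical; the only assumption is that the response is a JSON array of objects keyed by `id`.

```python
import json
from typing import Optional

def lookup_coded_comment(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM batch response (a JSON array of coded records)
    and return the record whose "id" matches comment_id, or None."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

# Against the batch above, the comment shown on this page resolves to:
# lookup_coded_comment(raw, "ytc_UgxWrxZJFi1JRik388B4AaABAg")
# -> {"id": "ytc_UgxWrxZJFi1JRik388B4AaABAg", "responsibility": "company",
#     "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
```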