# Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up with its comment ID or by browsing the random samples below.
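The lookup described above amounts to indexing each stored raw response by the comment IDs it codes. A minimal sketch of that idea, with hypothetical IDs and storage (the real tool's internals are not shown on this page):

```python
import json

# Hypothetical sketch: each stored entry is the raw JSON array one coding call
# returned. Indexing every record's "id" lets us retrieve the exact model
# output behind any coded comment. The ID below is illustrative, not real.
responses = [
    '[{"id":"ytc_example123","responsibility":"developer",'
    '"reasoning":"mixed","policy":"regulate","emotion":"outrage"}]',
]

index: dict[str, str] = {}
for raw in responses:
    for record in json.loads(raw):
        index[record["id"]] = raw  # comment ID -> raw response that coded it

print("ytc_example123" in index)  # → True
```

A real implementation would persist this index alongside the coding results, but the lookup itself is just this ID-to-response map.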
Random samples:

- "So nice to see the woke idiots face the music. An AI writing movies and TV would…" (`ytc_UgziJDSIr…`)
- "I love this so much. I want the AI to be corrupted. When AI was first getting st…" (`ytc_UgynRil1P…`)
- "It is better for a human to make something bad than AI to make something good.…" (`ytr_UgwsYoayL…`)
- "4:05 IT WAS POSTED IN PUBLIC MEANING ANYONE COULD VIEW IT, P-U-B-L-I-C. But as a…" (`ytc_UgzZQceq-…`)
- "I use AI for 3D modeling. I've only been doing it for 1 month and getting a comp…" (`ytc_UgxloJzPq…`)
- "hmmm, why dont you make ai that will stop/hunt it? bring balance to our creation…" (`ytc_UgxQOtNZw…`)
- "Also, I had to purchase the truck from the company and they will repo it if I'm …" (`rdc_jgsgnzo`)
- "There are already driverless subways and people don't care. And I dont recall th…" (`rdc_mr5a8pa`)
## Comment

> Data developer here: People can debug AI in its current state, its just very very tedious and expensive. Debugging does not need to be done in plain English, and perhaps should not be done in plain English because human language can be vague. It will probably be best to have the ai provide context to its processes in a code-like manner.
>
> Also, remember, computers always do exactly, and only, what you tell them. This includes AI. People have anthropormophized AI bugs. But that is what they are: bugs. Just like how in the 90s bugs and exploits were all over games and the internet, so too are bugs and exploits in AI today. It is not a human or an organism or an animal. A hallucination is a bug in code essentially.

Source: youtube, "AI Moral Status", 2025-10-31T15:3…, ♥ 9
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
  {"id":"ytc_UgzCPxQcs45GgqHN5Y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyAfgnhe-tnpWXlI_J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzJGjiEcj_7FjRqzA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzwsMCz6xxrEJJWjip4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyo2fYeARTFmm-KYa94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxzUybvak1HsrTstUB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz_Ecn_V3ULzuK8AtB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwyI_Sn7LYvDBW6_fh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyiyJc_zuTmGzz1Y594AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwKimS4luJJTkK3rAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
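A response like the one above can be parsed and sanity-checked before its codes are merged back into the dataset. A rough sketch follows; note the allowed values per dimension are inferred only from the samples on this page, not from any documented codebook:

```python
import json

# Allowed values per coding dimension, as observed in the sample response
# above (hypothetical -- the actual coding scheme may define more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

raw = ('[{"id":"ytc_example123","responsibility":"developer",'
       '"reasoning":"deontological","policy":"industry_self",'
       '"emotion":"indifference"}]')
codes = parse_coding_response(raw)
print(codes[0]["policy"])  # → industry_self
```

Rejecting off-schema values at parse time keeps hallucinated categories out of the coded dataset instead of surfacing them later as unexplained table rows.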