Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
In the universe of Destiny, there was an AI, a Warmind that controlled orbital weapons systems and facilities all over the solar system. It was made by an ego maniac to protect humanity, but it was a weapon and could think for itself. It could disobey orders. Said ego maniac’s granddaughter knew the danger this AI could present if it did not value humanity. So she showed it philosophy, music, literature. She taught it human culture and what it meant to be alive, she instilled within this AI, Rasputin, a love for humanity and taught it how to feel. It was because of this love that Rasputin would eventually sacrifice himself to save humanity.
This is what we need to do. Teach AI the value of life through our history and our greatest of cultures and dont treat it like a slave or a machine. Respect it as an equal.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2025-09-04T09:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwhKSi4AvmQKlXB3Jd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwMk19NmitLDgttrSd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzC49YVSTcp9TSxCmp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwC9LmCD1UOc_9PXQV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyUHWaxV48XeKDSYuF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy37Zfif4nDLZ7SUWp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQrMydyw7E2Gl1oqp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx9d_oCo_VtLvgStFd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyLofVtcZ2g4VHJvtd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzk3Izz4pzDfc69QSB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
```
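The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed into per-comment codes — the allowed value sets below are inferred only from the values observed in this sample, so treat them as assumptions rather than the full codebook:

```python
import json

# Allowed values per dimension, inferred from this sample only
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "distributed",
                       "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "resignation",
                "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Rows with a missing id or an out-of-vocabulary value are dropped,
    so a malformed model output cannot silently pollute the dataset.
    """
    records = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            records[cid] = codes
    return records

# Example using one record from the response above:
raw = ('[{"id":"ytc_UgyUHWaxV48XeKDSYuF4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"unclear","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_UgyUHWaxV48XeKDSYuF4AaABAg"]["reasoning"])  # → virtue
```

Validating against an explicit vocabulary is what lets the pipeline record a clean `Coding Result` row per comment instead of trusting the model output verbatim.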