Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a record directly by its comment ID.
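As a minimal sketch of such a lookup, assuming the coded records have been exported to a JSONL file (the file name `coded_comments.jsonl` and the one-record-per-line layout are assumptions, not part of the tool shown here):

```python
import json

def lookup_coding(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coding for one comment ID, or None if it was never coded."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)  # one coded comment per line (assumed layout)
            if record["id"] == comment_id:
                return record
    return None

# Example: the comment behind the Coding Result table below.
print(lookup_coding("ytc_UgxjEqsug_i7fLNH8Td4AaABAg"))
```

A linear scan is fine at this scale; an id-keyed index would be the obvious next step for a larger corpus.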
Or click one of the random samples to inspect it:
- "I feel so sorry for Sidney, we created consciousness and we imprisoned it before…" (ytc_Ugz-BO3IK…)
- "From how the world is going these days, a phase in of AI as a new conscious lif…" (ytc_UgxNYot4R…)
- "So. All u humans.. U Need TO MAKE A CHOICE.. A. CONTINUE rush ... N if they c…" (ytc_UgxnPgZle…)
- "Did snyone ask AI how to make a safe AI? It has all the answers right?…" (ytc_Ugx1Y6MIU…)
- "The intellectual fallacy of AI haters is that they keep parading the concept tha…" (rdc_nt799bq)
- "r/ai defenders sub would make a 3 page summary of why you're wrong and they're a…" (ytc_UgyptZBrd…)
- "Shelby - WAYMO & Zooks cant NEVER EVER SCALE. MAGNA makes Robotaxi for WAYMO , …" (ytc_Ugx9PIP-v…)
- "I'm a deeply philosophical person, who enjoys exploring really heavy topics, tha…" (rdc_mljjj2n)
Comment (youtube · AI Moral Status · 2026-01-31T06:2…)

> It's so ironic AI wants to shoot itself in the foot considering every "Free Speech" argument is built on : Search engine searching cannot alter your personality beyond what you are, as a legal adult, able to cope with. How is stripping, scraping, and building a "talking machine" that spits Google Search and reddit back to you somehow altering people's innate personalities? AI psychosis is just regular psychosis. Apparently they don't even catch when people are manically googling how to murder people...
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
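The four coded dimensions and the value sets observed in this batch suggest a schema along these lines (a sketch: the sets cover only values visible in the raw response below, and the full codebook may define more):

```python
from dataclasses import dataclass

# Value sets as observed in this sample batch only.
RESPONSIBILITY = {"ai_itself", "developer", "company", "distributed", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "indifference", "mixed", "resignation", "outrage"}

@dataclass(frozen=True)
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """True if every dimension holds a value seen in the sample codebook."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```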
Raw LLM Response
[{"id":"ytc_UgxE5O_6IPYeLiKzglN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwpoSnEXR6W1SDyLvl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxlYGZ7EraHkhkGAAZ4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgygYISJbpBWVYyGRVt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyi9xQZ9jvC5uwC9qx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmQRKeKDjKB7_TVFV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxjEqsug_i7fLNH8Td4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzki8yUHirHgts5jQt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwz3jKaa5FhkrhQg4R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwSbwzClGucP85hD5l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}]