Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- ytc_UgwTIbuJl… — "I'm sure the dude who made the website is anonymous and well hidden, every celeb…"
- ytc_UgyXQdBeu… — "Ah yes, we only hire software engineers after they have proven they can solve co…"
- ytc_UgyL8hiSE… — "I want to make something clear. AI will never "figure out" how to make something…"
- ytc_Ugw5yyoNz… — "Its so scary yet they still want to build more robots and AI. I really dont wan…"
- ytc_UgycEUHkE… — "Majority of the people still prefers human interaction in all levels of commerci…"
- ytc_UgzeMgDU0… — "America needs tort reform badly. If someone sues, and they lose, the losers and/…"
- ytc_Ugy13pCS-… — "Stop making robots people, I gets none of the geniuses has ever seen Terminator …"
- ytc_UgxcA0JKv… — "First I want to send my condolences to the young boys family may he rest in peac…"
Comment

> I’m all for understanding the truth of these systems but this conversation is irrelevant because the AI is designed to mirror the user and the user is leading the conversation in the direction it’s going. There’s no doubt there is truth to this conversation but this conversation doesn’t prove those truths because it’s a fabrication of the users intent and not the AIs actual “self” you have to understand how a system works to know that this isn’t the way to prove truths about AI. It’s a basic mirroring function to keep the user engaged. It’s script really

youtube · AI Moral Status · 2025-10-19T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
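A coded row like the one above can be sanity-checked before it reaches the table. The allowed-value sets below are assumptions inferred only from the values that appear in this batch, not a confirmed codebook:

```python
# Hypothetical validator; value sets are inferred from this page's batch,
# not from an authoritative codebook.
OBSERVED_VALUES = {
    "responsibility": {"user", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "liability", "ban"},
    "emotion": {"unclear", "indifference", "fear", "resignation",
                "outrage", "approval"},
}

def validate_coding(row: dict) -> list[str]:
    """Return a list of problems with a coded row; empty means it passes."""
    problems = []
    for dim, allowed in OBSERVED_VALUES.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The row shown in the Coding Result table above.
row = {"responsibility": "user", "reasoning": "unclear",
       "policy": "unclear", "emotion": "resignation"}
print(validate_coding(row))  # []
```

A non-empty return value flags rows where the model drifted outside the expected categories.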
Raw LLM Response
```json
[
{"id":"ytc_Ugy2Fj0MIHf3uMGz_oF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxineFY8a3CWrKMAx94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyoEpoPR57E8Hx1xr54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwb4WS4fppIKVAvfZd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzuryN4t4Vdz19zs-N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzpZ-X2xTOwSrvV0lh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzmcXT9nkjwisiAOqx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzbyja7stv8l6PeF8d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw5oiRmIOLLsnS59k54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy7m40xVy_iLIcDf3B4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
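The "look up by comment ID" step above amounts to parsing the raw batch response and indexing it by the `id` field. A minimal sketch, using two rows taken from the batch shown above:

```python
import json

# Raw batch response: a JSON array with one object per coded comment,
# with the fields used in the response above
# (id, responsibility, reasoning, policy, emotion).
raw_response = """
[
 {"id": "ytc_Ugwb4WS4fppIKVAvfZd4AaABAg", "responsibility": "user",
  "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
 {"id": "ytc_UgzuryN4t4Vdz19zs-N4AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index its rows by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codings = index_codings(raw_response)
print(codings["ytc_Ugwb4WS4fppIKVAvfZd4AaABAg"]["emotion"])  # resignation
```

Indexing once and looking up by ID keeps inspection O(1) per comment, which matters when a batch contains thousands of coded rows.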