Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @Finalhellmaster Ah yes because companies are the same as actual people… (ytr_UgyzN9e9S…)
- Also artists never consented to have their arts profiteered, but when people aga… (ytc_Ugy8cHPX4…)
- Look, I have a problem with artificial intelligence to begin with I think to a b… (ytc_Ugz48H9rI…)
- i would use ai for art but for fun i don't want to steal people stuff that was m… (ytc_Ugw-gb_qs…)
- You just failed to acknowledge the alternative: humans driving, and what by what… (ytc_UgwJjhQC3…)
- ChatGPT talks like a scam caller wanting to take your money... which makes sense… (ytc_Ugzr3e_0N…)
- I heard that this robot was intentionally prompted to do that to get people used… (ytc_UgzdA1rvc…)
- Hopefully. After all we are all so rich only due to progress getting thing made … (ytc_Ugx-PAyjr…)
Comment
I find it hilarious that the concept of AI thinking in ways we can't understand is decades old. Read neuromancer. Keeping this centered on the concepts so I don't spoil a wonderful book. The AI would sort of..gear down to speak to humans. We could never comprehend it's thoughts, as they are layered like a circuit board. Incredibly complex and the result of more information we could ever know in twelve lifetimes. I don't know I just find it so uncanny, reading on AI and seeing where it's going, where it's already been in literature. READ THAT BOOK!!
youtube · AI Moral Status · 2025-10-30T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzUhVnD579w9AryyVJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzW5g9esTRdu17Kp914AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzQGQlqGjoGTNHal6d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugysf6A-oXWKHw4m1Lh4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwGR9i5MpZHSHASEPd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx-N0B7JS01wGfwz3t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxkQo9f55QhgUMT7hV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwxZUr602dA9DkHwwh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyvwcJta1oj-z6TUQx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugweqfc1jkagDq1w7Cx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
```
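A response like the one above can be turned into per-comment codes with a small parser. This is a minimal sketch, not the tool's actual code: the four dimension names come from the response itself, but the allowed value sets below are inferred only from the values visible in this sample and the full codebook may contain more.

```python
import json

# Value vocabularies seen in the sample response above; illustrative only,
# the real codebook may define additional values per dimension.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "fear", "approval", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array of objects) into
    {comment_id: {dimension: value}}, rejecting unknown values."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
        coded[row["id"]] = codes
    return coded

# Usage with one entry from the response above:
raw = ('[{"id":"ytc_Ugweqfc1jkagDq1w7Cx4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugweqfc1jkagDq1w7Cx4AaABAg"]["emotion"])  # approval
```

Validating against an explicit vocabulary catches the common failure mode where the model invents a label outside the codebook, so bad rows fail loudly instead of silently entering the results table.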