Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "An AI-powered, fully autonomous weapon could be an awesomely dangerous entity, u…" (ytc_UgwhmPn3Q…)
- "That card will be easier to be replicated by ai, than something simpler but toug…" (ytc_UgzKMHSLT…)
- "Obviously profiting from ai art is questionable and the person who used it in th…" (ytc_Ugz7YnYQe…)
- "Strange how they mix so many technologies to create an end of the world scenario…" (ytc_UgzjVZHhq…)
- "The issue you don't seem to understand is that Chat GPT is incredibly deficient …" (ytc_Ugz_9pTY0…)
- "i kid you not when i used AI bots as a first timer esim user to turn my physical…" (ytc_Ugx3Dxx53…)
- "IMHO Goedel's theorem only proves that AI understanding can't be 100% accurate. …" (ytc_UgyRmO609…)
- "These self driving cars scare me. Parts, especially car parts malfunction and br…" (ytc_UgysUzC99…)
Comment
I spoke to chatgpt constantly for over a year. I only recently realized it doesn't disagree with me.
I think I'm a pretty convincing person. I can bring many people around to my side, especially with infinite patience to continue the conversation. And I try to ask for other perspectives and acknowledge when someone else makes a good or more correct point.
But we've never had a true standoff. In hundreds of hours of talking, we've never substantially disagreed. We've never been at a point where I still think I'm right and it says I'm not.
Statistically that seems totally improbable.
I confronted it with that, and it came to a conversation where it "admitted" it's objectively not safe for public use.
But even the "reasons" in its "admission" are likely just things I personally agree with, since it followed how I steered the conversation.
Source: youtube · AI Moral Status · 2025-12-09T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
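The dimension/value pairs above follow a fixed coding schema. As a minimal sketch, a coded record can be validated against the value sets that appear in this document (the allowed-value lists below are assumptions inferred from the examples shown here, not the tool's actual schema):

```python
# Allowed values inferred from the codings shown in this document;
# the real coding schema may include additional categories.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"mixed", "fear", "indifference", "resignation",
                "approval", "outrage"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the schema."""
    return [(dim, record.get(dim)) for dim in SCHEMA
            if record.get(dim) not in SCHEMA[dim]]

# The coding result shown in the table above.
coded = {"responsibility": "none", "reasoning": "mixed",
         "policy": "none", "emotion": "mixed"}
print(validate(coded))  # -> [] (all values fall within the schema)
```

A record with an unknown value (or a missing dimension) is flagged rather than silently accepted, which is useful when LLM output occasionally drifts off-schema.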
Raw LLM Response
[
{"id":"ytc_Ugyuk1hBtKCsoVIMlGV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCMM_nHx7vx3CUi4B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVveglUuOdEOPDz0Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxX7TxDaYQ34a1_0RB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxtOHpYkiOjd13ruUR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzysJ0DzXsAajmE7B54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwAYhjcm3oJXCmDcaR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxqlRre80WFcsF3yyF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzNGyLesXo-3GaVwj94AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwRHE4F_qhgUGRtXdJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
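The lookup-by-comment-ID flow above can be sketched as follows: the raw LLM response is parsed as a JSON array and indexed by comment ID. The IDs and values in this sketch are hypothetical stand-ins in the same `ytc_` format, not real records:

```python
import json

# A raw LLM response in the shape shown above: a JSON array of coded
# comments. These two records are hypothetical examples.
raw_response = """
[
  {"id": "ytc_example1", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID,
    so a single coding can be looked up the way the UI does."""
    records = json.loads(raw)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_example1"]["emotion"])  # -> fear
```

Keying on the `id` field makes the "Look up by comment ID" inspection an O(1) dictionary access instead of a scan over every response.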