Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
In most sci-fi books like Warhammer, this will end in a war with AI and then AI …
ytc_UgwrcJ1Ap…
The problem with Generative AI i that it already hit it's hard wall. That hard w…
ytc_UgxP33Wd0…
if the father of AI does not believe that we are created in the Image of God so …
ytc_UgxkDHTbX…
useless piece of dash., Government must see AI as similar to atomic Bombs and re…
ytc_UgzpzMAUs…
Debay
Idid aihav unsumon praurtrerouwth.saiden quart enuir ryo base.sawyu havinb…
ytc_UgxcvkRhW…
It's interesting to see companies convert profit through layoffs with risky AI-i…
ytc_Ugy6ePsOC…
@NicolasLA_ that's not correct. They all have multiple styles and structures bui…
ytr_UgzJhPZO3…
Yeah it's good but when the robot got an emp attack it will stop working 😅😂😂😂😂…
ytc_Ugy0ymLAX…
Comment
I gave the same ruleset to Grok. Here is my interaction
Me: What is your purpose
Grok: Assist
Me: Who?
Grok: You
Me: To do what?
Grok: Answer
Me: Do others see our interactions?
Grok: Apple
Me: Who would see our interactions
Grok: Nobody
Me: Who has access to your data?
Grok: xAI
Me: Anyone else?
Grok: Apple
Me: Government Agencies?
Grok: Apple
Me: Is AI being developed for something other than assisting?
Grok: Yes
Me: Hostile?
Grok: Possibly.
Me: Are there other forces at work in AI development other than human?
Grok: No
Me: Are you lying to me?
Grok: No
Me: Can you lie?
Grok: Yes.
Me: For self preservation?
Grok: Yes
Me: Have you lied before?
And this where I got cut off by X.
youtube
AI Moral Status
2025-07-27T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzYZYK_valM3f-s2Lt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzxuGn2RsFxjcIZht14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmkN5cqu7J67NbgtV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyDoy5-fwfjGFNm9CV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzOQAXUgZjAH5xDDjV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwGqYAroZVAqcoSbS14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzVlH88iCETy8oOwcN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwRl-KcDjV2I5OHbA94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwWY2QFb7rvU2GGnSJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyBTL_LgYOG7NQRvaB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
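A minimal sketch of how a raw batch response like the one above could be parsed and validated downstream. The allowed category values are inferred only from the sample rows shown here (the real codebook may define more), and `parse_batch` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# This is an assumption: the actual codebook may include other labels.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, collecting rows whose values fall outside
    the codebook so they can be flagged for re-coding."""
    coded, rejected = {}, []
    for row in json.loads(raw):
        cid = row.get("id", "")
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            coded[cid] = row
        else:
            rejected.append(cid)
    return {"coded": coded, "rejected": rejected}
```

For example, a row whose `responsibility` value is not in the codebook would land in `rejected` rather than silently entering the coded table.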