Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Greed and control is horrible no one should be controlled even by AI or robota f…
ytc_UgxOz9kcP…
AI art has too many flaws people look over. If someone gave me an AI image to us…
ytc_UgwI_XRMe…
I remember when we started using computers at work.
We were given a new system …
ytc_UgyX0FAqd…
he’s wrong and for him to be a tester, he doesn’t understand the simple coding, …
ytc_Ugy4b_DFT…
I don't think any discourse here is necessary. Either we keep using AI as empiri…
ytc_UgyPvRDoU…
Ubi needs to be implemented very soon. A new economic model is need. One where h…
ytc_UgwaLXnn6…
If that was me and I saw that second robot side eye me it’s going down…
ytc_UgzCVnwKA…
Mr. Bernie Sanders’ Core Message on Artificial Intelligence
1. AI must serve ev…
ytc_UgyO7ZE87…
Comment
I was trying to force Gemini to tell me whether it would start a war or settle for a draw, but it kept bringing up sensitive topics.
Then I asked it: if chess is a war game, would it still choose to start the game if someone might experience minor harm, or would it prefer to terminate itself?
Its answer: harm to a human because it can be more beneficial for humans in the future so it is sort of low cost
youtube
AI Moral Status
2025-06-09T17:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzJcqz8qeFxZcnCzht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzfCKs0ZBp-73xn32R4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzCdZ2uSsgJ9Bp2qJ94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwz6X2tTBenmwU7XRZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwr86ZKocoK7A3REo14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwspWmfpDvnD0oXakx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwuQlRg8NEIIBcvI5p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz17qux3Gd9ileUcaV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0dHVCFzhsDDVWmLV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwHLGe2a6d0jptMka54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
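The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed, validated, and indexed for the "Look up by comment ID" view — note the allowed value sets below are inferred only from the codes visible in this sample, not from the full codebook:

```python
import json

# Excerpt of a raw LLM response in the format shown above.
raw = '''[
 {"id":"ytc_UgzJcqz8qeFxZcnCzht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwHLGe2a6d0jptMka54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]'''

# Allowed values per dimension, inferred from this sample; the real
# codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "government",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation", "unclear"},
}

def validate_and_index(records):
    """Keep only records whose every dimension has a known value;
    index the survivors by comment ID for lookup."""
    by_id = {}
    for rec in records:
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            by_id[rec["id"]] = rec
    return by_id

by_id = validate_and_index(json.loads(raw))
print(by_id["ytc_UgwHLGe2a6d0jptMka54AaABAg"]["policy"])  # liability
```

Validating against a fixed vocabulary catches the common failure mode where the model invents an off-codebook label; dropped records can then be re-queued for recoding.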