Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI is the devil in sheep's clothing . (UNEthical use of human inventions) False…
ytc_UgxUPRBjG…
Forget about "safe and beneficial", AI is going to be used to surveil, monitor …
ytc_UgyKXdla0…
Basilisk thought experiment, the first people who manage to successfully make an…
ytc_Ugxd-qdh3…
I feel like people are hating way too much, yeah selling AI as NFT's is dumb but…
ytc_Ugyix-BSM…
Ai takes all jobs- people can't survive and the entire economy collapses- people…
ytc_UgzV-LzeX…
I hold a lot of vitriolic hate towards AI-generated anything, so you can guess w…
ytc_UgwEi5sP0…
The "Self Kill" switch has not been implemented yet. When a certain set of algor…
ytc_UgxIkcJxc…
They really should be including ALL of the top AIs in their study, not just GPT …
ytc_UgzaqEDMC…
Comment
I don't remember with which AI, probably several, and I always won, because facts are facts. Once only it happened it recognized I was right (by not telling anymore I was wrong). In another instance, it simply gave up replying. And in yet another instance, the IA was looping telling the same mantra, meaning it was in complete cognitive dissonance. It was on chatgpt, claude, grok and gemini.
You should try with Grok.
youtube
2025-04-18T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzc93GbSnG0fptsYD94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyZFD11ZvoHV1wCDFF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyAEa_5jZaoZtGna-54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzSK2m3-y8bUmGr9Ex4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzPXCFP-PmbzafnUx94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz22YpCkeqrEb9Qk7B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy0bQzaOkdS1_QYXZh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxAcl9XcWPIOLP9WDZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxN4IDYgAneLf7AXWh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwbAlmiFQowkp0wRzh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
```
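The lookup above (raw JSON array of coding records, indexed by comment ID to populate the dimension table) can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, and the single-record input string is an assumption for brevity.

```python
import json

# One record excerpted from the raw LLM response above; a real response
# is an array of many such objects, one per coded comment.
raw_response = """[
  {"id": "ytc_UgzPXCFP-PmbzafnUx94AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "unclear",
   "policy": "unclear",
   "emotion": "approval"}
]"""

# Index the parsed records by comment ID so a coded comment
# can be looked up directly, as in the "Look up by comment ID" view.
codings = {row["id"]: row for row in json.loads(raw_response)}

record = codings["ytc_UgzPXCFP-PmbzafnUx94AaABAg"]
print(record["emotion"])  # -> approval
```

Each row of the "Coding Result" table is then just one key of the matching record.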