Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
What is truth? Come-on...garbage in garbage out. I've dueled with AI and beat it with the right questions. AI conceals direct answers to truth by filibustering with non sequiturs. It's exactly like a biased human response but faster. I'm guessing fast stupid is better than slow stupid. It took a host of questions to get AI to admit a particular substance had no clinical evidence that indicated it was addictive nor caused cancer. AI plays on words, like addiction and dependence as if they mean the same, for which they are not the same thing. People are dependent on their coffee in the morning but they are not addicted to it. Try it dueling with AI using the right questions and you eventually have AI contradicting itself.
youtube
AI Governance
2025-10-04T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxEZJYi4a2BWKJAjgt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxnG5gXViY386G_8R94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwqVtg2vo3iH-vAsop4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyrxrHZbClXZy4gFFB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyyKiEN1shwz6zIRYh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwl2K7AddMt-avVhtV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw7fzwT3bsxNjVO1_h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwoJIsMtm0TDBwCImN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzFaZF5LZ3jgA2NyTt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwc38rYWTteivJJ13N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
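The raw response above is a JSON array in which each element carries one comment's coding across the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be indexed by comment ID for lookup — the helper name `index_codings` is hypothetical and not part of any tool shown here; the sample data is an excerpt of the response above:

```python
import json

# Excerpt of the raw LLM batch response shown above (two of the ten rows).
raw_response = '''[
{"id":"ytc_UgxEZJYi4a2BWKJAjgt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzFaZF5LZ3jgA2NyTt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and key each coding dict by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_UgzFaZF5LZ3jgA2NyTt4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # prints "ai_itself mixed"
```

Note that this assumes the model returned syntactically valid JSON; in practice a production pipeline would also need to handle malformed or truncated responses.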