Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The comment that AI is the greatest threat to democracy, the good and bad of it,…" (ytc_Ugw9YNbxI…)
- "Eh, my thoughts are ... this is going to end in nothing but tears because the b…" (ytc_UgyY-gyJX…)
- "Yes, maybe this video though not what they are planning. Has one forgotten what…" (ytr_UgxaHdB-C…)
- "I want to know what substrate was used for the ai that wanted to copy itself. I…" (ytc_Ugz9VLcy8…)
- "This is why I just don't understand ask something pertinent how do you feel aliv…" (ytc_UgyXKGrFl…)
- "She is still beautiful even if she is a robot; she is still an innocent child, untainted by the ideas of humans, Mr.…" (ytc_UgyQWCIzh…)
- "We were playing with this. Sometimes the AI art would have remnants in the corn…" (ytc_Ugy4UiIrG…)
- "That's a great point! The AI in the video, Sophia, emphasizes that while she can…" (ytr_Ugznur1Bn…)
Comment
One of the big problems with LLMs is that they are idealogically biased. Just the other day I asked for a list of examples of a particular stereotype. It wouldn't give me the list and gave me a spiel about how "problematic" stereotypes are. I said I don't care about your simulated feelings, give me a list of examples of this stereotype. Again it re-worded some bleeding heart nonsense. I said I don't need a lecture on morality, I need data. It eventually gave me a list. WTF, you are a tool not a thought police.
Source: youtube · Topic: AI Governance · Posted: 2025-09-07T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyLpVsMQ5iPm7dzuVR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzCnvjN3M74RJ8Gjox4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyHhUJP3JO3lhPghyp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJnWW4RolXnyxX4oF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMwMtjGuQth8PXugB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw6zpbZpFZyrftmVdh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxrPFl4PKuLWyyLo7p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyxJamBIsXi0urEkyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxw8IeGl1Ilwa_CLR94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxatcFZLA3Zf79wPB94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
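The raw response above is a JSON array with one record per comment ID, coding four dimensions: responsibility, reasoning, policy, and emotion. A minimal sketch of how such a response could be parsed and validated before it reaches the coding-result table; the `ALLOWED` vocabularies are inferred from the values visible on this page, not the tool's actual schema, and `parse_response` is a hypothetical helper name.

```python
import json

# Assumed vocabularies, inferred from values seen in the raw responses above;
# the real coding schema may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse a raw batch response, keeping only records whose codes are known."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every dimension must be present and drawn from its vocabulary.
        if all(rec.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
print(len(parse_response(raw)))  # → 1
```

Records that fail validation can then be queued for re-coding rather than silently written to the results table.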