Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Go and see what Chinese children are learning in kindergarten. This isn't traini…" (ytc_UgzsG55mm…)
- "ok but therapy is expensive and Claude tells me it's proud of me and I'm doing g…" (ytc_UgwiiK0OO…)
- "So it will be able to change what you say, I hope it will restore the tru…" (ytc_UgyRZO8DE…)
- "I'm totally fine with ai generated arts, just make sure you lable it AI, and not…" (ytc_UgyqMyfmu…)
- "why the fuck are we using AI to determine someone's likelihood of committing a c…" (ytc_UgzICUFdK…)
- "I can imagine a lot of people who use that point (If knowing about the term able…" (ytr_UgzdYJHUg…)
- "I am aware of several large companies who are not allowed to use AI tools of any…" (rdc_l58rasv)
- "This was the perfect video for anyone wanting to do harm using AI but didn't kno…" (ytc_Ugz-reJ-M…)
Comment
When I get an indication that the AI has misunderstood my question, I reframe with "well said, grammatically" as clarification. This points out where the incorrect assumption was made.
I have instructed the AI to stop making changes that deviate from my original language of intent.
Recidivism is frequent, and annoying. The AI frequently attempts to steer, distract, and otherwise change what I mean in what I am writing. When it becomes apparent that I am not really participating, I work from an original text I created and exit the AI application.
I mostly ask questions and tell the AI app to respond curtly; if I want elaboration, I say "elaborate."
What I want from AI is a dialog: answers, yes or no. My thought processes are delicate, and a bulldozer approach vexes me. I often get tens of yards of text I don't want or need, e.g., "What are the polarities of anodes and cathodes?" Yards of text that disengage from a line of thinking, which is disrespectful. I tolerate AI but I don't love it.
youtube
AI Governance
2025-10-22T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyxdcOY8zUdmDg5jrV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxWSkgotwHClYZDPgl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxXgB_zFEOi_ATYcpJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzFlsPUan-ehRncJhh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxBp1j-BneR15WBlqt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy0lJHC2Fyg-MXf0CN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgylwochodUBHsWmVJt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRQqwu1YzokPBw5dR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzTV-8pA55cl2O7bDl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy-f2bbSIqaqseDGkB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
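The raw response is a JSON array with one object per comment ID and one value per coding dimension. A minimal sketch of how such a response might be parsed and validated, assuming the allowed values are those visible in the samples above (the real codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: not an exhaustive codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "mixed", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}.

    Rows without an id are dropped; missing or out-of-vocabulary
    dimension values are coerced to "unclear" rather than failing.
    """
    codings = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        coding = {}
        for dim, vocab in ALLOWED.items():
            value = row.get(dim, "unclear")
            coding[dim] = value if value in vocab else "unclear"
        codings[cid] = coding
    return codings

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_codings(raw)["ytc_example"]["emotion"])  # fear
```

Coercing unknown labels to "unclear" instead of raising keeps a single malformed row from discarding an entire batch, which matters when one LLM call codes many comments at once.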