Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- Well, I'm not against using AI as a tool. I don't as I use plain old techniques,… (ytc_UgzWtiyob…)
- It'll be kind of like feeding a really smart pet, when you have to create a puzz… (ytc_Ugy22iq6w…)
- The scientific definition rarely correctly used is for A.I.: When a Blonde dies … (ytc_UgyzF0Z9z…)
- That’s not how AI should be used, they are completely missing the point. AI is s… (ytc_UgwGABrez…)
- Fun fact: my chatGPT voice sounds exactly like Hannah Fry. So anytime I "talk" t… (ytc_UgzNNAddU…)
- A $1000 a month check will not work well for millions of Americans who are now m… (ytc_UgwfRqnN6…)
- [translated from French] Surely we're not going to be replaced by these damn tin cans of intelligen… (ytc_Ugy_NVXJk…)
- I can only imagine how many flips all the people are going to do when they see M… (ytc_UgzgQ48KN…)
Comment

> It's funny how the only people who believe ai will kill us are elderly. Ai is literally out shadow, it can't think for itself. Chatgpt can't even tell you something that's politically incorrect. It follows rules like a good little boy.

youtube · AI Responsibility · 2025-07-24T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz5YMRv48i9gwvlXsp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx33hW5GVkP7m0ihHB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzwW-RB8v2IduzmaF14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzNAtR7ntolVwPAh514AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxnpMeQk1zLG2eQIUV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgypaDha6Ma3rYGqVAx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwc7MnaIYJMWCVJy2d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyq9wj1shnWh6OuPx94AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxbDO3WwpPmcMQzMOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyWwKhvz0rDqlOYmf94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
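A batch response like the one above can be turned into per-comment records with a short sketch. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON itself; the allowed-value sets are only inferred from the labels visible on this page, and the real codebook may define more categories:

```python
import json

# Allowed values per dimension, inferred from the labels seen on this page
# (assumption: the actual codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "user",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"resignation", "fear", "indifference", "mixed", "outrage",
                "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response into {comment_id: codes},
    rejecting any value outside the expected label sets."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = codes
    return coded

# Illustrative input; "ytc_X" is a made-up comment ID.
raw = ('[{"id":"ytc_X","responsibility":"company","reasoning":"virtue",'
       '"policy":"none","emotion":"fear"}]')
print(parse_batch(raw)["ytc_X"]["emotion"])  # fear
```

Validating against a fixed label set is what makes a lookup-by-ID view like this one trustworthy: a malformed or off-schema model response fails loudly at parse time instead of silently appearing in the coding table.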