Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Taco Bell started using AI at its drive-throughs. It's a lot easier to refuse t…" (ytc_UgxAlDIl6…)
- "Yall knew it was coming... I've been saying it for 40 years. Now you'll have to …" (ytc_Ugw_fpsDM…)
- "Easy, just disclose if you use AI or not? Why is this a problem in 2025? It's no…" (ytc_UgxOmmdaM…)
- "well it was bound to happen eventually... maybe this is the start of people waki…" (ytc_UgwUXoXj-…)
- "Railroads last 100 years, fiber optic cables last 50 years, AI Data Centers last…" (ytc_UgzyjZqM3…)
- "Do you not understand the amount of behavioral issues and mental health issues t…" (ytc_UgyyKbRrY…)
- "The thing is, is that the robot kept goin for him even when he was down. Isn't t…" (ytc_UgwqEKyiE…)
- "ai is awful no matter how “neat” it looks cause there’s no shot and no detail in…" (ytc_UgwTRev7w…)
Comment
> When do we become accountable and stop being manipulated by the ideal. That technology is for our benefit, and we do not recognize the need for control safeguards and limitations.
> If we as people for centuries, have had a disregard for other human beings very life. What makes us think that artificial intelligence we'll see us as anything less than the dust beneath their feet or necessary.
> We can no more control the emotions of ourselves, let alone the actions of a Artificial intelligence being Whom we have given all control. If the creators of this technology think for one moment they are necessary, they are merely Pawns in the hands of greedy , Self loathing , destructive people who are poisoned . What fools!!!
youtube · AI Responsibility · 2025-07-24T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxJG3T1S4U2_liIfUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyeZyuya9W4j7GNunJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxVJXuYRAPDwJjxBMh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxlC7Hks6onRot2b9Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyS41CJBdrEDGg6YMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3mqOXEyw6qWMqyIV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxBhaQlcizyw2G-JOl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzgWvS2qzCwX-kNdyx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfL4NMK2R8k5QJdJ94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxsCYRhp6Ab_lfaTr14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
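The raw response is a JSON array with one object per coded comment and four coding dimensions. A minimal sketch of how such a response can be parsed and validated before it is stored (the codebook below is inferred from the values visible in this sample and may be incomplete; the function name is hypothetical, not the tool's actual API):

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation"},
}

def validate_response(raw):
    """Parse a raw LLM response and keep only rows whose values are in the codebook."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # A row passes only if every dimension is present and in its allowed set.
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_response(raw)))  # 1
```

Rows that fail validation can then be flagged for re-coding rather than silently written to the results table.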