# Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking up its comment ID or by browsing the random samples below.

## Random samples
- `ytc_Ugw67T3C-…`: "Too bad the guy that county voted for passed a 10 year ban on regulating AI…"
- `ytr_UgwUVi6Mo…`: "@xenn4985 eventually the Twitter bots will say something that gets their account…"
- `ytc_Ugz6fRVKI…`: "Yes, however, just like humans are being manipulated and controlled by the elite…"
- `rdc_dy4c8lm`: "yeah so if I create a robot to kill someone by its own will he would be the kill…"
- `ytc_UgxqcovUM…`: "Knocked his teeth clear out smh you’re fucking up ai this is where something ins…"
- `ytc_UgwmXOpD5…`: "Count fingers and toes in pictures. Sometimes a hand has 5 fingers - no thumb. T…"
- `ytc_UgzrEfk14…`: "Josh, these are not \"smart\" machines, they are just prediction machines. They ma…"
- `ytc_UgyrrQk65…`: "Yes, AI I knew that we would beat you. You might be smart, but we are smarter th…"
## Comment

> AI's don't have drives. He knows this.... he spends too much time reading predicted text and has deluded himself. It still is fancy autocomplete. Yes, there is other stuff bolted on top. Why is the AI saying how a patient reacted to epinephrine? That's nonsense without being able to observe the patient. Yes, it can guess, but so can I and so can a doctor. That's what autocomplete is, it's fundamentally detached from the physical reality of the situation. Solving math Olympiad problems is almost certainly training data contamination. It has been demonstrated that it takes very few examples to poison AI training for whatever output you desire in a narrow area. If AI were that great at math then it would be tremendously useful to me, and yet it isn't. It sucks hairy monkey balls at basically all math related problems I ask of it. Why? Because the problems I ask are novel and apparently the Olympiad problems were not this time around, they entered the training data somewhere to be regurgitated.

*Platform: youtube · Video: AI Moral Status · Posted: 2025-11-02T14:1… · ♥ 1*
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[{"id":"ytc_UgwoZ_ObFGWO8kS0MN94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxm8ymEkFJfTdvizG14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwXu4ZoKd5ie0rGLkp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxi90pefiwO-3ZJ75N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyMvq0VERxFUxZ9n5x4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzraAd2k9OgS67G7Ct4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwhprLqk9khERGYPCx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwccqDfXKUFdMg788V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNNNjj3Wgf80ULMJ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzYqiRM8kumsn5QPgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
```
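The model returns one JSON array per batch, each record carrying a comment ID plus the four coding dimensions from the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for ID lookup — the function name `index_codes` and the validation step are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Two records copied from the raw response above, abbreviated for the example.
RAW_RESPONSE = """[
  {"id": "ytc_UgwoZ_ObFGWO8kS0MN94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxi90pefiwO-3ZJ75N4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and key each record by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        # Keep only records that carry all four dimensions; a malformed
        # record (hypothetical handling) is simply skipped here.
        if "id" in rec and all(dim in rec for dim in DIMENSIONS):
            coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgwoZ_ObFGWO8kS0MN94AaABAg"]["responsibility"])  # ai_itself
```

Keying by comment ID is what makes the "look up by comment ID" view above cheap: one parse per batch, then constant-time retrieval per comment.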