Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Afraid AI will kill is? The Bible knew it all along. „When they have finished t…" (ytc_UgygyYDKq…)
- "We have reached a point where we mimicked humans so accurately, that Ai also bec…" (ytc_UgzTGWXmM…)
- "@ was hoping for more than just rage bait but congrats on my 2 replies I guess.…" (ytr_UgxMbZ-Js…)
- "@onlyguitar1001 Are you saying that AI and robotics will change human nature? …" (ytr_UgzPyroQN…)
- "its because with ai you get so much more attention cuz people are dumb, there we…" (ytr_UgyCSZaAl…)
- "Ykw, good for you. Your arguments valid, if it makes you happy it makes you happ…" (ytr_Ugy0-y8Tu…)
- "If you show this to people in the 90s, they will said that this is a prank, sinc…" (ytc_Ugyy3hnaQ…)
- "i can't imagine house building mobile welding, mobile tire service, plumber, ele…" (ytc_Ugw-mVrGx…)
Comment
The issue is that these are often cherry picked results, and are far from the average. For instance, when trying out gpt4o lately, even when it gave me sources and references to articles and links, they aren't anything to do with the response. Secondly, the models are known to perform well on standardized tests, even if they haven't seen the exact test in the training data, if the curriculum is available and the information is widely available, then they would still have a lot of context on how to answer the question correctly.
youtube · AI Harm Incident · 2024-05-31T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
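A coding record like the one above can be sanity-checked against the codebook before it is stored. This is a minimal sketch; the allowed value sets below are only the values observed on this page, and the real codebook may permit more (the `ALLOWED` and `validate` names are illustrative):

```python
# Allowed values per dimension, as observed in this page's codings.
# ASSUMPTION: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the allowed sets."""
    return [dim for dim, values in ALLOWED.items()
            if record.get(dim) not in values]

# The coding shown in the table above passes cleanly.
coding = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "none", "emotion": "mixed"}
print(validate(coding))  # → []
```

An empty list means the record is well-formed; any returned dimension names point at values the model emitted outside the expected categories.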
Raw LLM Response
[
{"id":"ytc_Ugydut7gRuUSpcDD7Qt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxcEpRQ-CZ0fyIktXp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxPiSiWj-O2QsuiQ_h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2VMygv9EGzk0tgid4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwFAfYqHm4RowZey2t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxujPAAka2q7HOYER94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyXHrPKQw5ot92xnvR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwoRMU4neec6QGWJIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwCf7v0utqAApG2ekB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_GALp9O-msg41hIZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
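The raw response above is a JSON array, so it can be parsed and indexed by comment ID, mirroring the "Look up by comment ID" lookup. A minimal sketch using two rows from the array verbatim (the `codings` variable name is illustrative):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_Ugydut7gRuUSpcDD7Qt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwoRMU4neec6QGWJIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# Index by comment ID so a single coding can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw)}

record = codings["ytc_UgwoRMU4neec6QGWJIl4AaABAg"]
print(record["responsibility"], record["emotion"])  # → developer mixed
```

Because each `id` appears once per response, a dict keyed on it gives constant-time lookup for any coded comment.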