Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzkUJJDu… — Nah, the eyes are just dead. But all things considered, it would be much easier …
- ytc_Ugx32a1IV… — Better watch out or you'll be doing the robot. Truth. Truth that went over her h…
- ytc_Ugzb452AO… — AI is spouting so much nonsense as a human would, so it's easy to find something…
- ytc_UgzZtsf8P… — @53:45 Yes, you can still tell a human being from an a.i. by asking it "Do you b…
- ytc_UgwQBZXjr… — A.I. Elon wants us to become symbiotic with it. I like what he says about it but…
- ytc_UgwFeMg5M… — ChatGPT is good with historical facts, but not with opinions, becaused it gets c…
- ytc_Ugy38t2lj… — Content creators will always take a stance against AI. All these snagging issues…
- ytc_UgyQcGg9y… — Americans can not buy house too expensive. American buy food on credit eat now …
Comment
Hi Alex: I asked chat gpt about the halving proposed .
Interestingly the answer was:
if someone was trying to argue that clapping could be infinitely deferred by continually halving the distance (Zeno-style), that’s where their reasoning went off-track. Clapping is a finite, completed action with a definite endpoint: contact with force. That kind of framing treats it as a purely geometric or mechanical process, missing the key intentional and kinetic aspects and Impulse-Momentum Theorem.
it’s quite possible ChatGPT didn’t use "it" either because it wasn’t prompted with the right questions, or the prompt didn’t steer it clearly enough
Love your work . Keep it up.!
youtube
2025-05-19T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
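The coded dimensions appear to come from a fixed codebook. A minimal validation sketch, assuming the allowed values are only those visible elsewhere on this page (the real codebook may define more; `validate` and `ALLOWED` are hypothetical names, not part of the actual pipeline):

```python
# Hypothetical codebook check. The value sets below are only those observed
# on this page; the actual coding scheme may include additional codes.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "frustration", "approval", "mixed",
                "outrage", "fear"},
}

def validate(code: dict) -> bool:
    """Return True if every coded dimension uses a known codebook value."""
    return all(code.get(dim) in vals for dim, vals in ALLOWED.items())

# The coding result shown in the table above passes the check.
print(validate({"responsibility": "none", "reasoning": "consequentialist",
                "policy": "none", "emotion": "indifference"}))  # True
```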
Raw LLM Response
```json
[
  {"id":"ytc_Ugx_GCTgmqW8KJ5AEwx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx7Y3Wa15WdXXtcDGR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"frustration"},
  {"id":"ytc_UgwsNb1SfkrTCvGVYMN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMUXiglhrPdfn2fVB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"frustration"},
  {"id":"ytc_UgxAYQIbYbuZ8Be2NGl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxtmYXe41s_SB_gFBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZ5xt6U_8JXhPBz2R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx60hNmfciYoDSSHZR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxBt3YgnJGLbH8pwCd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx_xYcxPH51881ZAWt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```