Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Small note. It’s not that chatbots change throughout the day, it’s that they use something we call “temperature” that makes them nondeterministic. Temperature is just a knob that tells the model how “loose” it can get when picking the next token. At temperature 0, it always chooses the highest-probability token, so you get the same output every time.

As you turn the temperature up, you’re basically saying “sample from the distribution instead of locking into the top choice.” That introduces randomness. Same prompt, same model, but now the model is allowed to pick from a wider set of plausible next tokens, so the output can diverge run to run. But we can’t use a zero temperature and have deterministic output because multiple tokens can have the same probabilities, so some level of randomness is needed.

On top of this, ChatGPT cranks the temperature to make it more “engaging.” That’s why you can get drastically different responses from one day to the next. It’s literally impossible to get consistency because of the way LLMs work.
Source: youtube · Topic: AI Harm Incident · 2025-11-25T01:4… · ♥ 2
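The “temperature” knob the commenter describes can be sketched in a few lines: divide the logits by the temperature, softmax, then sample. The function name and toy logits below are illustrative only, not any particular model’s API; real implementations work on tensors, but the mechanics are the same.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Pick the index of the next token from raw logits.

    temperature == 0 -> greedy argmax (same output every run when the
                        maximum is unique).
    temperature > 0  -> softmax sampling; higher values flatten the
                        distribution, admitting less likely tokens.
    """
    if temperature == 0:
        # Greedy decoding: always take the highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])

    # Scale logits, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Draw from the resulting categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, 0))    # → 0 (greedy, deterministic)
print(sample_with_temperature(logits, 1.0))  # varies run to run
```

At temperature 0 the argmax branch makes repeated calls identical; any positive temperature routes through the sampler, which is where run-to-run divergence comes from.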
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugz0X_pU25HV0uk0nll4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-GBiwAUkWZPna4GF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx6O7TtGgUVp4wGlzV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxchg2wxE5NLp51Tyd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgydGQ7P8DqWViDaxRx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwIbCx562M6FGUQxOV4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6XJbPQViSp7XD-nt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzMHsXid_dH0wj0IKd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy8UeAey9x13ZwC5Jp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz7On4FtbrQBinPKsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
```