Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below:
- "Yeah, but also its not because ai could steal composition and characters without…" (ytc_UgzSTQ7MP…)
- "I totally agree and think we should also push the development of tools which uti…" (rdc_nt8vapr)
- "AI isnt going to replace jobs. Its going to make email laptop workers even more …" (ytc_UgyISECZZ…)
- "Stop this insanity. Some say the world got ruled by AI for a long time.…" (ytc_Ugw6ppcui…)
- "Ai uses a lot of adjectives, it’s easy to recognise if you look out for them…" (ytc_UgzgYah3X…)
- "Using AI is just covering up who you really are it's best for us to shine bright…" (ytc_UgyKQj77A…)
- "imagine telling an AI to fix a bug. only for the AI come up with a solution that…" (rdc_mrrnbgo)
- "So we’ve really taught AI to be like us and it turns out we don’t want super hum…" (ytr_UgzAFidDe…)
Comment
> Hallucinations are the natural byproduct of inference (approximation). 20 years ago I was reading about neural networks (when I was designing a discrete genetic algorithm) and was a little jealous because NNs could approximate and my genetic algorithm couldn't approximate. But that very 'approximate-when-the-real-answer-is-unkown' is a double-edged sword. In other words, it's the NN version of 'gut instinct'. I noticed also that humans also make lots of inference/approximation error. Humans have just figured out how to mask their inferences and trick each other into an illusion of competence, but we also infer too much and very often.
Source: youtube · 2026-04-02T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
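For downstream use, a coded result like the one above could be modeled as a small typed record. A minimal sketch, assuming only the field names and value sets visible on this page; the pipeline's actual codebook may define more values, and the class and constant names are hypothetical:

```python
from dataclasses import dataclass

# Value sets observed on this page; the real codebook may be larger (assumption).
RESPONSIBILITY = {"none", "ai_itself", "company", "developer"}
REASONING = {"unclear", "consequentialist", "deontological"}
POLICY = {"none"}
EMOTION = {"indifference", "outrage", "fear", "approval"}

@dataclass
class CodedComment:
    id: str              # e.g. "ytc_…" (YouTube) or "rdc_…" (Reddit) prefixes
    responsibility: str  # who the comment holds responsible
    reasoning: str       # style of moral reasoning
    policy: str          # policy stance, if any
    emotion: str         # dominant emotion
    coded_at: str        # ISO 8601 timestamp from the coding run

    def is_valid(self) -> bool:
        """Check that every dimension uses a known code."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```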
Raw LLM Response
```json
[
  {"id":"ytc_Ugz6OOdMfn6y-cyq4aN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwClhTmjvC4_RCcNaF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzC82t813B0PADVMal4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwyC_035_l494Y8DZJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz4Vv7kpGlkANwLfsl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzdCrNVtyGe7ZyGsXx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzA91hdgQqC00T-qsN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugze9WD8OEAk165fGvp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwC6Z0d1-MPT3lg2GV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxfCip9JXNkjxBgol54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
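Since the raw response is a JSON array of per-comment codings, it can be parsed and indexed by comment ID to drive the lookup above. A minimal sketch, using a single-entry example in the same shape as the response; the `index_raw_response` and `raw_response_text` names are hypothetical:

```python
import json

# One raw LLM response is a JSON array of per-comment codings.
raw_response_text = (
    '[{"id":"ytc_Ugz6OOdMfn6y-cyq4aN4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw response and index the codings by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_raw_response(raw_response_text)
print(codings["ytc_Ugz6OOdMfn6y-cyq4aN4AaABAg"]["emotion"])  # -> outrage
```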