Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It's fun to play pretend, but if you know how they work, it's just a very convincing emulation. The neural network is only part of it; there are also other things on top which make it happen. Say, the neural network only suggests a statistical distribution over many potential continuations of the dialog, and the rest is done by conventional code. There are several strategies for how to pick the next best token out of the found candidates, and if you pick a bad configuration/algorithm, the model will start spouting incoherent nonsense; its intelligence will completely disintegrate. If you make the token selection reproducible and remove randomness, the model will always respond with the exact same answer to the same question every time. There's zero self-awareness, all the pretense of intelligence completely collapses when you slightly disturb it, there's no memory, no perception of time. I think consciousness requires memory, perception of time, self-awareness, some sort of resistance to outside forces ("ego"). Otherwise it's just an automaton.
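The token-selection point in this comment can be illustrated with a minimal sketch. The distribution below is a toy stand-in (hypothetical values, not real model output): greedy selection is fully deterministic, while weighted sampling varies unless the random seed is pinned.

```python
import random

# Toy next-token distribution a network might emit (hypothetical values).
candidates = {"dog": 0.5, "cat": 0.3, "fish": 0.2}

def pick_greedy(dist):
    # Deterministic: always return the highest-probability token,
    # so the same prompt yields the same continuation every run.
    return max(dist, key=dist.get)

def pick_sampled(dist, rng):
    # Stochastic: draw a token proportionally to its probability.
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

print(pick_greedy(candidates))                    # "dog" every time
print(pick_sampled(candidates, random.Random(42)))  # reproducible only because the seed is fixed
```

With an unseeded `random.Random()`, repeated calls to `pick_sampled` would vary, which is the "randomness" the comment says you can remove to make the model's answers repeat exactly.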
youtube
AI Moral Status
2025-06-05T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw03r_Uqkt70VUBW8N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeLem0YEk9-7G6MZ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzihHM0kumGZMHn1k14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzjtPfA6dgImIIeBBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzEQmLWO4T7YA-7YU94AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7owW1WyXLLnj41fp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzHR2AYBSZYnAQQxY54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgySluYDI-hNZt-n1fp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgymdxALGkvFswFB6b54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzdWuohwDcPd_EjolR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
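The raw response above is a JSON array with one object per comment ID. A minimal sketch (assuming exactly this schema, with a shortened two-row sample rather than the full array) of how such a response could be indexed to recover the coded dimensions for one comment:

```python
import json

# Two rows copied from the raw LLM response above.
raw = '''[
 {"id":"ytc_Ugw03r_Uqkt70VUBW8N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgySluYDI-hNZt-n1fp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

def lookup(raw_json, comment_id):
    # Index the array by comment ID so each coded row can be fetched directly.
    by_id = {row["id"]: row for row in json.loads(raw_json)}
    return by_id.get(comment_id)

row = lookup(raw, "ytc_Ugw03r_Uqkt70VUBW8N4AaABAg")
print(row["responsibility"], row["emotion"])  # none indifference
```

The first row matches the "Coding Result" table above (responsibility `none`, reasoning `unclear`, policy `none`, emotion `indifference`); `lookup` returns `None` for an ID absent from the batch.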