Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The only way to save our lives is to destroy the AI period, if not this world is…
ytc_UgwyN_xPs…
That is not going to be the last time someone says 'Wait, how many seconds was t…
ytc_UgzebO8_e…
We understand that interacting with AI can sometimes feel eerie. If you're inter…
ytr_UgxlJQ7r2…
“If this all goes to plan, if this all goes to pass, where news is now we don’t’…
ytc_Ugw6WLQro…
11:01. Effort from all sides? Is Chat GPT deliberately ignoring the facts about …
ytc_UgyfNUr4o…
I see this guy has a lot of knowledge about AI but almost none about humans.…
ytc_UgxBHVyjj…
Many companies prevent their coders from using ai because of security of their c…
ytc_UgyUBAZWU…
AI will be the ultimate collectivisation, communism was a precursor for what is …
ytc_Ugx69_Ay1…
Comment
Saying ChatGPT isn't smart, it is something much weirder
is kinda a semantics game imo
foundation models are smart in unexpected, not fully understood ways
that is why we have terms like "alignment problem, black box, hallucination, etc."
we don't have a clear, detailed account of the rules AI operates with
we have a fuzzy general idea based on how well they perform on benchmarks
sure, pointing to a set of rules and saying "see, rules are not actually smart"
misses the point that we don't actually know those rules precisely, because the rule space was explored by a digital simulation of what we typically consider the biological source of intelligence, a neural network
sure, I will concede "AI doesn't understand things the way an organic intelligence or a human would"
that is more than fair
but saying something like "AI doesn't understand anything at all" is obviously not valid, because if it were even remotely true then AI would not be useful at all either, by definition
it is more accurate to say we don't know what AI understands, except that it is very likely very different from what we understand
which again brings me back to the point about relevant terms like "alignment" and/or "black box"
youtube · AI Moral Status · 2025-11-18T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
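The coded dimensions above form a small fixed schema. A minimal sketch of validating one coded record against such a schema follows; the category sets here are inferred from the values visible on this page and are assumptions, not the project's definitive codebook:

```python
# Sketch: validate one coded record against an ASSUMED codebook.
# The category sets below are inferred from values seen on this page;
# the real codebook may include other categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "approval", "resignation", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty if valid)."""
    problems = []
    # Comment IDs on this page start with ytc_ (comment) or ytr_ (reply).
    if not record.get("id", "").startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id format: {record.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

print(validate({"id": "ytc_UgwIAaPPRLXE4Fn4Sqh4AaABAg",
                "responsibility": "none", "reasoning": "unclear",
                "policy": "unclear", "emotion": "indifference"}))  # → []
```

A check like this catches the common failure mode where the model invents an off-codebook label, before the record reaches analysis.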
Raw LLM Response
[
{"id":"ytc_UgwIAaPPRLXE4Fn4Sqh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugys6Qv-XehHwSYnnQp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzx5nPOWSs-WxwErs94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBeTJFNwF3SQq6Cl54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxDPgN6R-_bVSvuJzR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwqOajLar9znY6abtl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKR9cn5lwmHfefHth4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxZrgBzrlT0P2OR8Pp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyVCWbBcG5RT6j5Ge94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzlT0d2Hqa_Tt3bYTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
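The raw response above is a JSON array of per-comment codes, so "look up by comment ID" reduces to a dictionary lookup over the parsed array. A minimal sketch, assuming the response parses cleanly as JSON (real model output may first need stripping of code fences or trailing text); the three records are copied from the response above:

```python
import json

# Raw model output, as shown above (truncated to three records here).
raw = '''
[
 {"id":"ytc_UgwIAaPPRLXE4Fn4Sqh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugys6Qv-XehHwSYnnQp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugzx5nPOWSs-WxwErs94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
'''

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
print(codes["ytc_Ugzx5nPOWSs-WxwErs94AaABAg"]["emotion"])  # → outrage
```

Keeping the raw string alongside the index preserves the audit trail this page is built for: any coded value can be traced back to the exact model output.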