Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples
- "Never heard of this but it seems to me the self driving car should of stopped an…" (ytc_UgyqvfYf4…)
- "I really wonder how much longer these companies can keep pretending that this is…" (ytc_UgzhJKoVq…)
- "this is kind of world respect putin wanted." (rdc_jrzwz2j)
- "AI centers are just the rich finishing the job of destroying resources and towns…" (ytc_UgzBoKIAv…)
- "I would say the Uber tech failed terriable but as with most accidents several pa…" (ytc_UgxcAqQOb…)
- "If you destabilize society by replacing the human work force with robots you are…" (ytc_UgynWLC4G…)
- "Sometimes, i think these developers program AI to do this to just mess with us c…" (ytc_UgwCYCN2q…)
- "yall were cooked also i reported it "Gemini Chat" LOOK AT THISSSS "Conversation …" (ytc_UgxM0JSmO…)
Comment
The biggest problem is not even comparing to human education level. The problem is that humans pass those tests as a university as a proxy to how they would approach a real problem out in the world. People don't work at a place, where they have to solve test problems. Tests are part of a process to go from not knowing stuff to somewhat knowing stuff (and later in real life to actually start knowing that stuff when thinking how to solve a non-test kind problem at work). If you train the model on tests and so it passes the tests, it doesn't learn how to solve non-test problems. There may be applications of those models where just due to sheer massiveness of the datasets it can pull things you can't guess on your own, but phd is supposed to be able to solve the problem that didn't come up before (well, ideally). That's not what these models do. Saying that the tool can answer some tests is sort of like saying that a high schooler with a good database of the test problems and a search engine can answer a lot of those problems just by finding a matching one and copy-pasting it. Well, sort of (not exactly) because it can actually try to guess reworded problems, but on the other hand, if a high schooler would do that, he'd have a chance to learn from those problems. LLM would not learn from these problems.
youtube · 2026-03-05T14:5… · ♥ 14
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
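For anyone scripting against these records: a coding result like the one above can be captured and validated with a small record type. This is a minimal sketch; the `Coding` class is hypothetical, and the allowed-value sets are inferred only from the codings visible on this page, not from the project's full codebook.

```python
from dataclasses import dataclass

# Allowed values inferred from the codings visible on this page;
# the full codebook may define additional categories.
RESPONSIBILITY = {"none", "company", "ai_itself"}
REASONING = {"unclear", "consequentialist", "deontological"}
POLICY = {"none", "regulate"}
EMOTION = {"indifference", "approval", "fear", "mixed"}

@dataclass
class Coding:
    """One coded comment, mirroring the four dimensions in the table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

    def __post_init__(self) -> None:
        # Reject any value outside the observed category sets.
        for value, allowed, name in [
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} code: {value!r}")
```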
Raw LLM Response
```json
[
  {"id":"ytc_UgyMBdqKmnPcyuWaxoh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKXCyAjZM9_izFrYV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxC_IpbY5uxN2LF67l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5Mh-IFFvXRXQy4NN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZi-XN-VpqNA3OhDZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyyqSqB1yI9d85v2DN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwCvjXcfn7OEQSsil14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKNgosfTsIXiCirxh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzOtWH8qL51ZVJbsCJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx8Wx-iOte8HhmOdj14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
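The by-ID lookup described at the top of this page amounts to scanning batches like the one above. Here is a minimal sketch, assuming each raw response is a JSON array of objects keyed by `id`; the helper name `lookup_coding` is hypothetical, and raw model output is not guaranteed to parse, which the sketch treats as a miss.

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding entry for `comment_id` from one raw LLM response,
    or None if the response is malformed or the ID is absent."""
    try:
        entries = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # raw model output is not always valid JSON
    if not isinstance(entries, list):
        return None
    for entry in entries:
        if isinstance(entry, dict) and entry.get("id") == comment_id:
            return entry
    return None

# Example against the batch above:
# lookup_coding(raw, "ytc_Ugz5Mh-IFFvXRXQy4NN4AaABAg")
# -> {"id": "...", "responsibility": "none", "reasoning": "unclear",
#     "policy": "none", "emotion": "indifference"}
```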