Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
You should watch a video on the mathematics of how these models actually work. I think you would change your mind. They are much more limited than you would think. Here is an example. Every time you give a query to a large language model, the models have to basically reread the conversation from beginning to end. The models do not experience a lineage of "thought" In the same way that we do. Another limitation is explaining their behavior. It is impossible for a large language models to explain its behavior because they have no internal experience. Asking a LLM to explain its actions is completely pointless. It can only make inferences about its behavior based on the knowledge about itself, or Large language models in general that it has been trained on. Humans sometimes do this as well.
youtube · AI Moral Status · 2024-08-02T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgwxI6HdQ_lxJBJeO_h4AaABAg.A6jMucGiitHA6vG7tmVskO","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzI9vNCFb7_7L39BZx4AaABAg.A6jHTZd-s8GA6jIJkUYurl","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugxk9RE_jALJbWunvMV4AaABAg.A6jA_XGpqcdA6rwqjBZP0y","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugz7Rs97umMogK8EgVV4AaABAg.A6iOsUn2I_1A6umeXW71qP","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgzKmgUMLSZWaAr6Kqd4AaABAg.A6hEFZjxCQQA6qxWKt8qhs","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxRZJypkq20tAb5DY54AaABAg.A6gu7qNKgUYA9-13-kl_lk","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgyALY41uEkpjPfyF8F4AaABAg.A6gKOP0TiDlA7gXLyrMvjZ","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugy9KghU-5kZC4bmmjF4AaABAg.A6fDWfRz9jGA6ikRmVP-7u","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxzrawJmQgSJEEFbKh4AaABAg.A6eHu1MlFz-A6oPsQVFTep","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwauuYOynirCXd5IEJ4AaABAg.A6dPGSwTf5lA6dbdtdZolj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
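A raw batch response in this shape can be parsed and sanity-checked before the codings are stored. The sketch below uses Python's standard `json` module; the allowed category sets are an assumption inferred only from the values visible in this sample, not the full codebook:

```python
import json

# Assumed category sets per coding dimension, inferred from the sample
# response above -- the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "liability", "regulate"},
    "emotion": {"indifference", "approval", "fear", "mixed", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id' field")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response, for illustration only.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
coded = parse_codings(raw)
print(coded[0]["emotion"])  # indifference
```

Validating at parse time means a malformed or hallucinated category from the model fails loudly with the offending comment ID, instead of silently entering the coded dataset.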