Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "It's going to be brutal, but I've stopped trying to warn anyone because they loo…" (rdc_nt6dusq)
- "The tech titans/ Oligarchs do not care about humanity. They want to destroy the…" (ytc_UgyumiiOQ…)
- "If you create realistic scenery, i don't see any legal issues there unless the a…" (ytr_Ugy-HXFfp…)
- "The true danger of AI videos et al is their impact on our political system. Demo…" (ytc_UgxJg5OTM…)
- "AI may program us humans.... Open the POD bay door HAL...🔴 Sorry Dave I can't d…" (ytc_UgyxOiJX2…)
- "13:00 the bot actually doesnt know what people did in prior conversations in the…" (ytc_Ugw7d2i4T…)
- "Robots will not have rights. They will have laws and guidelines. If AI does be…" (ytc_UggkPXUXj…)
- "we CHOOSE whether we use AI or not, it's not inevitable, we can CHOOSE to reject…" (ytc_UgwIu1nma…)
Comment
If you have any sort of experience in ML, please read and reply to this.
Ironically, this compels me to research how we can build it so that it can indeed become an actual "expert", not a simple probability engine. However, I intuitively think that the current architecture of LLMs cannot achieve that, no matter how much data they're being fed. I believe that data will indeed make the probabilistic calculations more accurate, and hence yield more natural results, but to truly advance this technology beyond such simple processes, I think a new system should be designed from scratch.
Anyway, I have read absolutely nothing on AI so far, so I am the exact opposite of a specialist. I just want to share my oversimplified and intuitive view on the topic.
youtube
2025-11-14T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwZDQ-2nENRwHKmXXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJ6a6yb0LJOCfyos14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxa4L3R6ZuTL-tvJsx4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxFhR6bZ0Yfx-zFXe54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4elqxpntbVge0SQV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzC8_QBcK3MWSup4C14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwunplicShq99W3XLR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxCCo3vZowmEFypBXB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgylyNga8rSxUNvwX414AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgymEcUSVg_K1tHDFUx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}]
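The raw response above is a JSON array of per-comment records keyed by `id`. A minimal sketch of how such a response could be parsed and looked up by comment ID (the `lookup` helper and the shortened IDs are hypothetical; the fallback to "unclear" on a missing or unparsable record is an assumption, mirroring the Coding Result table above):

```python
import json

# Abbreviated stand-in for a raw model response in the format shown above.
raw = '''[{"id":"ytc_A","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_B","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"}]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_response: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding for one comment ID.

    Falls back to 'unclear' on every dimension when the response is not
    valid JSON or the ID is absent from it.
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        records = []
    by_id = {r["id"]: r for r in records}
    return by_id.get(comment_id, dict.fromkeys(DIMENSIONS, "unclear"))

print(lookup(raw, "ytc_B")["reasoning"])   # mixed
print(lookup(raw, "missing")["emotion"])   # unclear
```

Under this sketch, a malformed response (for example, a stray closing character breaking the JSON array) would yield "unclear" across all four dimensions, which is one way a result table like the one above can end up entirely "unclear" despite the raw response containing codes.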