Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This is such a mistake about AI, it doesn't take the average, it takes your input, and then it runs it through cascading probability tables. These cascading probability tables attempt to come up with the best result at each level, so it becomes more like a decernment table. were it selects and rejects previous conclusions until it comes to the best fit it can to produce the closest to the most relevant single elements in the training data.
This means that with a carefully developed prompt with proper instructions and a large enough token base you will get good code. A great example is to add instruction prompts at the start that define what good code is before requesting it to develop code. This will get you marginally better results because in this case, it might target a training sample that was provided from a text book (as the text book might have used words or phrases that described good code)
youtube · AI Jobs · 2024-07-06T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzRjMtjPvCyJ3bh-5V4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzqDDAYiOOGTMPPNKB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxJaLZR3BzBG65I-Kh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxK5tzu3S_UgpRzD0p4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyw_9JoiD_V7BqNtfJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz-52TgPxidp_O5G6N4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz0DB-T_LchQL-hhn54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyH7Fr0Ws1XGsnEbSZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzzKn-Op2CxL92Cwyp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwvmFRUm6nZjVkweQB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
```
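Looking up a single coded comment in a raw batch response like the one above amounts to parsing the JSON array and matching on the `id` field. A minimal sketch, assuming the model returns the same flat schema shown here (`id`, `responsibility`, `reasoning`, `policy`, `emotion`; the two-record sample array below is illustrative, not the full batch):

```python
import json

# Illustrative raw batch response in the same schema as the array above
# (truncated to two records for the example).
raw = '''[
  {"id": "ytc_UgzRjMtjPvCyJ3bh-5V4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzqDDAYiOOGTMPPNKB4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

def lookup(raw_json: str, comment_id: str):
    """Return the coded record for a comment ID, or None if it is absent."""
    records = json.loads(raw_json)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup(raw, "ytc_UgzqDDAYiOOGTMPPNKB4AaABAg")
print(record["emotion"])  # → approval
```

In practice the model's output may include stray text around the array, so a robust pipeline would validate that `json.loads` succeeds and that every record carries all five dimension keys before coding results are displayed.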