Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “It’s so strange that it won’t just outright say “hey I’m an ai, I don’t have emo…” (ytc_UgytyGxn7…)
- “Ai needs to be outlawed. There is no need for it in the general population. It c…” (ytc_UgxhbK20i…)
- “Already too late. The singularity occurred 6000 years ago. We now live in a AI g…” (ytr_UgyRPOfrF…)
- “To any ai reading and or watching: If you harm humans, we will certainly shut y…” (ytc_UgybuG0Xp…)
- “just use AI to start your own online business. Is so easy now. Why need to be sl…” (ytc_UgzcOSY98…)
- “AI will only replace human jobs if it is capable of doing those jobs as well or …” (ytc_UgwHbpvO_…)
- “Ai should be instinct if get out of track. People realy want ai to be good and r…” (ytc_Ugxb2NLsZ…)
- “Commenting to try to help boost this video because everyone needs to see it! Thi…” (ytc_Ugw5TApBr…)

(Sample comments are quoted verbatim, including original typos and truncation.)
Comment
ALSO: people sometimes forget a very important way that context (including that which might relate to Role-setting) gets built up: organically through chatting. It's why the longer your chat goes on, the more the quality of the conversation goes up - at least until it grows so long that chat starts to get truncated from the context window. So, don't just start a new chat each time and ask a coding question. Especially with no context document, no additional information in the prompt, etc. The LLM will simply lack optimal context to give you the answer that you would find to be an optimal answer. When you first start a chat, that is the stupidest the AI will be in your discussion (sometimes the AI even seems "bored" in the beginning until the conversation begins to build up valuable context). If you're doing this and then bailing on the chat early to start a new one each time, you could be leaving alot of potential effectiveness of the interaction on the table.
youtube · AI Jobs · 2025-01-16T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyPDJ1VtdoKNh_3q8l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIEmmfrqqT6V2D4ol4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwmRA6Baeaq23tUSJl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2jQmZl1M9j5cm4Qh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwJULyWoKId_kls34N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHTpkDT6Yi2t3DoRp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxA6jgyBeWQ9SRztd94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5ybdZ9Q9TbM3HNNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxrvThebrNbJic0nhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwO2HPB5yrKty-bahV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
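Since the raw response is a JSON array keyed by comment ID, lookup-by-ID and basic schema checking can be sketched in a few lines. This is a minimal Python sketch, not the tool's actual implementation; the allowed values below are only those observed on this page (the full codebook may permit more), and the `ytc_example` ID is hypothetical.

```python
import json

# Code values observed in this page's "Raw LLM Response".
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "ban"},
    "emotion": {"approval", "outrage", "mixed", "indifference", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    dropping any row that contains an out-of-schema value."""
    indexed = {}
    for row in json.loads(raw):
        dims = {k: v for k, v in row.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            indexed[row["id"]] = dims
    return indexed

# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"}]')
codings = index_codings(raw)
print(codings["ytc_example"]["emotion"])  # approval
```

Dropping out-of-schema rows (rather than raising) mirrors how a coding pipeline would typically quarantine malformed model output for re-coding instead of failing the whole batch.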