Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_UgyMzOr-o…`: "For me, if it's too perfect, it's AI generated. Humans have flaws. Whoever desig…"
- `ytc_UgwHBrOHO…`: "This is big lie, who is AI to decide who Jesus is?, this lady should go and slee…"
- `ytc_UgwhHPRiM…`: "I've never used chat GTP or those other things but now I'm wondering what it's g…"
- `ytc_Ugw1Y1eFu…`: "The negative behavior from AI is the mirror reflection of human internal underde…"
- `ytc_Ugwjyzai-…`: "Ai is the beast system and the problem is china and other world leaders don’t ca…"
- `ytc_UgyF6-OZ2…`: "I think AGi will make it self after we find a way to make an Ai that can accumul…"
- `ytr_Ugy8h2wUu…`: "@ChipsMcCliveyea i think the AI bubble will burst but we will get to agi in the …"
- `ytr_UgygLnRD1…`: "First ppl blamed ozzy Osbourne, then it was video games then it was social media…"
Comment
Sorry but I do not agree with you. You are highlighting bad projects but I would say that more projects are successful than not (I work with AI). The human error is a big thing in many projects and LLMs usually yield a smaller margin of error.
Also, a lot depend on the LLM, if you are using the Copilot by Microsoft, you will 100% get hallucinations but if you try Claude 4.5 or Gemini 2.5 Pro, you will not get hallucinations in most of the cases :)
You also have to look at the scaling factor, if an LLM hallucinates 10% today, in 6 months, that number will be down to 5% (just by upgrading model).
youtube
AI Responsibility
2025-09-30T18:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy7EN3pktaC31TEKsh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwLyAwMBP5-FRVSPyZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwYMe5bPfq_Lifm0bF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx1AmoIlMvCSMWPIHp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxN3-eJSrpg7yk86QV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyFrtFeqxqb8ibBq-54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyUWkT4BnIRK17QUXt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwdcFWm1pMAlth7AuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyaFuZ7eve58eYkEep4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7PD9-qZiYcxvXuQl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
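A batch response like the one above can be turned into the per-comment lookup this page provides with a small amount of parsing and validation. The sketch below is a minimal illustration, not the tool's actual implementation: the field names come from the JSON shown above, but `ALLOWED` and `parse_codes` are hypothetical, and the allowed value sets are an assumption inferred only from the values observed in this sample.

```python
import json

# Abbreviated raw batch-coding response (two rows taken from the sample above).
raw = '''
[
  {"id": "ytc_Ugx1AmoIlMvCSMWPIHp4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyFrtFeqxqb8ibBq-54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

# Assumed codebooks: these sets only reflect values seen in this sample.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "mixed", "resignation", "indifference", "fear", "outrage"},
}

def parse_codes(payload):
    """Parse a raw response and index rows by comment ID, rejecting bad values."""
    coded = {}
    for row in json.loads(payload):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

codes = parse_codes(raw)
print(codes["ytc_Ugx1AmoIlMvCSMWPIHp4AaABAg"]["responsibility"])  # prints "user"
```

Validating against an explicit codebook at parse time is what makes the "Look up by comment ID" view trustworthy: any model output that drifts outside the expected labels fails loudly instead of being silently displayed.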