Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
There has also been cases of AI writing generated comedy stand ups, cases of AI …
ytr_UgzlE64gY…
I would like to point out, because there seems to be a lot of misinformation, bu…
ytc_Ugx2E0GYt…
Jesus fucking christ it's a chatbot that operates by stealing the work of actual…
ytc_UgxnefGOj…
also within the hour they changed the opt in to be on automatically. For once th…
ytr_UgwZTgwfR…
The thought of people passing art created by an algorithm off as their own is di…
ytc_Ugw4G7o96…
I've never been robbed by a white guy.... what's wrong with the AI again..... A…
ytc_UgyU89eJk…
1 AI autonomous computerized weapon system NOT mentioned - AI autonomous NEUROWE…
ytc_UgxyArdv_…
Bruh I like how dude says ai taking over is nun to worry abt but as soon as it’s…
ytc_Ugy91car2…
Comment
Eeeh what ? wasnt halucination basically nearly solved just recetly with that new paper openai released, it was about discovering that llms are halucinating because their internal prompting is forcing them to take guesses when they dont know something because forcing them to guess even if wrongly looks good in benchmark scores, paper said that with the take a guess prompting some ai model or other is halucinating 17% of the time but with their no more guessing breakthrough halucination was reduced to 0.3%, why isnt this being talked about ?
youtube
AI Responsibility
2025-09-30T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyiY57tNauOm00NbkF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgyQOayuU5u9YWTQhUB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgylFGrjM9ksk9rBBSF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyO6e0piCrx5-kja4d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz_WZlHSaieogzWDVV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxy3Wewn_ejllHKtHV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxNmhsmE_KKaG51HUl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx2vmXt7sQtk7qvxe54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwn3Eb4aTfx308rNPZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzsiyqnEEGqxzy69L54AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
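Each raw response is a JSON array with one record per comment ID, coded on the four dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and sanity-checked (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response above; the two abbreviated records and the validation logic are illustrative, not part of the pipeline):

```python
import json
from collections import Counter

# Field names mirror the raw LLM response above; records are illustrative.
raw = """
[
  {"id": "ytc_UgyiY57tNauOm00NbkF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgylFGrjM9ksk9rBBSF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "ban", "emotion": "outrage"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)
for rec in records:
    # Reject any record the coder returned with a dimension missing.
    missing = [d for d in DIMENSIONS if d not in rec]
    if missing:
        raise ValueError(f"{rec.get('id', '?')} is missing {missing}")

# Tally one dimension across the batch.
print(Counter(rec["responsibility"] for rec in records))
```

A tally like this is one way to roll per-comment codes up into the aggregate counts a dashboard view would display.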