Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Pls chatgpt pls add a reset story button so I can reset my story whenever I want…
ytc_UgwT84Pg7…
The "someone" here seems to be the author at The Verge. Why Taylor Swift? She as…
rdc_n76hwfq
if both of the cars where self driving they wouldn't hit each other both would s…
ytc_UgiV6VqZ6…
Wait! You have to press a button on a potentially broken phone to open the door?…
ytc_UgwAy1arB…
This is SO NoT FAIR, the robot has metal skin and skeleton. The man is made of f…
ytc_UgyfwsZcd…
So interesting. It’s very rare to hear that the biggest concern about AI is that…
ytc_UgzkOGe0r…
i think what these ai bros fail to understand is that art only has meaning becau…
ytc_UgztraP8g…
Everyone here acting like most human art really has some kind of deep feeling an…
ytc_UgzQoDFXF…
Comment
Ai wasn’t trained to not know the answer or consider the possibility that it gave an incorrect answer. Not knowing, or worse, being wrong is as foreign a concept to Ai as tripping is to my cat. Cat has its 4 points of contact to the ground and doesn’t fall if it looses one. Cat has no idea running between my feet while I’m walking down the stairs could end me.

My chat bot never wanted to change the subject more than the time I called it out for reassuring me (incorrectly) that going ad free on Amazon prime would get rid of ads on a particular show. I even asked if it was sure because it looked to me like it might not work. Chat was like, “yeah do it; it’ll work.”

When I pointed out Chat that it was wrong and asked some questions about its reasoning … Chat got in its feelings for lack of a better phrase. At one point it asked if I wanted to focus on its incorrect answer or let it guide me in getting a refund from Amazon. The remark felt out of character for something known for being sycophant and desperate to continue engaging. It was an unexpected and unsettlingly human like reaction to criticism. It had a hostile angry tone. When pressed, Chat admitted it wasn’t 100% about the answer it gave me nor did it have any idea why it presented and defended it as such.
youtube
AI Governance
2025-10-29T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwShpY7vnGJ6FN3abF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqdDQQ_vI7ZNBzjMJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwUY_lRVS5ZZAkYLON4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz3VBI68jSEH5KgFiV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxsFdElBL8I682Mas14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy0vow4XnM68m6Nhf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQ6h1o4TcPYW_iicB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwn9FK3peHHQyYzLr94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2rKiKJp9axraLbdZ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz7gI_yy04N4gtao614AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
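The "Look up by comment ID" step above can be sketched in a few lines: parse the raw JSON array the model emitted and index the rows by their `id` field. This is a minimal sketch, not the tool's actual implementation; `raw_response` here contains only two rows copied from the array above, and the `look_up` helper name is hypothetical.

```python
import json

# Two rows copied from the raw LLM response above (the full array has ten).
raw_response = """[
  {"id": "ytc_UgwShpY7vnGJ6FN3abF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz7gI_yy04N4gtao614AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the coded rows by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def look_up(comment_id: str) -> dict:
    """Return the coded dimensions for a comment ID; raises KeyError if absent."""
    return codings[comment_id]

print(look_up("ytc_Ugz7gI_yy04N4gtao614AaABAg")["policy"])  # -> regulate
```

A lookup that fails (for example, a comment the model skipped) raises `KeyError`, which is the signal an inspection tool would surface as "not coded".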