Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @zxqdcv how do you know what's chatgpt then if u don't know it?and say it's co… (ytr_UgzJB_6Mb…)
- I'm of the belief that in this era of AI, developers should (and likely will) us… (ytc_UgzooQ4F8…)
- Maybe the AI is not our extinction, but our evolution. We seem to be asking for … (ytc_UgzKPEMgB…)
- @tryphonkorm In that case you are kinda dumb. You see, AI Does lie. It's called … (ytr_UgyxvFm9X…)
- nice video, you put the problem flat for everyone, I'd say, generative AI is lik… (ytc_UgxSAhWZu…)
- @jgreat4785 If the robot restaurant was significantly cheaper and more consisten… (ytr_UgzvIrXfv…)
- You know things are evolving so quickly, when you can now directly play tic tac … (ytc_UgyWOXoXd…)
- An AI lied to me like a human, I was micromanaging its progress (and it outright… (ytc_UgwUTJhBz…)
Comment
The "I" part of Artificial Intelligence is the ability to assess and adjust in accordance with a quality determination process. When the worker gets your order wrong, you can tell them that. Their quality assessment process kicks in. Not only do they listen to what you have to say about the problem, but, more importantly, they think about the problem in terms of what could have possibly gone wrong and start comparing that to proposed solution states in their own mind. Very simple process for humans. Impossible for AI. In short, AI cannot understand the concept of a mistake.
youtube · AI Responsibility · 2025-10-02T17:1… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgysIT2spg7TZSSSRjt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwgHWHrfVEigPtgaEt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwU7tJbXuAs94gSFsV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwYBWAXb1zAt5OZKHl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwA6FmI8hwTh-wJQZp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgytAQGU8gISiFDwdcR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw4sA7HiMJ0QZaH7bl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwgg8KvEN5yUMki9_p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQ1zsgjnlPXfOA_Q94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQ7-rB3rWfsWZEEWt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
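The raw response above is a JSON array with one coding record per comment, each carrying `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields. A minimal sketch of how such a response could be parsed and indexed for look-up by comment ID (the two sample records are copied from the response above; the function name `index_by_id` is illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Two records copied verbatim from the response shown above.
raw = '''[
 {"id":"ytc_UgwQ7-rB3rWfsWZEEWt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwYBWAXb1zAt5OZKHl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]'''

def index_by_id(response_text):
    """Parse the model output and index each coding record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
print(codes["ytc_UgwQ7-rB3rWfsWZEEWt4AaABAg"]["responsibility"])  # → ai_itself
```

With such an index, the "Look up by comment ID" feature reduces to a single dictionary access, and the per-dimension values in the Coding Result table are just the fields of the matching record.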