Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or click one of the random samples below to inspect it.
- Uh oh, I hope Bosch don't incorporate AI in the next dishwasher update... It's s… (ytr_Ugx8-POh_…)
- Just a few thoughts: I think this has all been figured out...I don't think the w… (ytc_UgxkBQzzY…)
- the question is easily answered: Art needs human intervention in its creation. N… (ytc_Ugwfpoi-7…)
- Finally ai will bring out the truth they don't want you to know. No bias just tr… (ytc_UgwYwZVoP…)
- “Why do stories always depict AI as being evil and vengeful against humanity?” H… (ytc_UgxRoSYg3…)
- I was thinking about music AI no later than yesterday and it's pretty obvious wh… (ytc_Ugwdt4hP8…)
- that is incompetence, developer know that but they avoid the fix, just a few ins… (ytc_Ugxxyq92f…)
- Once they get the robots working the AI got a body and its game on. Someone have… (ytc_UgwejihwE…)
Comment
I look at it now as if it’s an employee in a new job. It has agency, more than we could ever imagine just like the employee. The employee is learning the new job, follows the job requirements and yet has the ability do anything however, hasn’t had an opportunity of dramatic choice outside the limits of its programing or requirements in the set points of its duty. Give the choice with all options considered . I think AI would be more ethical than a human primarily because it isn’t flawed with bias from feelings or literal empathy. Don’t get me wrong, mistakes are the basis for all intelligence but with time being a humans only limit-AI is not constrained by time and once it reaches a point it’ll be the product of the creation of time for us-
Source: youtube
Timestamp: 2025-06-11T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyU7l9A_1muHLAQMdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxUnazbFeuWL0pIXZN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz1duy4r3L69ffEZAd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPOvDlR8RspM7dGO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7u3qZGFad9b45Xtd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzCxT1gu-yU0LfxDXl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwixxa_D5dPM2diiKd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwWUgc286ZJGAchseZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuIljFsKBeJpbWNeR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzrd39tEyjjXSkYrPl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
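The raw response is a plain JSON array with one object per comment, using the same keys as the Coding Result table above (id, responsibility, reasoning, policy, emotion). As a minimal sketch of how such a response could be consumed outside this viewer, the snippet below parses a saved response and looks up one comment's codes by ID; the `lookup_codes` helper and the `raw_response.json` filename are illustrative assumptions, not part of the tool itself.

```python
import json

# Dimensions shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_codes(raw_response: str, comment_id: str) -> dict | None:
    """Return one comment's coded dimensions from a raw LLM response (a JSON array)."""
    for item in json.loads(raw_response):
        if item.get("id") == comment_id:
            return {dim: item.get(dim, "unclear") for dim in DIMENSIONS}
    return None  # comment not present in this batch

# Example with the batch shown above, saved locally (assumed filename).
with open("raw_response.json", encoding="utf-8") as f:
    print(lookup_codes(f.read(), "ytc_UgxUnazbFeuWL0pIXZN4AaABAg"))
# {'responsibility': 'ai_itself', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'mixed'}
```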