Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples (click to inspect):

- "@Skidicous Hey quick question, if Bob has a job and then Bob dies and Amazon tur…" (`ytr_UgyvZBHNR…`)
- "AI generated Art is not really Art, because art to be actual art requires creati…" (`ytc_Ugx3dQDic…`)
- "Why did you censor the word Jews but not white people about the AI exterminating…" (`ytc_UgxdfSjS-…`)
- "I totally agree. AI is just a tool that is an extension of a human. It's doubl…" (`rdc_oaeb28r`)
- "AI frick you I am not even a true artist I mean I draw for fun but AI as a tool?…" (`ytc_UgwZbT8U4…`)
- "statistics is a coding problem, but coding is not a statistical problem. AI esse…" (`ytc_UgxW1UCux…`)
- "Do you have any idea how hard it is to be a stand up comedian? There’s a standup…" (`rdc_jtyuh2k`)
- "Great question! The design of Sophia as a robot often sparks curiosity. Her appe…" (`ytr_UgxNX6K4a…`)
Comment
It seems to me like training an AI to check the accuracy of its outputs should be an easy task. Just feed it a million wrong statements made by AI and tweak the weights until it finds them all false. The problem is some of it is actually substantiated by the data and the data is wrong. It needs to be able to tell which data is true and which is false, and humans are barely even capable of that. If you wrote textbooks about how the earth was flat, and some about it being round, and gave both to grade-schoolers, there would always be some that believed everything the flat-earth books said. AI is no better than the data its fed.
youtube · AI Responsibility · 2025-10-13T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
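Before a coded record is stored, its four dimensions can be checked against an allowed-value set. A minimal sketch — the value sets below are inferred from the coded responses shown on this page, not a complete code book:

```python
# Allowed values per dimension, inferred from the responses shown here
# (hypothetical; a real code book may define additional values).
CODEBOOK = {
    "responsibility": {"company", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record from the table above passes validation.
print(validate({"responsibility": "distributed", "reasoning": "mixed",
                "policy": "none", "emotion": "mixed"}))  # []
```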
Raw LLM Response
```json
[
{"id":"ytc_Ugxyurulf3dxGQfLiiF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwJIQHOo5Zj0oDr1eV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw2zCSypnC01cSZLBZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwfYxpsUueHY25lqx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxiPxE9-rv6yPU4rGB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzV_n5cSfAIBMVhLel4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-HjOk9dLDgBt1F8p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzEfdj8fZUR3H3DL1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxujSi4kZF6I2Y-EwR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2cVW07kxOVFci5uR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
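The by-ID lookup described above can be sketched against a raw response like this one. A minimal sketch — the field names mirror the JSON shown, but the `raw_response` loading and the `index_by_comment_id` helper are hypothetical, not the tool's actual code:

```python
import json

# Raw LLM response: a JSON array of coded records (one record from the
# batch above, abridged for the example).
raw_response = """
[
  {"id": "ytc_Ugw2cVW07kxOVFci5uR4AaABAg",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]
"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse the model output and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_comment_id(raw_response)
record = coded["ytc_Ugw2cVW07kxOVFci5uR4AaABAg"]
print(record["responsibility"])  # distributed
```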