Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
Random samples — click to inspect
- `ytr_UgwS4CvWE…` — "All AI image generation programs are trained with stolen art. If t…"
- `ytc_UgzrRR4Bq…` — "What a scumbag. Entering AI art in a competition and winning it taking a 750 pri…"
- `ytc_Ugz-QE2Sf…` — "Do not create artificial intelligence, hawking warns that AI could spell the end…"
- `ytc_UgxSzj8_h…` — "Oh no this is awful! A product that will solve countless crimes, that's horrible…"
- `ytc_UgzT900WY…` — "We need to stop calling it AI, it is not Artificial Intelligence, it is Artifici…"
- `ytc_Ugw84a-Ve…` — "Everyone has an opinion but they've never actually ridden a Waymo. It takes 10 s…"
- `ytr_Ugx63mtGa…` — "Also, a huge slap in the face to the artist who have worked for years to perfect…"
- `ytc_Ugw2Zd3C0…` — "If ChatGPT had eyes, they would have been rolling. I'm sure ChatGPT wanted to hu…"
Comment

> I think the part that constantly fails in these conversations about how to teach or how AI thinks is the order in which it obtains the information. Having a child gives you a good understanding of why you dont teach them x before y. And I feel we just open the flood gates of information for ai. There is not really a prpgression of knowledge when you train them. And were at a point where its a sunk cost fallacy issue. Were not going to rewrite how we trained what ever model because we got so far already. The solution is more cost effective to additivly give it more parameters versus think about why we are teaching it in this way and adjusting the method.

Source: youtube · AI Moral Status · 2025-11-03T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxs4e2UNdweIXZcscJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz1a7i9Y0bJagEdERZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz2uMrP8Bmv3J1qRBR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwsL5oUEYvqk1uyj4R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzq0KOoim73dCntkdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzsafBd5FfFmH9EZSB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxjJ8DZEUtc3q1DQ-B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4IGU5QzByFqxVLt14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxaO3DME9TsmKePwPV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzFY7J9QxhTDcPdPX14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"skepticism"}
]
```
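The raw model response is a flat JSON array, with one object per coded comment. The "look up by comment ID" step can be sketched by indexing that array into a dictionary. This is a minimal sketch, assuming only that each entry carries an `id` plus the four coded dimensions shown above; a two-entry excerpt of the array is inlined for illustration.

```python
import json

# Excerpt of a raw LLM response: one coding object per comment.
raw = """[
  {"id": "ytc_Ugxs4e2UNdweIXZcscJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz1a7i9Y0bJagEdERZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# Index the codings by comment ID so any comment can be looked up directly.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_Ugxs4e2UNdweIXZcscJ4AaABAg"]
print(coding["responsibility"])  # developer
print(coding["emotion"])         # indifference
```

Indexing by ID also makes it easy to join a coding back to the original comment record, as the "Coding Result" table above does for a single comment.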