Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgyMkQv2s…`: "This isn’t about harassment — it’s about raising awareness. The rise of AI art h…"
- `ytr_UgxLUCpvR…`: "@dioshoes9096well I can not credit something as my creative creation if i am no…"
- `ytc_Ugz3UFEMQ…`: "With Trump loyalists taking over TikTok, CBS, and poised to take CNN, AI may be …"
- `ytc_UgwLBlS9F…`: "Just another corporate jerk trying to flood AI tech for profit. If you think the…"
- `ytc_UgyerXOPx…`: "At around 7 minutes you literally explain Neural Network learning algorithms in …"
- `rdc_n75ad2x`: "While this is absolutely untrue, imagine if the very first instance of an AI bec…"
- `rdc_jhit1i3`: "Try this approach, and let me know how it goes. I would be happy to work through…"
- `ytc_UgzlntvEQ…`: "Oh my god this is so infuriating. You're not an artist if you don't make the art…"
Comment

> Theoretically, AI can be infinitely better than us humans can ever hope to be. Where it falls apart is the training: if it isn't trained and supervised properly in the beginning, it can wreak all kinds of havoc. AI, in every way, makes decisions better: at least, it does its job better. If it's trained using faulty data, it will replicate that. If it's trained with biased data, it will replicate that. If there is subconscious bias, it will replicate that. We have to make sure we train these properly, and only then will we be able to have AI that is superior.

Source: youtube. Posted: 2022-07-29T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyHe5JxYYbnxRkmF8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxNhYjshA_aWTZzQ4p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyLWwPd2tIRdAuH9mJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxW9pLcQBKVTBEvwzp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxCqsdsdgRl3osDwlF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxeD7BMnzSHnr-GseB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyuslZnph6FdmaOVid4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgySaZROHdupO4Q5xQ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz5NJCAkI9v2_MKl514AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzJpmIdpknNvVuR2fh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
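
Since the raw response is a JSON array of per-comment codings, looking up the coding for a single comment ID amounts to parsing the array and indexing it by `id`. A minimal sketch (the variable names and the inline sample record are illustrative; the record shown is the one coded for the comment above):

```python
import json

# A raw LLM response is a JSON array; each element codes one comment
# across four dimensions (responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id": "ytc_UgxCqsdsdgRl3osDwlF4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "approval"}
]
"""

# Build an ID -> coding index so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

record = codings["ytc_UgxCqsdsdgRl3osDwlF4AaABAg"]
print(record["policy"], record["emotion"])  # regulate approval
```

In practice the parsed records would be accumulated across all batched responses into one index, so the "look up by comment ID" box resolves to a single dictionary access.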