Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "totally agree corporations handling of AI training is a tragedy of the masses, t…" (ytc_UgxT7IavX…)
- "Why even go to an eye doctor with specialized equipment and years of on hand tra…" (ytc_Ugw5dtyb1…)
- "People are worried that AI will become self aware not self conscious; intelligen…" (ytc_UgxLL_b_w…)
- "Thank you for commenting, @EmanuelJose-jk7wv! A.I. dominating the future can be …" (ytr_Ugyd0C_R-…)
- "I let AI do my math. Because math SUCKS! But yea I do understand what you’re say…" (ytc_UgzV1EzIt…)
- "It looks to me like all of those \"pro-AI\" comments were generated by AI itself, …" (ytc_Ugw0cjQ1t…)
- "Have you seen Squid Games, its a critique of SK style of capitalism. Lots of gig…" (rdc_lj9szr1)
- "@ right, I’m not saying that it’s a good thing either, just poking at those “AI …" (ytr_UgzkwSlo0…)
Comment
"If it's not safe, we're not going to build it" is a mindnumbingly ignorant statement. A knife can easily kill a person, therefore we are not going to build knives. Hammers, cars, piano strings, all can be used to kill people, therefore we are not going to build them. Nonsense. AI and all the other items mentioned clearly have vastly more useful applications than killing people, but they still can be and are used to kill people. And that is the problem. the 0.5% of uses that are harmful to people are harmful, and we need to accept that fact. The problem with AI is the scale of the application of that 0.5% is enormous.
youtube
AI Harm Incident
2025-07-24T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz8tos3_uAOTriLPjl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0j4sDCpyyC2ULb0p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxKuXTdplk23q0LBsF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx1y7u04sYOGxC0X3h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxmZBrTJEEGxrwRNnZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyLlk8haloC4WTi1994AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwUw-7CRVaZmP0TF614AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwz93NAJrYNr0F453F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgydIgRftN6B0npgGdp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzjg8A2szTCjfMnESl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"indifference"}
]
```
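The lookup-by-ID workflow above can be sketched in a few lines: parse the raw LLM response as a JSON array, index the coded rows by comment ID, and pull out the dimension values for one comment. This is a minimal sketch, assuming the response is well-formed JSON with the five dimension keys shown; it is not the tool's actual implementation.

```python
import json

# Raw LLM response, abbreviated to two of the coded rows shown above.
# In practice this string would be the model's full output.
raw = '''[
  {"id": "ytc_Ugz8tos3_uAOTriLPjl4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwz93NAJrYNr0F453F4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]'''

rows = json.loads(raw)                      # parse the JSON array
by_id = {row["id"]: row for row in rows}    # index coded rows by comment ID

# Look up one comment's coding by its ID.
coding = by_id["ytc_Ugwz93NAJrYNr0F453F4AaABAg"]
print(coding["policy"], coding["emotion"])  # industry_self approval
```

The dictionary index makes repeated lookups O(1), which matters when checking many comment IDs against a large batch of coded rows.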