Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "The problem is when those humans get used to design AI to make themselves - and …" (ytc_UgxkJfcOU…)
- "Nothing's the prblm... Ppl will make more apps like telegram.... Nd bout deepfak…" (ytc_UgwWeoqAC…)
- "I think that as a tool it's almost useful, but it takes away from my work. I can…" (ytc_UgwJaZbhp…)
- "80% but lower cap on applicability and different conditionality. I could be wron…" (rdc_fn5l1pv)
- "All artists have the same opinion on AI for placeholders. Just ask other artists…" (ytr_UgwPuGCiY…)
- "Other than being less bias than the corrupt scum we have currently I don't see t…" (ytc_UgxSzdWyN…)
- ">A truly powerful AI with the ability to topple humanity on a global level (a…" (rdc_kqtfbta)
- "At least the driverless truck is trained! As a guard shack worker, they already …" (ytc_Ugxi3O3f0…)
Comment

> Detroit become human. It would take another 100 years. People are too busy causing war and killing each other. AI would never advance because of that. Many nations don't even have the access to computers much less AI. It would never reach a level where it could outperform humans. Only the richest people would have access to advance AI. One guy mentioned if we don't do something soon we're near the end. They can't even keep people from burning down California. The other guy who couldn't answer the question about if he wanted humanity to survive. He wants a Orwell 1984 Big Brother controlled by robots.

Source: youtube · AI Harm Incident · 2025-07-24T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgytWNtXCmthsfs64pp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw1A9Xbnip-OBwkQmN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxuk1SJ_mzF6k6xM8V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugys9IsmeYRN7-UJbst4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwgIGoWl1KLCV3PGb54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyRMil3azrEgqK3m1x4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgzyU_eDSAdGwH2555x4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxhiNuG3vugG6xhMHB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw-0N-UWZGTJui8QSx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyzEyQrPwjlb4x5h-94AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
```
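A batch response like the one above has to be parsed and checked before the codings are stored, since the model can emit labels outside the scheme. The sketch below shows one way to do that in Python. The allowed value sets are inferred from the labels visible in this sample and the coding-result table; the project's actual codebook may define more or different values, and `validate_batch` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Label sets inferred from the sample output above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings.

    Rows missing an "id" or carrying a label outside ALLOWED are dropped,
    so they can be routed to manual review instead of the database.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid
```

Feeding the raw response string through `validate_batch` yields only the rows whose four dimensions all carry known labels; anything else is silently excluded and can be re-coded by hand.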