Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
If ai gets good enough it will force out compition like even if soneone maced a …
ytr_Ugzrhl8Zp…
AI might just be a computer, but at least it knows what real love is.…
ytc_UgyiK7yLS…
No job, no money, no buying, and those who sell don't earn money, meaning they l…
ytc_UgzWYa8Jt…
This talk is a distraction from the real dangers AI researcher are racing to cre…
ytc_UgxJNt-pT…
Bro did you forget what elon musk said?, he said “ai can be more dangerous than …
ytc_UgyGgTgbP…
yeah at first it was a miracle for unit tests, but i agree. it does crazy shit a…
rdc_n7hkrj2
that doesn't make sense, how is more artists creating art to be viewed instead o…
ytr_UgxbJC2OM…
There should be strict guardrails and no companionship relationships, especially…
ytc_UgzRTPVkm…
Comment
The ambiguous thing about this is what it means to legally train the ai as fair use. Does legal mean all that anthropic has to do is purchase the books legally to train their ai, or do they also first need consent from the author? It seems like consent is not needed at all, and even if they trained it illegally, all that they end up paying is an extra penalty of 3k for each book even upon repeated offenses which is nothing for a company their size. This new law is weak asf
youtube
AI Responsibility
2025-11-11T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgwQF5HtknTVw_Vr_bd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFtbuE903peXHHKlt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYQDVtMU88VXneqAV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz8ZWHCP4odY7qR1zF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxRknPG21TClDBwIRx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzntRYaB2cyxHBpLeB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"},
{"id":"ytc_UgxTZPCmnB3Dz17gISR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyduiEaL4TY84RsTI14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgylSGo2IvvKQwtzwG94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugz2a7hWKullSSRM-cZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}]
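A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the four field names match the JSON shown here, but the allowed-value sets are assumptions inferred only from the labels that appear in this sample, not the tool's actual codebook.

```python
import json

# Assumed label sets, inferred from the values seen in this sample only.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "liability", "ban", "unclear"},
    "emotion": {"indifference", "approval", "mixed", "outrage", "fear",
                "resignation", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)  # raises ValueError on malformed JSON,
                               # e.g. a ')' where ']' should close the array
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} = {rec.get(dim)!r}")
    return records

# Usage with one record from the response above:
sample = ('[{"id":"ytc_UgwQF5HtknTVw_Vr_bd4AaABAg",'
          '"responsibility":"ai_itself","reasoning":"deontological",'
          '"policy":"none","emotion":"indifference"}]')
codes = validate_codes(sample)
print(len(codes))  # 1
```

Validating up front means a truncated or miscoded batch fails loudly at ingest time instead of surfacing later as an "unclear" row in the coding-result table.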