Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Elon Musk also believed that Twitter would use RPC in the app. A former employee…" (ytc_UgxyBLKrk…)
- "Culture industry by Horkheimer and Adorno, written in 1947 about this topic, gre…" (rdc_ha1fl1z)
- "The technology today was made collectively by millions of engineers, spending hu…" (ytr_UgycxM9q8…)
- "I won't deny that I've used AI to generate art before. Nothing I'd ever post pub…" (ytc_UgxHtTe04…)
- "Our demise will be believing videos like this are factual. Stephen Fry's AI voic…" (ytc_UgyudrfQ1…)
- "I do not agree with Ashneer. I think he has no idea about the capabilities of A…" (ytc_UgzYk_uOM…)
- "The AI predicted a black man to get shot based on a number of factors, including…" (ytc_Ugwel8CG8…)
- "last saw it was illegal to have zero driver for driverless and was required by …" (ytc_UgxFcU_91…)
Comment

> LLM's are not intelligent nor conscious. Models like Grok regurgitate what angry racists Xcrete from X. Long term interaction with a chatbot forces it to go to niche sources that are certainly weird and seem unique. We should be focusing on copyright infringement rather than worshiping 'AI' slop.

youtube · AI Moral Status · 2026-03-18T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyngfByqNncM5ZmxzN4AaABAg","responsibility":"psychotic humans","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzsOyx8wCAlY3dY_Vt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKt7IUDJzuefLpdxV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwZdILpiVZrV4tcrZ14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxFve8I3gwjEQCZt1t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxfFXkfLUJz8I2cQ914AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyj25iGdk-ICPRMW6J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgylhQvzUZl9FQEuEYd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxR8iWlGj1_qtl4QfV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyoh_T2iK8itGrCvBJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
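The look-up-by-comment-ID step described above amounts to parsing the model's JSON array and indexing its records by the `id` field. A minimal sketch in Python, using an abbreviated two-record excerpt of the raw response shown in this panel (the helper name `index_codings` is illustrative, not part of the tool):

```python
import json

# Abbreviated excerpt of the raw LLM response shown above (two records only).
RAW_RESPONSE = """[
  {"id": "ytc_UgwKt7IUDJzuefLpdxV4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxFve8I3gwjEQCZt1t4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]"""

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index each coding record by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgwKt7IUDJzuefLpdxV4AaABAg"]["policy"])  # -> liability
```

Indexing by `id` also makes it easy to spot off-schema values the model emits, such as the `"psychotic humans"` responsibility label in the full response above.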