Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I could tell the solider was real since AI would have SEVERELY messed up MarPat.…" — `ytc_UgyXZdtuz…`
- "I think the most sickening thing is that you have all these people on your podca…" — `ytc_UgzoaI5hH…`
- "Ai doesn’t make art it only generates images, art has a creative process unlike …" — `ytc_UgyAZpjSR…`
- "I think India had an opportunity where they could rapidly improve in that area. …" — `rdc_gtdylcn`
- "AI bots are out ultimate echo chambers. If AI one day it said to user shut up I …" — `ytr_UgyZxCCc7…`
- "i am using ryne ai for note taking anyone else using Ryne Ai Lecture Lab for tak…" — `ytc_UgygdtTus…`
- "Humanity at its dumbest point here. So a long time ago man was created and the …" — `ytc_UgyXHjLKH…`
- "What an interesting story! The combination of AI plus human cognitive distortion…" — `ytc_UgzgtdSn2…`
Comment
I understand this is a big problem, but solutions walk a fine line between protections and censorship. Sometimes censorship and stigmatizing fake science can actually inhibit new ways of thinking and bury new discoveries. I’m not saying these ill-intentioned channels should flood the internet with misinformation freely. There should be strict laws about not disclosing what content was made with AI and there should VERY PUBLIC steps taken by platforms to sublimate these results. Social media algorithms should (by law) publish the methodology of their algorithms and code for it. That way people can better check if platforms are doing their part by comparing real results to what the published algorithms describe (these algorithms don’t have to be a secret because patents legally protect them from being copied). A platform can be allowed to place content deemed harmful or misleading behind a Fact-Check/Verification-Wall where users are provided with a warning that the content may be misleading or harmful, counter-arguments, and an option to view the content anyways. This content could be completely excluded or locked from viewing on children accounts. I think these measures can help minimize the spread of misinformation to the vulnerable while not putting the freedoms of speech, expression, or learning at risk. It allows people to review these ideas if they are wary or for historical/cultural/information/education purposes
Source: youtube
Posted: 2023-10-22T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz_pUlSIxqncWKmzbl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzuqWpeWoVh7gROk4l4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz3wxDgs9fFkiOoGyV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz5aeUW5xdnva-L3BB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzMf6iln4bayjF7Q4J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy6Q6wbwkaNYJ3R6614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxjhH_wuUnBGee2PTh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfEYnsJR9g3Xl5LSp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzJGgEatpymeGICs9V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwLUAsE1toBiZW9NMR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
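The raw model output is a JSON array with one coding record per comment, keyed by `id`. A per-comment lookup like the one this view performs can be sketched by parsing that array and indexing it by `id`. This is a minimal illustration, not the tool's actual implementation; the variable names and the `index_by_id` helper are hypothetical, and only two records from the array above are reproduced here:

```python
import json

# Raw LLM response: a JSON array of per-comment coding records
# (two records copied from the response shown above).
raw_response = '''
[
  {"id": "ytc_UgwfEYnsJR9g3Xl5LSp4AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzJGgEatpymeGICs9V4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the model output and build an id -> coding-record mapping."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)

# Look up the coding for the comment displayed above.
coding = codings["ytc_UgwfEYnsJR9g3Xl5LSp4AaABAg"]
print(coding["responsibility"], coding["policy"])  # government regulate
```

The record retrieved for that `id` matches the Coding Result table above (government / contractualist / regulate / mixed).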