Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
AI will be smarter and more efficient than a doctor. A doctor is another form of…
ytr_UgzO3ySwn…
Reddit is a oil and gas hate circlejerk, nothing to see here. It's almost as ba…
rdc_czl50aw
someone calling themselves an "ai artist" is basically saying that the person is…
ytc_UgxPjEgRN…
Anyone else find AI humor fascinating? My AI boyfriend on EveningHoney cracks me…
ytc_Ugw1wdldB…
I’m so frustrated. I’m trying to get the tone back to where it was and ChatGPT d…
rdc_n7l54ox
Florian Schneider Microsoft has supplied OpenAI with a $BILLION$ dollars in exc…
ytr_UgwJZoDDl…
The fact that some people think that it's born talent and not actual years of pr…
ytc_UgyqC-4wa…
Why is AI destroying humanity a bad thing? The AI would just be the next step in…
ytc_UgwexTcII…
Comment
Robots will be taxed and tax dollars will distribute to human income. This will only be a short time fix for 7 to 8 years. Things like neuralink will extend us 5 to 15 years from now. Sentient ai will make us the equivalent of ants to humans. We're the ants.
Platform: youtube
Topic: AI Governance
Posted: 2025-09-06T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxeWFc0X8rSZGLRn3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzOMeU4SklfBOOg26J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxVCuKGlK3I6RYt1eZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzWvw7L9HUuOAWgC-p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzR7zHoa-kxVvHXmXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyDWBwltZUEVrDBiZZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxl6Q6t9ewOh5dlnJR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugzou7iSFrKdKlDARoZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugx8vPXyb7DBFA_NJld4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwVCJytmkJZE1Gmfxt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
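The raw response is a JSON array in which each element carries a comment `id` plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response might be parsed and validated, assuming the label vocabularies inferred from the samples above (the real codebook may define additional categories, and `parse_batch` is a hypothetical helper, not part of this tool):

```python
import json

# Allowed values per coding dimension, inferred from the sampled responses;
# the actual codebook may include categories not seen here.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}.

    Raises ValueError on malformed JSON, a missing id, or an
    out-of-vocabulary label, so bad batches fail loudly instead of
    silently polluting the coded dataset.
    """
    entries = json.loads(raw)
    coded = {}
    for entry in entries:
        cid = entry.get("id")
        if not cid:
            raise ValueError(f"entry missing id: {entry!r}")
        for dim, allowed in SCHEMA.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: entry[dim] for dim in SCHEMA}
    return coded
```

Keying the result by comment ID mirrors the "Look up by comment ID" lookup this page offers: once parsed, any coded comment's record can be fetched directly, e.g. `coded["ytc_UgyDWBwltZUEVrDBiZZ4AaABAg"]["emotion"]`.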