Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by its comment ID.
Random samples
- "Any A.i. development should be made to be built on physical hardware with no ext…" (ytc_UgyYjcuK4…)
- "Make the whereabouts habits and movements of the families of the ppl in control …" (ytc_UgzbMJbhn…)
- "Think of the crime, violence of anything you can imagine..crimes against animals…" (ytc_UgxW-Mfze…)
- "looking at video 7 months after it was posted tells me it had 0% effectiveness a…" (ytc_UgwXyoMTe…)
- "😂😂😂😂damn white people mad at AI too😂 so when AI take over it’s gonna get rid of …" (ytc_UgwD8Tlay…)
- "Sorry but these same owners and local corrupt democratic dmvs allowed illegals t…" (ytc_Ugx5ieB3Y…)
- "Yeah let’s make AI and remove all the guardrails and constraints. It’s large la…" (ytc_Ugy-c025r…)
- "The researcher is right about the dangers of AI. But he failed to consider the e…" (ytc_UgxaEsdV4…)
Comment

> There’s a high likelihood you’re going to prison if you use an AI lawyer. AI is far too agreeable to take up the oppositional role required. Even using it to look up case law is tricky because I’ve looked up video game related stuff and it gives me the wrong answer. Imagine being given the wrong answer when someone’s freedom is at stake.

youtube · AI Jobs · 2025-09-09T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz0jstn73NyBTKo6WN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUl2c2XkuMn_XImL54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxsKhQ4gbvaflLVc0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwzU2XoAqBRSoSrvLJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwpaARH-aE1MuF8aSt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwRj796esQxCXEkflF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzDh6kzFBk0M9JR3hV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwP50ibyRHsfuTj0qR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwm2gJFlooRs2WNIYN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxQVfBZQVSS4yqCYPd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
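The ID-based lookup described above can be sketched in Python: parse the raw response as JSON and index the records by their `id` field. This is a minimal illustration, not the tool's actual implementation; the field names come from the response itself, and the two records are copied from the array above.

```python
import json

# A raw LLM response is a JSON array of coded records, one per comment.
# Two records copied from the response shown above.
raw_response = """
[
  {"id": "ytc_UgwpaARH-aE1MuF8aSt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwm2gJFlooRs2WNIYN4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
"""

records = json.loads(raw_response)

# Index by comment ID so any coded comment can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

code = by_id["ytc_UgwpaARH-aE1MuF8aSt4AaABAg"]
print(code["responsibility"], code["emotion"])  # ai_itself fear
```

The same index makes it easy to join a coded record back to its source comment, since both sides share the `ytc_…` ID.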