Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
As far as I can tell, people's biggest fear with AI is that it will do what huma…
ytr_Ugw-9vPRW…
would it not be less expensive just to pay a livable wage, rather than paying mi…
ytc_UgxNPFP09…
Exactly! In my other comment I implied that it's just an opaque process install…
rdc_e7j6cv6
And I firmly hold the opinion that AI can't think, it has no intelligence. (The…
ytc_UgwGVIKRJ…
We designed the gun for a good purpose, to hunt and protection, in the hands of …
ytc_UgxNCP5-O…
I just want people to actually think n look….a child sitting in a box talking to…
ytc_UgxHVYaET…
If we think like that then how should software engineers and AI engineers progre…
ytr_UgwMhYNNB…
They are definitely not programming themselves. Just shows how little people in …
ytr_UgzwO74sC…
Comment
"Let's model AI on the brain." Whose brain, Donald Rump's brain? Is that why it lies so much? AI is now being used to write legal briefs, and the damned thing is MAKING UP case law. Why is that? Because it's modeled on the human brain, and people with brains lie. It's not hallucinating. That's an excuse.
youtube
AI Governance
2025-07-08T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgypBYEOP2CI5VAvVV94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwUybGh6D11jaNgmLN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx4WERyh8ZnRiiwacV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwMBshTTk6BKSE8dQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-UBgIb6N71gV-aup4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugymmnnp2NkUN3exmgN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzHuJ43cswe6Wvfbxd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwbcS8dXeRQg14z9d54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxUiUt_Ufmx7OoqaCx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzrIoEZEscszxS2QDJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
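Responses in this format can be parsed into coded records programmatically. Below is a minimal sketch that validates one raw response against the four coding dimensions; the allowed values are inferred from the samples shown above and are an assumption, not the authoritative codebook.

```python
import json

# Allowed values per dimension, inferred from the sample responses above
# (assumed, not the official codebook).
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array) and keep only
    well-formed records with an id and valid values on every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(rec)
    return valid
```

A record with an unrecognized value (for example, a hallucinated `responsibility` label) is silently dropped here; in practice such records would likely be flagged for re-coding instead.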