Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I used to call myself an expert Googler in mid-2000's when a lot of people strug…" (ytc_Ugxz3Fdy3…)
- "This is a Tesla issue. Is tech “there” yet? As it was pointed out Elon is playin…" (ytr_UgzzcyQ5x…)
- "@Jesspyre We don't know but there's no reason to believe AI is conscious. What i…" (ytr_Ugyl3Pf-6…)
- "Can I get an ai assistant or recent grad student to bypass all these annoying u…" (ytc_UgxYk8yzd…)
- "I 100% agree, I have been having this discussion a lot. Since i have been asked …" (ytc_UgyUD0dfZ…)
- "Let me guess.. they shouldn’t be studying anything b/c there won’t be any human …" (ytc_UgxnyzhkK…)
- "surgery is already robotic in most areas with DaVinci and similar robotic aids. …" (ytc_UgzSXnLrb…)
- "How does knowing the training data help you, when the AI understands things abou…" (ytc_UgyU4DW73…)
Comment
ChatGPT couldn't even give me reliable information about how to properly set up Dolby Atmos for Headphones with Cyberpunk 2077. As someone who uses AI daily, I don't think most people know how much this tech hallucinates. Bad training data is something I hear very few people talk about. AI just seems smart because it can deliver false information with authority. I'm not drinking the kool-aid.
youtube · AI Governance · 2025-06-16T15:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[{"id":"ytc_UgwQ8eSwBsC_CtVA9H94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyUOEkZlek8P1GptZd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgyYKPhC4bIVezzek3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwPRrLfbYjU65rkC2h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugx9rZxdbM76lfihsht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugw8uTBXqsg_MjAv3h54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgzIhNVeP1DlCY4-L014AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgwDQ-MhaxCh4OO07v54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyE-6M-aIdYY6WnSaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwxVe07_-a_RVhK_QN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
```
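A raw response like the one above can be parsed into a per-comment lookup table. The sketch below is a minimal illustration, not part of the tool itself: the `ALLOWED` sets are inferred only from the values visible in the table and responses above and may be incomplete, and the sample ID `ytc_abc` is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the responses
# shown above (assumption: the real codebook may contain more values).
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "ai_itself", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into a dict keyed by comment ID,
    rejecting any row whose dimension value falls outside the allowed sets."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_abc","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_abc"]["emotion"])  # → outrage
```

Keying by comment ID mirrors the "look up by comment ID" workflow of the page: once parsed, inspecting the coding for any comment is a single dictionary access.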