Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- The AI taking over the world because it felt like it’s art wasn’t appreciated :(… (`ytc_Ugzd-9mEa…`)
- Take the discussion from the “if you believe what you see” episode and apply tha… (`ytc_UgzS-txnE…`)
- Hey there! In the video, the presenter explains that the robot's creators wanted… (`ytr_UgwQytVwg…`)
- 0:44 musty tech bro ceo scared that a chat bot people never cared about to begin… (`ytc_Ugz_m_a8v…`)
- Ai art will never thrive because arts value is the time and effort put into it n… (`ytc_Ugz7ypYur…`)
- it's just an algorithm ... it's a robotic puppet with a complex script so it can… (`ytc_UgzKpHpU-…`)
- I worked for Appen to train AI to differentiate and understand query and suggest… (`ytc_UgwXySxPr…`)
- "Ordinary users are understandably excited about the inexpensive abundance promi… (`ytc_UgxjFLE2x…`)
Comment
AI needs to be quality checked ... by a human. Maybe make the process faster, and maybe fewer specialists will be needed. But thinking AI can replace a human, especially in health care, is a high-risk (100%) proposition. AI will always make more mistakes than a trained human. No matter how good the AI becomes, it will never be human and will never bring the vital context that only a human can bring.
Source: youtube · Video: AI Jobs · Posted: 2025-07-25T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwsNf6hyyX-sshxpoF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz_6YjngRtx9inRUOZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwVRBke2fJM_fk5vVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUba5NpCaSYplmPCh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwHaYi171Xurt7rqDx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFTqpqxuIMLoa_fcR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwqvgQud3hddcspw354AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyc9Zgi1rRGPjTnPg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxsHODJpaYktnDtp3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwad10UGPYBoMyA2yx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
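A raw response in this shape can be parsed and indexed for lookup by comment ID along these lines. This is a minimal sketch: the field names come from the JSON above, but the parsing function and key-validation rule are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Two records copied from the raw response shown above.
raw = '''
[
  {"id":"ytc_UgwsNf6hyyX-sshxpoF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwad10UGPYBoMyA2yx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
'''

# Keys every coded record is expected to carry (taken from the sample output).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse the model output and index records by comment ID,
    silently dropping any record missing an expected key."""
    records = json.loads(raw_json)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

coded = index_by_id(raw)
record = coded["ytc_Ugwad10UGPYBoMyA2yx4AaABAg"]
print(record["responsibility"], record["emotion"])  # developer outrage
```

Indexing by ID makes the "look up by comment ID" step a constant-time dictionary access instead of a scan over the full response.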