Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Whos the judge tho.. ai or human? Either are going to be biased... but we must c…" (ytc_UgzD168Hv…)
- "How this can hurt the world? I mean worst case? If making an AI looking sentient…" (ytc_Ugz7x7Biu…)
- "I love how the reporter is doing the piece on Waymo’s safety while riding in one…" (ytc_Ugze1s2WR…)
- "And that will work until the AI decides it's no longer necessary to support us. …" (ytr_UgxMdSyxB…)
- "IFJ won the lawsuit. Four families were awarded $ and more importantly Pasco Cou…" (ytc_UgydcTQCl…)
- "Here's my take on this problem. True self driving cars would leave enough room o…" (ytc_UghidMHZs…)
- "I experienced this recently. I asked chat-gpt to build me a simple calendar web …" (ytc_UgxxIqSvS…)
- "Ai if not I would have bought one and taught him how to count cards 😂…" (ytc_UgxEYfyq0…)
Comment
Computer scientists need to shut up about what they don't understand. Information may be widely available or accessible, but without qualifications, experience and certification by professional bodies, AI can't do Sh*t. Have you ever had your boss explain a mistake to you? Has google or AI ever stepped up to replace that? Answer, No. And it never will. Only a sentient free thinker could. If it takes 5 years of book learning and same to get experience, what makes a computer scientists qulified to determine what jobs a programme can replace? Stupid opinion.
youtube · AI Governance · 2025-09-07T20:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwOUM93bVOeXVSGSwd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzzkph2BQmmocInZWF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyGJvTuMuKzwv_I4CF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwchKByn6zNU5xejjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxwMCiRH1jCI-btWQN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzJ9IBnJAq3Exv04GB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgymqNSJRoqPekGlkCp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy1qCkOD4ijx9bs0o94AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"approval"},
{"id":"ytc_UgxIEbY-pR9lSzUx3uV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySfI5oVPIfQ8XNzAl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
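The raw response above is a JSON array of per-comment codes keyed by comment ID, one object per comment with the four coded dimensions. A minimal sketch of how such a batch might be parsed, validated, and indexed before being stored, assuming the allowed value sets can be inferred from the responses visible on this page (the real codebook may define more categories; `ALLOWED` and `validate_batch` are hypothetical names, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from this page's sample output
# (an assumption; the actual codebook may differ).
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "user", "company", "developer"},
    "reasoning": {"mixed", "consequentialist", "virtue", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "fear", "mixed", "resignation", "outrage"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the batch shown above, used as a smoke test.
raw = ('[{"id":"ytc_UgymqNSJRoqPekGlkCp4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
batch = validate_batch(raw)
print(batch["ytc_UgymqNSJRoqPekGlkCp4AaABAg"]["policy"])  # → liability
```

Rejecting any record with an unknown ID or out-of-vocabulary label at ingest time is what makes a coded-at timestamp and a dimension table like the one above trustworthy for later lookup.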