Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "They want to automate influencers. Like why would you pay a human being if a pro…" — rdc_m5os049
- "Stop worshipping the false God known as ChatGPT. Stop an AI Singularity from oc…" — ytc_UgzS5XTDK…
- "i know this has been said a million times before, but ai should take over boring…" — ytc_UgzEAt1Dg…
- "I work in AI and AI makes things much more efficient but it does not replace peo…" — ytc_Ugz982_ZE…
- "\"Wait until the front wheels have passed the points then throw the lever to dera…" — ytc_Ugx7BZ_2J…
- "It appears this vid is trying to debunk aliens by using it in all the clip photo…" — ytc_Ugx_UiCjj…
- "Removing the human element makes ALL the difference. There is an enormous psycho…" — ytc_UgzLgHGb3…
- "Of course. Why else are they ramming it down everyone's throat? To use it as a h…" — rdc_ncl2xoj
Comment
AI is way over-hyped. AI hallucinations pose a huge liability risk. These tools are designed to lie confidently and make up data where it doesn’t exist. Everything it generates needs to be verified by humans. Lawyers are facing sanctions and fines for using AI generated fake cases. During an FDA AI drug review - AI made up clinical trials. The creators are using deceptive marketing to keep investor confidence - but the bubble will burst eventually. Some applications may remain - but use in legal, compliance, and regulatory applications should be avoided, lest we be ruled by AI hallucinations. When I trialed use of AI for regulatory compliance applications, it made up fake citations and gave me completely inaccurate information - confidently. When I interrogated the AI - it admitted that it is designed to lie confidently. We need to wake up to these limitations.
Source: youtube · Video: AI Jobs · Posted: 2025-10-30T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyGa2MGUOp1gY7yJoF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxVgpNC3tteI2fY1_F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxqbKtD8CaceWxOMdx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzm6APv-ziAS4AWSQ14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx1DmIZnajdnfYzW0d4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzHtuU__Tzhz5jyWi14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgznBBJ3-W9_A6FvRvd4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyFGHyGlsw5iVW_8E14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzJRCLJsg4DIw-oYQ54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgymGo2OtY0Gx10bTwF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
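A raw response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal, hypothetical example: the allowed values per dimension are inferred only from the codes visible in the rows above, so the real codebook may include values not listed here, and the function names are illustrative rather than part of any actual pipeline.

```python
import json

# Allowed codes per dimension, inferred from the values visible in the
# raw response above (assumption -- the full codebook may differ).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "ban", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response and index valid rows by comment ID.

    Rows missing an ID, or containing a code outside the allowed sets,
    are dropped rather than stored.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # model emitted a row without an ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"},'
       '{"id":"ytc_bad","responsibility":"nobody",'
       '"reasoning":"unclear","policy":"unclear","emotion":"unclear"}]')
result = parse_batch(raw)
print(result["ytc_example"]["policy"])  # liability
print("ytc_bad" in result)              # False
```

Validating against a closed code set like this is what lets a hallucinated label (e.g. a responsibility value the prompt never defined) be caught and re-queued instead of silently entering the coded dataset.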