Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_Ugz1SyNFC… — "So when Disney starts using AI to create character designs, are they not going t…"
- ytc_UgzKFHCvf… — "why aren't we talking in ratios i guess :/ seems kinda like a cold dead world wi…"
- ytc_UgyIbUcbn… — "WTF , I never thought artists could be such badass. Poison his work to make AI …"
- ytc_Ugygc706C… — "This man is incredibly brilliant and really made me have to lock in to what he w…"
- ytc_UgwOO2dQ-… — "So much for 'full self driving " and "send it out to robotaxi for you ". Tesla …"
- ytc_UgyYV1oh4… — "4:30 No. The inteligence will not be in the data centers only. It's a matter of …"
- ytc_UgwWQ1W9n… — "if anyone even peeked at my personal ai stuff i'm either becoming a cannibal or …"
- ytc_Ugyo7ajVr… — "I have no clue what the statistics say, but this video only points out single in…"
Comment
AI hype exaggerates current capabilities. The core problem with AI right now is hallucination: when these systems don't have the facts, they don't stop and admit ignorance. Instead, they confidently fabricate plausible-sounding information, often inventing sources or misrepresenting reality entirely. Since the AI's goal is to produce the most likely sequence of words based on its training data, not to verify truth, this flaw makes using it for any critical or factual task dangerously unreliable, despite its generally impressive output.🤕
youtube · AI Responsibility · 2025-11-18T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx0RwwXySOsH_1xpBR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxnA3KPL8BAnr2wiBV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_mYik8njTgtUuyRt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQsa5oatAo2lyABht4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwkQnoPL-kSlvRccfR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzZOMDEuLZF6coSayJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx07qJMUF3G1E1rfAV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzawyYQ32Lnq7QrWx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz6WHuDAISruKoq9XB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwEGkkK8RySHDzadKh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
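A response in this shape is straightforward to load back into a lookup table keyed by comment ID. Below is a minimal sketch of that step, assuming the value sets seen in this one batch (`ai_itself`/`developer`/`none`, etc.) are the full schema — the real codebook may include more categories, and the `ALLOWED` dictionary here is illustrative, not authoritative.

```python
import json

# Allowed values per coding dimension, inferred only from the sample
# batch above (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "approval", "outrage", "resignation"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting
    any entry whose values fall outside the allowed sets."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {entry.get(dim)!r}")
        coded[cid] = {dim: entry[dim] for dim in ALLOWED}
    return coded

# One entry from the batch shown above, used as a smoke test.
raw = '''[
  {"id": "ytc_Ugx0RwwXySOsH_1xpBR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''
batch = parse_coded_batch(raw)
print(batch["ytc_Ugx0RwwXySOsH_1xpBR4AaABAg"]["emotion"])  # indifference
```

Validating against a closed vocabulary at parse time is the simplest guard against the model drifting off-schema (e.g. inventing a new emotion label) silently corrupting downstream counts.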