Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@ongodani Exactly! I honestly think the argument of ai being good for disabled a…
ytr_UgzIJmBeg…
Why do we have to depend on these robot? Does not God give us hands and legs.
…
ytc_Ugxfeyfym…
Well im glad I told chatGPT thank you and that he was very generous for giving m…
ytc_UgyZXjme5…
Some people are concerned that AI, especially 'strong' AGI, will become sentient…
ytc_UgwbkEMK2…
Ugh, the whole family shaming the sister and failing to support her is sickening…
ytc_Ugwp2-sWM…
What the fuck, that's so fuckin wrong on so many levels
Just because an AI did …
ytc_Ugw4ghYcR…
Yet the whole video is ai generated even the voice sounds fake I've heard this s…
ytc_UgysfBng6…
Who is here now in the future That it was reported that AI now knows how to code…
ytc_UgyJaqkn8…
Comment
The scariest part for me is knowing that most people are too stupid to know what "intelligence" even is. Current LLMs are really, really stupid actually. But because people think "oh holy cow it can do stuff and pop out answers that must be true," they will give themselves to whatever errors the technology makes without question. . . But have you ever truly considered that "answers" do NOT equal intelligence? Handling data correctly, categorizing it, analyzing it, pondering and investigation are what constitutes intelligence, which are matters entirely forgotten or passed over with AI. AI only gives answers in a definite manner from data that isn't, and never was, in a definite position. We think we are going to progress into higher intelligence, but we're truly only going to simplify ourselves to linear logical progressions echoing from our own fallacies of the past. . . but I digress.
youtube · AI Governance · 2025-10-06T16:1…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxLxh0YdRmDNitPW5d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2_PYZCDAI23cGcNR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwFlAhZWdxdNfK-iYp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgysrOheD4j5yJImzUJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzXNgo67OICV_R9bd94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyryHMjnutSnN14mAd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwydIvgGyFQcHzkvgF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVJ53q_UrHptdBSA94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwWACv3BMq8gRfcq5R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXB-K1w5MvgWLocal4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
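The lookup shown above — matching a coded comment's ID against the raw model output — can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual code: the `lookup_coding` helper is hypothetical, and the excerpt below reuses two rows from the JSON array verbatim (the real response contains one object per comment in the batch).

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten rows).
raw_response = """
[
  {"id": "ytc_Ugy2_PYZCDAI23cGcNR4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzXNgo67OICV_R9bd94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
"""

def lookup_coding(raw, comment_id):
    """Parse the raw model output and return the coding row for one comment ID,
    or None if the model did not emit a row for that ID."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugy2_PYZCDAI23cGcNR4AaABAg")
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → user deontological fear
```

The dimension values returned here match the "Coding Result" table above for the same comment; returning `None` for a missing ID makes it easy to flag comments the model silently dropped from a batch.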