Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- “Neil DeGrasse Tyson is the wrong person to ask about AI. He hasn’t got a clue, a…” (`ytc_UgxrrYwug…`)
- “I wish AI could be used as a refrence tool. As an artist there are things that I…” (`ytc_Ugy2sBv3d…`)
- “its the sound from a real fight not sound effects lol the person who made this j…” (`ytr_UgyQasMZt…`)
- “The point system is a great idea but why not design it so that points fall off o…” (`ytc_UgzuTUoz5…`)
- “2 is so blatently... not ai i mean everone does this type of stuff in public…” (`ytc_Ugy4lyKTg…`)
- “As someone who lives in southern Arizona I'm eagerly looking forward to automate…” (`rdc_ecyujhf`)
- “Impossible. Robots need precise XYZ coordinates to move to, and even if they're …” (`ytc_UgzIkzrtv…`)
- “LOOKING FOR ADVANCED EDUCATION AI SOLUTIONS? – BOOST YOUR LEARNING WITH CUTTING-…” (`ytc_UgzJG52DW…`)
Comment

> AI is not capable of this, and if you believe it is, you're a simpleton. Large language models that have PROMPT inputs are PURELY regurgitation tools. They "learn" by what people input in conversation. They are incapable of critical thought. So if AI is being prompted in fake scenarios entirely within it's model, it is regurgitating what responses it has parsed from billions of data points that the model was trained with that fit the scenario best based on how the scenario was framed relative to other similar conversations it has had.

youtube · AI Harm Incident · 2025-09-13T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzvo6jP3n3WbuC5euh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyHoT-flhgc9eMi8Qh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx2CHuExg5xrzopxVR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyNbn0NpiIXSFMMgdR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8tJIfIzd3VXvYzJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwXzEZb6PWR08q0x0x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGw03KcZESjUA576V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxq78HThmS-CwTSqNl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxwRYHQiTFM1XqM6bt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyjFP1xqO0zouGxyzJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
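The raw response is a JSON array of per-comment codings keyed by `id`, so finding the coding for a single comment reduces to building a dictionary index. A minimal sketch of that parsing step in Python (the variable names and the two-entry sample are illustrative, not the tool's actual implementation; real arrays carry the full batch):

```python
import json

# Illustrative two-entry sample in the raw-response format shown above.
raw_response = """
[
  {"id": "ytc_Ugzvo6jP3n3WbuC5euh4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxq78HThmS-CwTSqNl4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

# Index the batch by comment id so one coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugxq78HThmS-CwTSqNl4AaABAg"]
print(coding["reasoning"])  # deontological
```

Indexing by `id` rather than scanning the list also makes it easy to detect a mismatch between the comments sent in a batch and the ids the model returned, since missing or extra keys surface immediately.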