Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
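For scripted access outside the inspector, here is a minimal sketch of the same lookup, assuming the coded results are stored as one JSON record per line; the file name and field names are illustrative assumptions, not the tool's actual storage:

```python
import json

# Assumed layout (illustrative only): one coding record per line, keyed by
# the same "ytc_…" / "ytr_…" IDs the inspector displays.
CODED_PATH = "coded_comments.jsonl"  # hypothetical file name


def lookup_coding(comment_id: str) -> dict | None:
    """Return the coding record for a single comment ID, if present."""
    with open(CODED_PATH, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None


print(lookup_coding("ytc_UgyQwSbgTZbNe2Nvfeh4AaABAg"))
```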
Random samples (click to inspect):

- Ist week quarantine- robots are cool but they are gonna kill us someday. 5th w… (ytc_Ugw6efH7r…)
- I’m a disabled artist. The only way my disability affects my way to create art i… (ytc_UgyDVQjg7…)
- I was laid off due to AI. The job market is HORRIBLE. I know its going to get … (ytc_Ugy5VxniI…)
- "Biased data" your literally talking about over 65,000 data tables that also say… (ytc_UgzRNkrlJ…)
- i still hate the “AI takeover” response / AI cant live without a base or manual t… (ytc_Ugz8iWUtj…)
- Anyone interested in leading an AI dating and relationship ecosystem? Looking fo… (ytc_Ugy0JMWjD…)
- Why will LLM's (large language models) NOT create AGI? Because people do not sto… (ytc_UgzlUHpj8…)
- *T-Smooth* / You read it wrong, he wrote: _"...human driven car..."_ / With the ri… (ytr_Ugw9BwhkQ…)
Comment
even if you trained an LLM on actual truth (and not human output, which will always contain little biases, up to large lies), it still couldn't guarantee truth because it doesn't actually understand anything - it's just predicting tokens (not even words! it doesn't even _view_ them as WORDS... they are "tokens") based on prior tokens (the training material and your interaction). So it will mispredict and fail - go ask one of them about the seahorse emoji, and then go look up the "why" (there's a great article), and then be enlightened.
youtube · AI Moral Status · 2025-10-15T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwQxETjDd9TbnXgnRR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugwav4gCjElchZVlrhF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwN7cEnnpKM_8kxY_F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugzb0fnMqIi-uFKn8Yd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugzpa-fAQqZlfmoDW2V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugzd2fhoH-uQVfq-F3p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_Ugyzva2uydBYKhsJ79F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},{"id":"ytc_UgyQwSbgTZbNe2Nvfeh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgzTFnTUZXV6ae8Q5g54AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugz8uMBI5i7c05n8gcJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]