Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
No clue on what you guys talk - if no jobs no consumers … then for whom are you …
ytc_UgwN0qW_s…
That robot is so wrong on so many levels they creep me the fuck out I can't see …
ytc_UgxUo1URH…
Remember guys that the CIA has a movie budget, think about IRoBoT and the fifth …
ytc_Ugz8K5nBx…
Yeah, just can’t wait to be seen by a medical professional who had the majority …
ytc_UgwqYyxwi…
Actual AI is not intelligent at all no more than computer... it's a good tool no…
ytc_UgzFxI9Zy…
Also the fact that police are just systematically racist, and automatically assu…
rdc_jv68wo5
By "Now has memory" - do you mean, as in - if I am having ChatGPT help me draft …
ytc_UgwAiUaKR…
A.I. salesman tells your gonna lose your job to an A.I? Lol, those jobs will jus…
ytc_Ugw2LqVnl…
Comment
I find it harmful to entertain the idea that we are even capable of building an actual AI, we should be extremely clear that these LLMs are not capable of any meaningful definition of thought, we don't even know how that works in our own brains.
We are dealing with very complex statistics executing on a pre existing data set. There is no intent, stop anthropomorphising these pieces of software.
youtube
AI Moral Status
2025-10-30T22:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwii2xL_wLw9X4m5sB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwe9A9OhO5R7E63gnF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMXjeBo75O87r3vyV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwqrJRQK1baOhiKY994AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxCF-XMAByCkSJHexp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyekbg08B8sdfUGvkR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzaG_vHof0oO2dScVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgweU0HcOoZtKj0W0094AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyu95NI4Me3QQ5E1cl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxJnj2av-p6Wwq3Owh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
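The raw response is a JSON array of coding records, one per comment, keyed by comment ID. A minimal sketch of parsing such a response and indexing it for lookup by comment ID (the field names and the two sample records are copied from the response above; everything else is illustrative):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_Ugwii2xL_wLw9X4m5sB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyu95NI4Me3QQ5E1cl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

records = json.loads(raw)

# Index by comment ID to support the "look up by comment ID" workflow.
by_id = {r["id"]: r for r in records}

row = by_id["ytc_Ugyu95NI4Me3QQ5E1cl4AaABAg"]
print(row["responsibility"], row["emotion"])  # developer outrage
```

Indexing once into a dict makes each subsequent ID lookup O(1), which matters when inspecting individual comments out of a large coded batch.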