Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
You do understand your about 10 years behind. And you're just now posting this t…
ytc_UgxG8Yw1H…
USA- Ban Facial Recognition NOW!! This is a violation of our privacy and is high…
ytc_UgxWsccwJ…
This is what I’ve experienced too, in my field of architectural design. Executiv…
rdc_n9h6uz8
AI is over rated, computers are not taking over anthing without a human behind t…
ytc_UgyznLjX1…
u/JonCBK has given a pretty good, succinct, summation of my point but let me ela…
rdc_ebv7g8l
At 9:50 "Self driving trucks aren't going to do anything to Drivers, the employe…
ytc_Ugz_WVP7D…
The boo, I don't want to be replaced by AI, but at the same time I want AI to do…
ytc_UgxzxzjPA…
it said the cia fbi and nsa did it and gpt -5 said there are 600million in china…
ytc_Ugzh-ioC1…
Comment
It's kind of sad to see an "expert" somehow suggest that the LLM is capable of thoughtful analysis when invariably, the LLM is just reflecting back _what you most likely want to hear_ . We all need to understand that LLMs are useful for researching, summarizing and presenting _factual_ information, kind of a "Super Google", *nothing more* ; they are not a "friend" or companion that can in _any_ way understand emotions and inner feelings. LLMs are trained, like a monkey on a switchboard, to respond with what the _most likely_ appropriate reply would be, nothing more. Don't read too much into any kind apparent "understanding", it's all just smoke and mirrors, a 2025 version of the classic "Eliza".
youtube
AI Moral Status
2025-10-30T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxi_WQDxjBUM3DxMXV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyubHlk3SYTc5ECco14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"sadness"},
{"id":"ytc_UgwIP0X6C2Uh3Db8qat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwXGNnOa3vEPzKtm814AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgzxM-ZpKZHHmQchi7d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxdZC77W8Sk51DN1hl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz0EPssorPnG-CUiWx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyL_J9maIR2t9Q5PMp4AaABAg","responsibility":"expert","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzxZWeZ9v_2i70bTUh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzP2ObOZA0ZLAXZoTB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
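The raw response above is a JSON array, one object per coded comment, carrying the comment `id` plus the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a batch might be parsed and validated into a per-comment lookup, assuming exactly this array-of-objects format (the `parse_batch` helper is hypothetical, not part of the tool itself):

```python
import json

# The four coding dimensions plus the comment ID, as seen in the dump above;
# the full codebook may allow more categories per dimension.
FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response into a lookup keyed by comment ID.

    Assumes the response is a JSON array of objects, each with an `id`
    and the four coding dimensions.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = FIELDS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')} missing fields: {missing}")
        # Store only the coding dimensions, keyed by comment ID.
        coded[rec["id"]] = {k: rec[k] for k in FIELDS if k != "id"}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"unclear"}]')
print(parse_batch(raw)["ytc_example"]["policy"])  # regulate
```

Keying by comment ID is what makes the "Look up by comment ID" view above possible: each coded row can be traced back to its exact raw model output.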