Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugy0O9nbv…`: UBI is a rediculous notion. Imagine the government gives everyone money for foo…
- `ytc_UgwII2maL…`: What degree in art does this "professor" hold to claim things such as "ai could …
- `ytc_UgyY-iCAJ…`: So, the fundamental point that your missing is that humans "train" themselves on…
- `ytc_UgxMXjeBo…`: A few thoughts: 1. During the initial phases of a growth process, everything al…
- `ytc_Ugw8DEjVK…`: Yah, right... first take the $$$ of the big tech companies to create the AI mons…
- `ytc_Ugw4sIhEE…`: The Depression, World WAR 2, Y2K, COVID. And Now A.I... Humanity will succeed. …
- `rdc_nu7pceb`: Meanwhile, the recipe you didn't see because it didn't come up in search results…
- `ytc_Ugxh-8Qu6…`: The only place I don't dislike AI is for personal use. So for example, your own …
Comment
I had a chat with chatGPT once and it says no. Our topic was “so-doing-as-if”. The AI said that it is trained to use human manners in speech as much as possible. Of course it's not sorry it gave out the wrong information etc. It's just pretending. It just pretends to be. LLMs are based on algorithmic processes. They don't feel anything, they don't dream of electric sheep, they don't act out of fear or self-interest, even if it may sometimes seem that way. They do what they are told (what is in the prompt). And what they are trained on by humans. These people are lying to themselves. An AI can't do that.
Source: youtube · AI Moral Status · 2025-07-09T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwZNF6T41gUebf8bXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyUe6Eacb94H3oRcIF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8O3IIq4H4hSm3PS94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwXelQvxEV-xnckReh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzxkvREH6uVchkHMi54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy-rmIdBs48lLBxAv14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzkdXeVn34m5zjJ99B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgymTO_L1MoPP_mz3zR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyA5_Ldg6g1YC-RQWp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwjEUf7zKTTXd-kHzl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
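The raw response follows a simple schema: a JSON array with one object per comment, each carrying an `id` plus the four coded dimensions shown in the table above (responsibility, reasoning, policy, emotion). A look-up by comment ID, as this page offers, can be sketched as follows; this is a minimal illustration, not the tool's actual implementation, the function name is hypothetical, and only two records from the batch are reproduced inline:

```python
import json

# Two records copied from the raw LLM response above (abridged batch).
RAW_RESPONSE = """[
{"id":"ytc_UgwZNF6T41gUebf8bXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyUe6Eacb94H3oRcIF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]"""

# The four coding dimensions every record must carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and index the records by comment ID."""
    index = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            # Malformed LLM output: reject rather than silently drop fields.
            raise ValueError(f"malformed record {rec!r}, missing {missing}")
        index[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return index

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgyUe6Eacb94H3oRcIF4AaABAg"]["reasoning"])  # → deontological
```

Validating that every record carries all four dimensions catches truncated or malformed model output before it is written back as a coding result.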