Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgySod1cy…: As a nurse I figure, if robots become nurses then the value of my “human touch a…
- ytc_UgxdvbPnP…: fucking hate A.I art and the people who think all your hard work was just some g…
- ytc_UgwUMrzY9…: Jurassic Park movies might be closer to reality then our own reality soon... not…
- ytc_UgzZrtTin…: Use nightshade. It poisons ai making an art peice of a cow look like a tv plugge…
- ytc_UgxcedWiC…: This is greatly flawed! UK cops facial recognition cameras has 98% failure! They…
- rdc_nme4ym0: What?!! How the fuck is some random app supposed to know if I’m legal or not bas…
- ytc_UgwWUnRst…: You a fool for trusting AI 🤖 to drive you instead of a human 🤦🏽…
- ytc_UgxM9hdfr…: Functionally, is there much difference from AI determinism and std::google << co…
Comment
Paradoxically, this might be helpful to understand the biblical language about God.
The GPT was not lying because it didn't have the intention to deceive, even though it was saying something ("I'm excited," "I'm sorry") that is not true in the same sense that it could be true in Alex's lips. So it's not a lie in the same sense that a politician lies. Then, when chatGPT admits to lying, under the duress of thick headed Alex's relentless pressure, it is saying something that it's not correct because it contradicts what it said earlier: that AI can't have feelings (or intentions) thus lacking one of the elements in the definition of lying: intention to deceive.
Likewise, when the Bible says "God became angry," it cannot mean the same as with one who says, "I got angry at Alex's pig-headednes." The word "angry" is not being used _univocally_ but _analogically._
What do you know! AI speaking analogically! (Or should we say, "digitally...")
Source: youtube, “AI Moral Status”, 2024-07-25T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxPKE1RZDWpQ8k3-K14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzy-NQN4se8RzItHR54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYlzXTxUKERD4Cs1l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyMy5DzDwfZsmy9lxV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzB6RzG1vS9hLZpj_Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwa8XwZLBJqF4Otxrh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz6D7gAfUXioktey6x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx5YUB8S4w7Udfcfqp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxWpKAx1Yyaosffdzt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzt_r0RawW78KfZGeB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
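A raw response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal, hypothetical example: it parses an array of coded records and flags any record whose dimension value falls outside an allowed set. The allowed value sets are inferred only from the sample visible here; the project's actual codebook may define more categories.

```python
import json

# Two records abbreviated from the raw response above.
raw_response = """
[
  {"id": "ytc_UgxPKE1RZDWpQ8k3-K14AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyYlzXTxUKERD4Cs1l4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

# Value sets inferred from the visible sample only (assumption) --
# the full codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "fear", "outrage", "approval",
                "mixed", "resignation"},
}

def validate(records):
    """Split records into valid IDs and (id, bad_dimensions) errors."""
    valid, errors = [], []
    for rec in records:
        bad = [dim for dim in ALLOWED if rec.get(dim) not in ALLOWED[dim]]
        if bad:
            errors.append((rec["id"], bad))
        else:
            valid.append(rec["id"])
    return valid, errors

valid, errors = validate(json.loads(raw_response))
```

Running this on the abbreviated sample yields two valid IDs and no errors; a record with, say, an unexpected `emotion` value would land in `errors` with the offending dimension named, which makes malformed model output easy to spot before it reaches the coding table.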