Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The 1.4k dislikes are from AI bros upset that artists are defending themselves f…" (ytc_UgxpUCIc4…)
- "You know, videogame piracy has had studies say the same thing about it: That peo…" (ytr_UgxY5eNYR…)
- "Very funny that I got this video recommended to me. I tried to quit my job and t…" (ytc_UgzfuWpXX…)
- "Open AI just signed a huge contract with Department of Defense around that time,…" (ytc_UgzDhUkSi…)
- "My question about the medical one is why does the AI need to have the race in th…" (ytc_UgzeM2Qgy…)
- "These ethical dilemmas, and most others, evaporate under closer scruteny. First,…" (ytc_Ughowk26E…)
- "Wait - what?? AI was created in the process of trying to simulate/recreate a hum…" (ytc_UgzDrzdeW…)
- "@simoneoliveira5806 Hahaha, thank you for sharing your concern about the robot-h…" (ytr_UgzEodrgZ…)
Comment
It's wild how yall boomers waste ur time on stuff like this- ChatGPT is DESIGNED to sound like a human, but that DOES NOT mean that it IS human. Humans DESIGNED it to speak as if it's real. In other words, its "apologies," "lies," and "understanding" were simply WORDS that made the conversation genuinely seem more realistic. Words like these do not suddenly give it conscience but make the conversation seem more realistic. If people designed it to not do such things, then no one would feel comfortable actually using it. ChatGPT runs on a code that searches the web for responses that are suitable to your questions/statements, THATS IT. STOP WASTING UR TIME ON TS
youtube · AI Moral Status · 2025-02-24T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz7LsfeIt8n7srb-9F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzuewQJ_MDwWHwAhdp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwbi7hjaMmTR44f5R14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzCq9HyVpUnRZXO_aJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwaaJH_PxiwL4xfDed4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzuycL3jRrCB7ot_zt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzTr-4tYsi71WjdAOh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyp9y578Q-8d41xOmZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxbXQl8VnsDfqE8SPh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx1Pf4eU1rfPRBwNNB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
```
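A batch response in this format can be turned into the per-comment lookup the page describes with a short sketch. This is a minimal illustration, not the tool's actual code; the sample IDs below are hypothetical placeholders, and the fallback to "unclear" mirrors how the Coding Result table reads when a comment's ID is absent from the batch.

```python
import json

# Hypothetical two-entry batch mirroring the raw-response format above
# (real IDs are long YouTube comment/reply IDs; these are placeholders).
raw = '''[
  {"id": "ytc_example1", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]'''

# Index the batch by comment ID for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a comment ID.

    Comments not present in the batch get 'unclear' on every
    dimension, matching the table shown for uncoded comments.
    """
    entry = codes.get(comment_id, {})
    return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
```

For example, `lookup("ytc_example2")` returns the company/consequentialist/liability/outrage codes, while an unknown ID yields "unclear" across all four dimensions.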