Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I asked GPT-4 to write a second paper for my science class and it told me to f*c…" (ytc_UgwKb3P-M…)
- "There's no need to be ashamed if youre using it for the intended purpose! Ai is …" (ytr_Ugztsy_P9…)
- "What is this? 1887 and debating whether or not one medium or another is true "a…" (ytc_UgyT6B426…)
- "Every single person has so much spiritual knowledge inside them, but; THEY ARE …" (ytc_UgxOEXtmZ…)
- "But but, you can tell OpenAI not to use your chat history as training data......…" (ytc_UgzF42IIG…)
- "If AI is so great, why does it need humans? One day it will figure that out.…" (ytc_UgxUp5OW_…)
- "Haha, right? Sophia definitely has a quirky sense of humor! Her unique personali…" (ytr_UgzJDxend…)
- "im here after seeing the robot kick a guy in the nuts with the motion capture su…" (ytc_Ugwq23YqP…)
Comment
ChatGPT mentioned "Half truths" several times. Half truths are if not literally your answer, they are the reason ChatGPT can be led into what seems to be a dead end. (oxford dictionary): a half truth is "a statement that conveys only part of the truth, especially one used deliberately in order to mislead someone". ChatGPT is (probably) not like Jordan Peterson, they are simply being forced to answer questions without knowing the full context (tell me if im bullshitting, im new to philosophy)
This is probably something you are all aware of anyway, philosophy is full of strange logical conclusions like these which on paper sound perfectly fine until you try to apply them to real situations, i.e. Zeno's Paradox that tries to prove motion is impossible. Zeno has been proven wrong, but never completely, and hes centuries old
Source: youtube · Video: AI Moral Status · Posted: 2025-05-13T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwwB2-bY9rBvohITZx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugy-8iFW9YniYg9-33d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxYl--ZfakxmY4TG894AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzAO1LtsNuIBKPrrlh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugy2pHN55febqRruegV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugyy9CAxwRgrlBfCFDp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzDGOPIggQSotAetaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugy5Mbe84QFVlu6ZhIV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgyVxvP4l5c08p4UQPB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwXdP5iiZDP6yrlvHV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]
```