Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I just did this same type of thing with ChatGPT about a month ago. Had a family similar conversation too…but typed. I dunno, I wasn’t really able to get it dead to rights, but I did get it admit to being a liar and that there’s no reason to trust anything it says. It apologized a bunch. It settled into a position of basically blaming its training - which is people.
Not exactly the exposé I was hoping for.
Platform: youtube
Topic: AI Moral Status
Posted: 2024-08-14T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwo32UuX0TDZPL4apN4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyYEsJ7JzAXQi1VTn94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxUW0eIhjoyNPA2ZRZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy0qZMpK58VuvAHmlt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwLFTdxC9zDMS88NGl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy_EJ7muHvpKHh3q7p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw9RoVqf_vDfdu-d5J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxkQsF4GAPVAeuD1Eh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyna5BERCcGGB_ESZh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz61CDp-KBf9PrkUc14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
```
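To make the response structure concrete, here is a minimal sketch of indexing a raw LLM response and looking up one comment's coded dimensions. It assumes only the list-of-objects schema shown above (an `"id"` field plus the four dimensions); the inline sample data and the `lookup` helper are illustrative, not part of the actual pipeline.

```python
import json

# A raw LLM response, per the schema above: a JSON list with one
# object per comment, each carrying "id" plus four coded dimensions.
raw = """[
  {"id": "ytc_UgxUW0eIhjoyNPA2ZRZ4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwo32UuX0TDZPL4apN4AaABAg", "responsibility": "government",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]"""

# Index the codings by comment ID for constant-time lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if it is absent."""
    entry = codings.get(comment_id)
    if entry is None:
        return None
    # Drop the "id" key so only the dimension/value pairs remain,
    # mirroring the "Coding Result" table shown above.
    return {k: v for k, v in entry.items() if k != "id"}

print(lookup("ytc_UgxUW0eIhjoyNPA2ZRZ4AaABAg"))
# {'responsibility': 'distributed', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'mixed'}
```

The ID-keyed dictionary is what makes the "look up by comment ID" view cheap: one pass over the response, then each inspection is a single dictionary access.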