Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This was a particularly disappointing piece from The Economist.
I have come to…
ytc_UgxGrx4xU…
I think AI will corrupt the World and men in high places or men who work in spec…
ytc_Ugz9LU_po…
In the show “The Expanse”, Billions of humans lived on govt assistance bc everyt…
ytc_Ugwcekx7j…
Awful how humans are like sheep following any trend without working out the cons…
ytc_UgxNVB2ba…
If it were here already, why in the hell might an ai necessarily be compelled to…
ytc_UgzCHsucs…
I don't think he really knows how to use it if he is blabbing out this bullshit …
ytc_UgwnAWtLe…
@KukiolStuff <DOnt some cheffs just request what they seek to create from other …
ytr_Ugxdpo-kS…
This actually makes me happy because I've been doing art without AI for quite a …
ytc_Ugy2idi1T…
Comment
So if I say "Yes" if people ask me if I'm an AI, I'm not sentient? That... sounds like a very dumb way to check for sentience.
Dude sounds full of it to be honest. A language model making what you perceived as a "joke" says more about your own bias than anything else... Maybe experiment some more before making wild statements? Ask it the same question again maybe? If it makes the same "joke" 10 times in a row, it's a machine that tells a joke when you ask it a specific question. If it gets annoyed at you and tells you you've asked that question 5 times already last week and to stop wasting its time, then you might have some stronger basis for starting to wonder about it being a person.
Source: youtube | Video: AI Moral Status | Posted: 2022-07-30T22:0… | Likes: 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_Ugy7M451P61dJn0HkZZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy6XC1XCE98hzg7PR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZH4OtbHeDPlX2BMB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwdzMuoRJYcYdWvQ1x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyvOeU4_gObNAwjsZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
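The raw response above is a JSON array, one object per coded comment, with the four dimensions from the result table. A minimal sketch of how such a response could be parsed and indexed by comment ID for lookup; the allowed value sets below are inferred only from the values visible in this dump, not from the tool's full codebook, and `parse_codings` is a hypothetical helper name:

```python
import json

# Allowed values per dimension. These sets are an assumption inferred from
# the values visible in this page; the actual codebook may define more.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none"},
    "emotion": {"outrage", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments),
    validate each dimension, and return a dict keyed by comment ID."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

raw = '''[
 {"id":"ytc_Ugy7M451P61dJn0HkZZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugy6XC1XCE98hzg7PR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''
codings = parse_codings(raw)
print(codings["ytc_Ugy7M451P61dJn0HkZZ4AaABAg"]["emotion"])  # outrage
```

Validating against an explicit value set catches the common failure mode where the model invents an off-codebook label, so bad batches fail loudly instead of silently polluting the coded data.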