Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- It will all be behind when the Rapture takes place. B ready..no more worries. On… (ytr_Ugwx_Y9Vh…)
- This is an incredible suggestion! 🥰 actually, I play with an AI chat app called … (ytc_UgzWRfwG2…)
- Let’s go through each claim clearly and honestly. Also, ChatGPT is not always ac… (ytc_Ugz_feviR…)
- “Be safe. Be kind. And know that you…will be annihilated somwheres around GPT… (ytc_UgwDZd4iA…)
- Maybe it's not that the AI is so intelligent, but moreso that most jobs don't re… (ytc_UgyYFqkCl…)
- Where are the international guardrails to civilise AI so its harm is reduced. Go… (ytc_Ugzf9p7qV…)
- Blood, sweat and tears ♡ / 3 things the machines do not have! / i.e. no compassion,… (ytc_UgxYsase4…)
- So the deep fakes with Obama, Biden, and Trump on gaming is OK why exactly? / The… (ytc_UgyGaleXz…)
Comment
Teaching something right and wrong is not the same as having an understanding about it. It all comes down to knowing, don't do bad things to someone, unless you wanna be treated the same way as well and know the feeling of hurt and pain to avoid doing it to others. Then there is the idea of consequence. If there is no pain to be felt, if there is no mortal consequence to be given to an AI, what is to stop it from deducing an extreme level of punishment towards a human it may consider stepping on a bug accidentally the same as killing another human purposely? An AI may deduce the idea of "Those who take life, must forfeit their own." This is where common sense comes into play for us humans. This is why we have courts of law. If you have no experience feeling pain, how can you know how much you should either give or make sure you don't give to someone who can experience it?
youtube · Viral AI Reaction · 2023-05-26T09:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgwrcUgddo0HTdnhxqx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-lx21pjTXemXca0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIFXbXspUDb5QYwG94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyaT6S9zc2xvUJgMrB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxcjNHA4jDWzN4qzJd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzv2qjm3Hl4SLsEVcd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgyubAXcGb9muAxlDIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwiHGqtJVDY_hA53l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgweZHAE46_taLdHliN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxm9O3JeEAedJ8iWN94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"}]