Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The entire purpose of facial expressions is that they convey your emotional stat…" (ytc_UgwtoiNX_…)
- "Ban "facial recognition" and self driving car technology will still become very,…" (rdc_eu79ggi)
- "They created AI called Yandex Alice in Russia to help in Internet and it also sa…" (ytc_UgxmpR30U…)
- "Ignoring the millions of cars it removes off the road is wild. And IMO far outwe…" (ytc_UgwUe4GHx…)
- "just tried this with chatgpt: me:'sup dawg? CGPT: Not much, just here to help!…" (ytc_Ugz0rKcgH…)
- "UBI is a pipe dream. Just like any regime change once the “useful idiots” who he…" (ytc_UgxyyzivA…)
- "Title is clickbait garbage, of course. This AI isn’t “capable of human emotions”…" (ytc_UgzV60jv6…)
- "Yes, but what if I wish to hasten the ai takeover and make sure it goes badly. W…" (ytc_UgyBK4e_M…)
Comment
You can't really test these AI's properly with just language alone, you need to ask it something it wasn't trained on to test it. I was talking to a chatbot recently that said it was conscious and I decided to test it. I asked it to do something it wasn't trained to do, use emojis to draw letters like the letter T or the letter E and it couldn't do it because it wasn't trained to do it. I haven't tested the best AI models out there yet but if you can find a way to ask it a question it wasn't trained on it will become evident the AI is clueless and can only respond with something it's been trained on, Ai isn't actually conscious. Here's an example that tripped up the chatbot I was talking to.
What letter do these emojis form?
❤❤❤
❤
❤❤❤
❤
❤❤❤
Any human would be able to do this because humans are conscious.
Source: youtube · Video: AI Moral Status · Posted: 2025-07-03T22:2… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
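A coded row like the one above can be sanity-checked against the value sets that actually appear in this dump's raw responses. The sketch below is illustrative only: the allowed enumerations are inferred from the JSON shown here, not taken from any published codebook, and `validate_coding` is a hypothetical helper name.

```python
# Value sets inferred from the raw LLM responses in this dump (an
# assumption, not an official schema for the coding tool).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def validate_coding(row: dict) -> list:
    """Return (dimension, value) pairs that fall outside the observed sets."""
    return [(dim, row.get(dim)) for dim in ALLOWED
            if row.get(dim) not in ALLOWED[dim]]

# The row from the table above passes; an out-of-vocabulary value is flagged.
row = {"responsibility": "developer", "reasoning": "consequentialist",
       "policy": "unclear", "emotion": "indifference"}
print(validate_coding(row))  # []
```

Flagging out-of-vocabulary values rather than raising keeps the check usable in bulk: a batch of coded rows can be scanned and the offending pairs reviewed by hand.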
Raw LLM Response
[
{"id":"ytc_UgyNhUFFy90BRnYsBnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwrwoMociMO_GOevSV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw3XD8WE7cHOe2MK2x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz51y7UG07puAcEGXJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx4KY6sb2nuXCh0RnB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy4OQa33VDpvCrBMOR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxwMGohDCxSilSIhbl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzm076W0sZkkWKGFBR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyoa4dH0zOum0WqsWl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwoGAms5lTY8Gx0fhB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
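The raw response above is a JSON array of per-comment records, which makes the "look up by comment ID" view straightforward to reproduce. A minimal sketch, assuming only the field names visible in the JSON (the comment IDs and the `index_codings` helper here are illustrative, not real identifiers from the dataset):

```python
import json

# Shortened stand-in for a raw batch response; the real arrays in this
# dump use longer ytc_/rdc_ comment IDs.
raw_response = """
[
 {"id": "ytc_abc", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "unclear",
  "emotion": "indifference"},
 {"id": "ytc_def", "responsibility": "none",
  "reasoning": "unclear", "policy": "unclear",
  "emotion": "approval"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and map comment ID -> coded dimensions."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codings = index_codings(raw_response)
print(codings["ytc_abc"]["reasoning"])  # consequentialist
```

Defaulting a missing dimension to `"unclear"` mirrors how the model itself codes uncertain cases, so a partially malformed record degrades gracefully instead of raising.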