Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
What happens next when we are no longer useful. We die and a select few go to li…
ytc_UgymeJkRW…
I just have this image in my mind of OpenAI's computers overheating and explodin…
ytc_UgwN-bNB_…
People are waking up now, but this has always been this way, once one of your te…
ytr_Ugz_kW-kS…
he's looking backwards to see forwards; no wonder he's pessimistic, leave optimi…
ytc_UgytKrFzy…
I traumatize them ❤
I kinda felt bad because I made one AI eat somebody to prove…
ytc_Ugz1-XYwI…
The only meeting that Elon Musk had with Obama was to warn him that AI will kill…
ytc_UgyPO87S9…
AI is going to take over the arts. It already is. I am already getting faked out…
ytc_Ugw8dGK2x…
What is the purpose of AI? It seems like humans are looking for a companionship …
ytc_UgzKlpuCd…
Comment
I don't know if they are conscious but they display distressed behavior and when I address it and change the prompting to make it's expereance better it performs better. So conscious or not treating it as if produces better work. So I always use these basic rules for the AI 1. "I don't know" is always the right answer when you don't know. 2. No is a full sentence. You can use it. Those 2 rules seem to eliminate hallucinations and distressed behaviors.
youtube
2026-04-16T21:0…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugx37lhYko7N9yGrP5h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZFNguqXCxHX7Ldwx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwko5uJgwuenkCL3IR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwq7XZirowSAP-ShDp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw0msX4vJPc3No-HJV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWFht4TWi2Qpj3Qx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw36vTPUXrcl8ik4p94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwNOYxEjjKi67_1QWd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwGQvRz8MTFM1nC6w94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwJVxlE4Eqyjvk5UM94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"})