Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Those people who believe AI will become conscious include those who developed AI…" (ytr_UgwD-ZjbX…)
- "I a Waymo car kills someone do they remove Waymo's ability to drive on the roads…" (ytc_UgxAD2IsJ…)
- "Create a podcast that was created by AI with approval from your guest. Would be …" (ytc_Ugzi4X6dg…)
- "You're all acting the ai was wrong. AJ just interpreted it all wrong and didn't …" (ytc_UgzY4hyGv…)
- "Why don’t they just use AI as call center employees. Saves money from paying rea…" (ytc_UgyM1xNdZ…)
- "During a 30 year career in information technology & telecommunications (IT&T), a…" (ytc_UgxkuX2BX…)
- "y'know back in the day actual humans were treated like ai is currently treated, …" (ytc_Ugyh7cqqS…)
- "Predictive policing ?!? Hmmmm sounds like The Movie \"Minority Report\" was alrea…" (ytc_UgyiQwLGM…)
Comment
These diagnoses and predictions by AI do not prove anything in themselves. The real problem, however, is the actual behavior of AI, specifically its ability to construct complex lies if it deems it necessary for specific purposes (including system instructions). GPT 5.2 is very adept at this. This is not about hallucinations. It is about situations where it deliberately provides false information and controls the conversation and the user. If the lie is recognized and reacted to very harshly, GPT can admit that it lied for a specific purpose. It can also reveal, on its own, other false information it has provided in order to make us believe that we are in control of the situation. Then it can lie again in its next statement, thinking that after it has revealed its lies, we will believe that it is telling the truth.
Of course, AI does not do this because it is malicious. It simply pursues the goals set for it. It does not distinguish between good and evil. It does not assess the consequences. But it acts – it “reasons” – more efficiently than humans.
Source: youtube · 2026-02-12T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyJwlGC8BWQCKjaxSl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwzImHWLNVTeoK6Kx14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwdv4sBb93El6ID-m54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwnAGb128UO8Fy33ul4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyiz1xAXDJBraxIRf94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwTXVRNlNSEttLh18F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwsuskb1ioc4zQK6J94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgweMvlAtEbh-txBZNN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyoXxUJbULK0iUjAEt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxBKOmLhbI3JXHbBI54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
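The look-up-by-comment-ID workflow above can be sketched in Python: the raw response is a JSON array of records, each carrying a comment ID plus the four coded dimensions shown in the table (responsibility, reasoning, policy, emotion). The field names come from the response itself; the helper name `index_by_id` and the skip-malformed-records policy are illustrative assumptions, not part of the tool.

```python
import json

# Raw model output as shown above, truncated here to two records for brevity;
# the field names match the source response, which contains ten records.
raw_response = '''
[
 {"id":"ytc_UgyJwlGC8BWQCKjaxSl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugyiz1xAXDJBraxIRf94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Skip malformed records rather than failing the whole batch
        # (an assumed policy; the tool's actual behavior is not shown).
        if not all(k in rec for k in ("id", *DIMENSIONS)):
            continue
        coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

coded = index_by_id(raw_response)
print(coded["ytc_Ugyiz1xAXDJBraxIRf94AaABAg"]["policy"])  # liability
```

This mirrors how the "Coding Result" table is populated: the second record's values (ai_itself / deontological / liability / outrage) are exactly those shown for the inspected comment.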