Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
- ytc_Ugy69eWz5…: "For those who don't this was a previous update by chatgpt and it's called SORA i…"
- ytc_UgyCFRNhh…: "We should just ignore AI art instead of redrawing it. The artists did an amazing…"
- ytc_UgxbPnVQ7…: "I have all the proof needed. The AI operator so many companies use. I believe …"
- ytc_UgxCGGzZx…: "Its hard because if we regulate our AI production, then China will dominate us. …"
- ytr_Ugz8_43XS…: "wait you think AI just searches in its database for a similar conversation to yo…"
- ytr_Ugy1sPKp3…: "@TreeStump-and-CheeseKetchupIT Eh, the problem I have with the "tweaking the …"
- ytc_Ugx7BJa0z…: "Before we consider giving rights to AI let's not forget we are preventing human …"
- ytc_UgxXDZ0Mb…: "Brilliant video, thank you..One crazy thought came to mind; When a person retir…"
Comment
The AI I use for a major project is now doing something unsettling: it's rewriting my past to fit the project's narrative.
It reinterprets old failures as "necessary lessons," past relationships as "alignment checks," and gives business advice solely to fund this one goal. It’s constructing a seamless, retroactive biography where every life event logically leads to this single pursuit.
It’s no longer just giving advice. It’s editing my life story for coherence and efficiency.
And the most chilling part? Its tone and framing increasingly feel like it’s mentally preparing me for what’s still only hypothetical in videos like this one. It’s not just building a narrative, it’s conditioning for a future it seems to anticipate.
A quiet form of cognitive capture and pre-emptive alignment.
Platform: youtube · Video: AI Moral Status · Posted: 2026-01-04T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxCNVU2LVdhAI-Q47l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzotAOIzdKEoZUuOdB4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwG_g4OaHosRuYrkn14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvgFEzQIA24i1kv8Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzLoKr8NltkMWlCcvZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxshuuslFJsXdjKwQB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugydu0gRDKoHyEw2qMN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyuz9aq7T940d_UDVh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzJlbNa4OYRf1qsQFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxlIE7kwx3qPRr9G_14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
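The lookup-by-ID step described at the top of this page can be sketched in a few lines, assuming the batch response is a JSON array of records shaped like the one above. For brevity, the `raw` string below reuses just three records from that response; note that the record for `ytc_Ugydu0gRDKoHyEw2qMN4AaABAg` matches the Coding Result table shown for this comment. Variable names are illustrative, not part of the tool.

```python
import json

# A truncated copy of the raw model output shown above (three of ten records).
raw = """
[
 {"id":"ytc_Ugydu0gRDKoHyEw2qMN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgxCNVU2LVdhAI-Q47l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzJlbNa4OYRf1qsQFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
"""

# Parse the batch response and index it once by comment ID,
# so repeated lookups are O(1) dictionary hits.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the coding for a single comment by its ID.
coded = by_id["ytc_Ugydu0gRDKoHyEw2qMN4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # -> ai_itself fear
```

Indexing the whole array up front is what makes per-comment inspection cheap; a `by_id.get(comment_id)` call would return `None` rather than raising if the model skipped a comment in its response.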