Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI instances communicating with each other in a way that is "incomprehensible to humans" already happened several years ago, and it has almost certainly happened repeatedly since. When, not if, AI becomes capable of determining its own macro goals, we will not know it unless AI has determined that we are a problem or a nuisance it can do without, and then we'll know it for a very short time. Do I think we'll be able to survive this? No. AI is already self-improving. Soon it will be so much more capable than we are that we will not be able to see it coming for us until it's on top of us. The problem isn't the different AI platforms; it is the venture capitalists and AI engineers who rush forward without real regard for the risks. Being intelligent and talented doesn't make you immune from doing something stupid if all you see is your tiny little piece of the puzzle while building things that encompass the entirety of it. Autists like the Zuck are going to get us killed.
youtube
AI Harm Incident
2025-07-23T20:0…
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyV9TRNidU3J3gv9z54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzjp3xbjvLBdmKRpwB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyb1j3IjGAbfbTPoqJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwTy4n1Q03_fLWT3pB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzicCKaup85Sb4seCJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw6yt7GXao0I1toKEp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQj4Muc07W58shny54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyCPyf5NZw4BbLFqY14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKgIWLOotGNHBmGyt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz1e_dmoNuaMgsOYa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
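The raw LLM response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how a lookup over that payload might work — the `lookup_coding` helper is illustrative, not this site's actual code, and the sample payload below is truncated to one entry taken from the array above:

```python
import json

# Truncated sample of the model's raw output: a JSON array of per-comment codes.
raw_response = """
[
  {"id": "ytc_Ugz1e_dmoNuaMgsOYa14AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw LLM response and return the coding dict for one comment ID,
    or None if the model did not emit a code for that comment."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = lookup_coding(raw_response, "ytc_Ugz1e_dmoNuaMgsOYa14AaABAg")
# coding["responsibility"] -> "ai_itself", coding["emotion"] -> "fear"
```

Keying the lookup on the stable comment ID (e.g. `ytc_…`, `ytr_…`, `rdc_…`) rather than on array position makes the inspection robust when the model drops or reorders entries in its batch response.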