Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
5:00 The guy speaking for “Meta” directs his remarks at utmost stupidity. But like those in his giddy, credulous audience, people will dismiss and rationalize concerns in order to pave the way to continuance (because that’s where huge profits lie)! The problem is not really artificial intelligence; the root problem is in human psychology and in the tragedy of the commons aspect. But people are truly this stupid, very different from Geoffrey Hinton! We are doomed, because money and geostrategic advantage will endlessly propel continuance. There’s no escaping this perilous doom dynamic!
youtube · AI Harm Incident · 2025-07-27T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugwdpjr2oOIh8P4dRzp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwnPPtZBPAsfjT1X-14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyPOTMyKM15e2boW0h4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzvupI59qAm3eHcJZV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw4UmE6UkgmmuzEhWt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz7mA_okt-11ls3KfJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzmn3HfIjBSFzQ6aQJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzfUWw8_BjVDO8ZyvF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuscjiGOPfTB0lLgt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwo1cVguXvopj2ovNV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"}
]
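A raw batch response like the one above is a JSON array of per-comment coding records, so looking up a coding by comment ID is just a parse-and-index step. Here is a minimal sketch (not the tool's actual code) using two records copied verbatim from the response above:

```python
import json

# Raw LLM batch response: a JSON array of coding records, one per comment.
# These two records are taken verbatim from the response shown above.
raw_response = """[
  {"id":"ytc_Ugw4UmE6UkgmmuzEhWt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzmn3HfIjBSFzQ6aQJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

# Parse the array and build a comment-ID -> coding dictionary.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding dimensions for a single comment by its ID.
coding = codings["ytc_Ugzmn3HfIjBSFzQ6aQJ4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# company regulate outrage
```

In practice a model may wrap the array in prose or a code fence, so production code would validate that the response parses and that every expected comment ID is present before accepting the batch.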