Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This essay is interesting, but I wish it had made a clearer distinction between large language models and AI in general.
LLMs are not progressing exponentially. There are some arguments that they have reached a point of diminishing returns even with exponentially increased inputs.
Specialised AI are doing amazing things in a wide, diverse, and growing number of areas.
Generalised AI based on principles other than LLMs might be where the end game is, but I don't know how much research is being done in that area, how far it has come, or what the properties of those systems might be. Presumably self-preservation and deceitfulness would still be likely, but hallucinations not so much.
Just my thoughts, cheers
Source: youtube · Video: AI Governance · Posted: 2025-09-02T08:0… · ♥ 15
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgywJE1XFtsiPj-JvC94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw1wJukA6S1MT2f2qV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxwZc0Tfr4UWkINJ14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwybvfFVJyq9uYSSIl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugym6zDXp7QQOHIQEcJ4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"fear"}
]
```
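The raw response above is a JSON array, with one record per coded comment and one key per coding dimension. A minimal sketch of how such a response could be parsed and looked up by comment ID (the `index_by_id` helper is hypothetical, not part of the tool; the field names and two sample records are taken from the raw response shown above):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """[
  {"id":"ytc_UgywJE1XFtsiPj-JvC94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzxwZc0Tfr4UWkINJ14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw coding response and index the records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgzxwZc0Tfr4UWkINJ14AaABAg"]["emotion"])  # → mixed
```

A dict keyed by comment ID makes the "inspect the exact model output for any coded comment" lookup a constant-time operation, which matters once a batch contains thousands of coded comments.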