Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I asked ChatgGPT if I should be worried, here is the answer: No, you don't need…" (ytc_UgxEb4r9w…)
- "Lets replace Bill, AI gives much better answers than he: The imbalance is real…" (ytc_UgxL3ftAV…)
- "I absolutely loathe having to deal with chatbots? They are so unhelpful and ann…" (ytc_Ugxoztqek…)
- "They don't need plebs for customers. They are milking an economic circle jerk be…" (ytr_UgxDYHi0A…)
- "LOL uses Ai to write in Spanish but doesn't scan for any errors or mistakes to p…" (ytc_Ugx5fd2ql…)
- "Maybe the answer is much less concerning and what we're seeing is the models ref…" (ytc_UgwagUsWf…)
- "The best use of AI will be having it do all the bullshit jobs no one wants to do…" (ytc_UgzFE0rHu…)
- "Any higher being looking on as humanity stumbles through AI evolution is probabl…" (ytc_Ugw1SLU2B…)
Comment
LLMs are the next iteration of Google search. It is a step forwards but not world changing.
When investors realize this, it will be interesting to see whether they are still happy to dump hundreds of billions into the infrastructure required to support the technology.
youtube
AI Moral Status
2025-10-30T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgzUhVnD579w9AryyVJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzW5g9esTRdu17Kp914AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzQGQlqGjoGTNHal6d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugysf6A-oXWKHw4m1Lh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwGR9i5MpZHSHASEPd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugx-N0B7JS01wGfwz3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxkQo9f55QhgUMT7hV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwxZUr602dA9DkHwwh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyvwcJta1oj-z6TUQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugweqfc1jkagDq1w7Cx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]
```
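The look-up-by-comment-ID flow above can be sketched in Python, assuming the raw model response arrives as a JSON array of records keyed by `id` (the variable names here are illustrative, not part of the pipeline):

```python
import json

# Illustrative excerpt of a raw model response: two of the ten records
# from the batch shown above.
raw_response = (
    '[{"id":"ytc_Ugx-N0B7JS01wGfwz3t4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"resignation"},'
    '{"id":"ytc_Ugweqfc1jkagDq1w7Cx4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]'
)

# Parse the batch and index every coded record by its comment ID.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

# Look up one comment's coded dimensions by ID.
code = codes_by_id["ytc_Ugx-N0B7JS01wGfwz3t4AaABAg"]
print(code["reasoning"], code["emotion"])  # consequentialist resignation
```

Indexing the parsed batch into a dict makes each subsequent ID lookup O(1), which matters when inspecting individual comments out of a large coded corpus.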