Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect):

- Calling LLM errors "hallucinations" just anthropomorphizes the machine. It's not… (`ytc_UgxiqSmOa…`)
- I did ask someone to sign into your email ahem ai chats and it is... Quite dist… (`ytc_UgxWkzX8s…`)
- The kind of stories that AI produces is very rudimentary. It requires rewrites b… (`ytc_UgxkEXKLc…`)
- Possible solution to these ideas are way above my head but let me ask what seems… (`ytc_UgwXuKeSV…`)
- If your kid is struggling, that's the parents' fault, not AI. You didn't put in… (`ytc_UgzlyB0mb…`)
- “Ok I’ll destroy humans” / “No no no I take it back!” / *ROBOT GRABS HIS THROAT*… (`ytc_UgxrjSzst…`)
- add the script to A.i and ask it to implement it and watch it align!… (`ytc_Ugw59uimP…`)
- I think AI is dangerous not because it's going to get really smart but because i… (`ytr_Ugz1e5I1t…`)
Comment
As I go to listen to this video, I have co-pilot resolving an issue with our code that myself (20+yr developer) and another developer couldn't resolve in 20 minutes working together.
Sure, call it overrated, but don't underestimate what is going to happen if these companies keep throwing money at new models (that are both better and cheaper). Also, the move from transformers to titans may significantly accelerate things as it will introduce real-time learning.
Then there's robotics, and the benefit/threat they pose as these models get better and require less hardware/money to run them with.
Source: youtube · AI Moral Status · 2025-07-23T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyVxPcLmRDCJyfUOa94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDgG4kGh1wgw9ZFQ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz6NNVKAbVp7vvder14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPRZfaEBBnBTzcE8V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3q1GbG810gcyrIPV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzMn2YemlSHbBCo3xB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxJAyllVBJE0k9RWMR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyvxsYyxkUXwBmVUkh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx3PS27ZwJ6eDWEEhh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgygNIzEudKEJIqiT5F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```