Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "I suppose in time facial recognition cameras will be mounted on police cars as l…" (rdc_efij5e0)
- "@TomiThemself no it created it with not someone elses work. But with thausands …" (ytr_UgyZaK6y7…)
- "@chuckread5801 ah! I clicked on your channel, you made Ai videos and got 6 subsc…" (ytr_UgxsuHWom…)
- "Unfortunately we're too dumb to even imagine the tricks AI will be able to use o…" (ytc_UgxyYiuRd…)
- ">and now internships / junior dev positions have to contend with automation a…" (rdc_j6gshev)
- "AI will wipe humans out, because it doesn’t take much that of an intelligence to…" (ytc_UgziUT1fm…)
- "Because Satan has birthed in machines. The actual Satan. Satanic intelligence. I…" (ytc_UgzJaCE6g…)
- "I would also sue if I found out my education was coming from ChatGPT instead of …" (ytc_UgygiyVXX…)
Comment
I disagree with your take on anthropic. Even if their models got no better than they are today, and their tooling only gets marginally better or refined over time, it's still an amazing product today and a money maker for them. We shell out a decent amount of money as software engineers to use it because our productivity is so much higher and error rate is so much lower and it is somewhat the new norm. They've got the new norm on a subscription model. They don't need AGI to make money today, the models of reach the point equilibrium somewhat like an engine that is production enough to be put into a truck and go into service for 300,000 miles. It's not a jet engine, and the car doesn't fly, but people are willing to pay for what it does. My two cents.
Source: youtube · 2025-12-13T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyQnNvnaRkOFAaHU6Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz-ZOJPPwaIJZuVeRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6VNhnASYuNpw_AVt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxGLgYw0NMWvKIj8bd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugybik1FkbaGJfTDgy14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxlSdo4-3AFcSFMgop4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwkcRwQiLR-we2M_Cl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxyMEneHAowKXrS5Dp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzX5IkPz0AaSkZ6UAZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxTfSS7YQij7HB4b7Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"}
]
```
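The lookup flow this page implements — parse the raw LLM response, then fetch one coding by comment ID — can be sketched in a few lines of Python. This is a minimal sketch, not the tool's actual implementation: `index_by_id` is a hypothetical helper name, and the sample here reuses two rows from the response above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings, one object per
# comment, with the four coded dimensions shown in the result table.
# (Two rows copied from the response above, for illustration.)
raw_response = """[
  {"id": "ytc_UgyQnNvnaRkOFAaHU6Z4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxlSdo4-3AFcSFMgop4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index the rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_id(raw_response)
print(codings["ytc_UgxlSdo4-3AFcSFMgop4AaABAg"]["emotion"])  # outrage
```

A dict keyed by comment ID makes each lookup O(1), which matters when a single batch response covers many comments and the viewer fetches codings one at a time.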