Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect:

- "21:10 and i strongly believe those in power will create cathergorized types of i…" — ytc_UgwsyvE5o…
- "I love how the AI just goes to "Uhmmmm...." when it gets really confused on misi…" — ytc_UgzpfjKp0…
- "“Noooo you can’t just make a tool with thoughts and pain it will overthrow us an…" — ytc_UgwYq03i3…
- "Still it is vay vay better than human driving cars, such rear incidents should n…" — ytc_Ugzd_0hNU…
- "No job = no tax payer. What the government must do to help people get a job is…" — ytc_UgwTa7HWr…
- "@thewannabecritic7490 i can explain it to you again, if you want. AI art takes n…" — ytr_Ugxm0d_Tm…
- "So if this guy doesn’t want to die (trans humanism) then he is on the same boat …" — ytc_UgyG2dxoB…
- "I always get frustrated watching these, because ik regardless, ai is still being…" — ytc_UgxLux3lm…
Comment
I've used Claude Sonnet 4.5 at work to analyze and work in my Java codebase and it's impressive what it can do which is sort of a really fancy context based text search but I can't really say it's intelligent, it's just a really good helper and still needs a lot of guidance. AI in an engineering environment is really useful to take away the grunt tasks which are repetitive or the tasks where you want to analyze a really large data set which would be too complex for a human to do. The output will still need to be validated and the "correct" response selected and refined. However, in the medium term I think the really dangerous scenario is really what humans are going to do with this technology. One can imagine what it is doing with surveillance applications or in police enforcement and military theaters - that is the scary thing - humans will use it for nefarious purposes and there will be bad outcomes.
Platform: youtube · Video: AI Moral Status · Posted: 2025-10-31T05:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyUSZEt_D_L-srdtY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwAR-miK3McSNbQPlh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwsUjPct9PdMZ4XAVV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwyUpBv84xu5HK-UJ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwP7jRZYjiOlpH3Ve94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyAxxpxkUA1pNFS3IF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0bCq7miXbvb3zCFR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyTcGoRQ6hE812SaF14AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzoyNSQG_OyUFPjpMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz_mNWRN9AgxSfaC994AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
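The batched response above is a JSON array with one record per coded comment, keyed by `id` and carrying the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming the model output parses as plain JSON (the `lookup` helper and the two-record excerpt are illustrative, not part of the tool):

```python
import json

# Excerpt of a batched coding response, shaped like the array above.
raw_response = '''
[
  {"id": "ytc_UgyUSZEt_D_L-srdtY14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz_mNWRN9AgxSfaC994AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
'''

def lookup(raw: str, comment_id: str):
    """Parse the batch and return the coded dimensions for one comment ID."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coded = lookup(raw_response, "ytc_Ugz_mNWRN9AgxSfaC994AaABAg")
print(coded["policy"])   # -> regulate
print(coded["emotion"])  # -> outrage
```

In practice the raw model output may include markdown fences or trailing text around the array, so a production parser would strip those before calling `json.loads`.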