Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Currently people, today's generations, are lazy. The dumbing-down of humans = AI…" (ytc_UgyainpMt…)
- "When you have workers trying to vote in unions, openly saying they want much mor…" (ytc_UgwPzMmOI…)
- "I remember being told that if there's a tool available to help you that isn't un…" (ytc_UgxlNgHFC…)
- "A large language model, is not any sort of viable form of artificial intelligenc…" (ytc_UgxBjgjT-…)
- "I'm surprised people don't want AI. They make your job too easy, what more do yo…" (ytc_UgzwUSnUb…)
- "This is England's answer to Yuval Noah Harari. Another 'super intelligent' man d…" (ytc_Ugy4eSbld…)
- "Erm... look at the headline. They're is an epically disturbing notion that we ar…" (ytc_Ugw3bd7NK…)
- "There is a reason why we have judges. You cannot just trust an AI to make comple…" (ytc_UgyQglA8B…)
Comment
This is an outdated view of how AI models work. Reasoning models like o3 are capable of more than token prediction. It does understand code and can reason about code. Without reasoning capabilities, AlphaEvolve wouldn't have been able to improve an algorithm no humans have been able to improve for 56 years.
youtube · AI Jobs · 2025-05-19T15:5… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzT8ExKUwd1Hy7MFrx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx6_q6o-0_4UHRLvSt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxwIgNrrPexpctrWlR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxz_91_oGTyx3gUEul4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxt7aHFrSTVexCGfMZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGZByxb18vDLOAnl14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyx8xbQAinCKBRft0t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyotbqNwu3aPfR2E854AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwTPKIH90ugL_PPqU14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzzjtqdUjkPJAmE4Ut4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
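A raw response like the one above can be checked before it is accepted into the coding database. The sketch below is a minimal validator, assuming the category sets inferred from the values visible in this response (the actual code book likely defines more values than appear here, so the `ALLOWED` sets are an assumption, not the schema):

```python
import json

# Category values observed in the raw response above; the full code book
# may define additional values (these sets are an assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed", "company"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def validate_codings(raw):
    """Parse a raw LLM response and reject rows with missing or unknown values."""
    rows = json.loads(raw)
    for row in rows:
        # Every coded row should reference a YouTube comment ID.
        if not row.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        # Each dimension must carry one of the known category values.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Example: validate a single-row response (first row from the output above).
raw = ('[{"id":"ytc_UgzT8ExKUwd1Hy7MFrx4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
rows = validate_codings(raw)
print(len(rows))  # → 1
```

Rejecting rows with unknown values at ingest time keeps a malformed or hallucinated model response from silently corrupting the coded dataset.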