Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This is infact already happening. Knowing we humans can be very irritable durin…
ytc_UgzQtwrjX…
@Miku-yu5iuMaking an Art, even a simple digital illustration, takes time and e…
ytr_Ugz2yytUX…
AI is not in charge - God is. We are not going to worry - that glorifies the pr…
ytc_Ugz7VHUUj…
AI is getting out of hand.. but it sure looks funny to watch Zelensky arm wrestl…
ytc_UgxRK-Ksn…
Very interesting dialogue. His concerns and forecast for the future (in an aroun…
ytc_UgzVxuPxU…
C'mon ppl, this is i-robot all over again. Let's not make that mistake in real l…
ytc_Ugx5eMrA_…
So far, all AI has done, is help find cancer and talk someone into suicide. I do…
ytc_Ugw83-oiS…
When A.I fails, the datacenters will be comverted to become even more entrenched…
ytc_UgwuvXh1U…
Comment
7:21 - this is referencing the METR study if I'm not wrong. problem: that study is incredibly flawed! Nathan Witkin wrote a great article debunking the study on the Arachne blog.
here's the tl;dr: METR is in no way measuring the “length of tasks AI agents can complete.” It is measuring whether A.I. can occasionally complete 97 highly contrived software engineering tasks whose “lengths” are spuriously determined. Nor does “extrapolating this trend” predict anything that can be understood in terms of what “humans,” as such, can do.
youtube
AI Jobs
2026-02-24T18:2…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwBS7UdMtu0yICkqNJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwevxEc64EA9CXhd1Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyyiqhWhsSfCMAJtNp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-yo_rkJq9euG3jAR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugylpb8auxiwfYGoYH94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwt-0ZzjBoXqq3N2BN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLRScyDbmWmXkseAx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyOlBcXwkQb0rd7nwh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyTHoM1Twk1x1qqxfF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwETB4fe_wqGfMrY114AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
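As a minimal sketch of the lookup-by-comment-ID step, assuming the raw LLM response is a JSON array shaped like the one above (the field names are taken from the Coding Result table; the parsing approach itself is an illustration, not the tool's actual implementation):

```python
import json

# A raw LLM response batch: a JSON array of per-comment codings.
# The two rows here are copied from the example batch above.
raw_response = """
[
  {"id": "ytc_UgwBS7UdMtu0yICkqNJ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwevxEc64EA9CXhd1Z4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
"""

# Parse the batch and index it by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Inspect the exact coding the model produced for one comment.
coding = codings["ytc_UgwevxEc64EA9CXhd1Z4AaABAg"]
print(coding["emotion"])  # outrage
```

The same index could back the "look up by comment ID" box: any `ytc_…` identifier resolves directly to the model's coded dimensions for that comment.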