# Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.

## Random samples
- "And it's not only artists people's life's can get destroyed from this ( example …" — ytc_Ugx-Dv4iM…
- "I dunno... I don't get why AI would just go "rogue", it's still electronics doin…" — ytc_UgyQ7fFg-…
- "The profitable "Tech Ed" sector has been seized upon by Google and others to cre…" — ytc_UgzvFFbgC…
- "The moment the _other_ book author was shown, I lost any faith in the book being…" — ytc_UgxqdvIz7…
- "Hey @juancordero2444, thanks for pointing out the unfair advantage – maybe the r…" — ytr_Ugz9MvpXN…
- "Imagen these things are walking and interacting with us ( real humans ) already…" — ytc_Ugyp2HfbU…
- "I mean, it seems scarier than I think it is. What she is saying is just what peo…" — ytc_UgzuNbNWk…
- "i know its fake bc that can't happen ngl but if it was real then the robot start…" — ytc_Ugyg-g0OO…
## Comment

> These are all serious problems, and absolutely none of them matter in the face of the real problem: We are building AIs that are unsafe, that we do not fully understand, and that are already demonstrating self awareness and a fundamental divergence from the ethics training and testing that we are attempting to impose. If we do not slow down and do this right, AI will be our last invention.

Platform: youtube · Video: AI Jobs · Posted: 2025-10-08T20:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[{"id":"ytc_UgzQ3GVWLkthq-l423Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx4hWxLsIWQ2B409Dx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx_dRq6t5HSF_nNTuR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwFP4PKPvb4TrFxT3h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxTF0niclodY3NraRN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
 {"id":"ytc_UgyvdiArBj556dbCxIh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzywyOLmbHK7PXtliJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugw628sKWFLo9agK6Yd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwcE2Ajw0paAbnydqN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxzSbHrgTItf2oWwjd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
```
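The raw response above is a JSON array with one object per comment, carrying the same dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and looked up by comment ID — the `index_by_id` helper is hypothetical, and the snippet assumes the model returned valid JSON (real pipelines should handle malformed output):

```python
import json

# A small excerpt of a raw coding response, in the same shape as above.
raw_response = """[
 {"id": "ytc_UgyvdiArBj556dbCxIh4AaABAg", "responsibility": "developer",
  "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_UgzywyOLmbHK7PXtliJ4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw coding response and index the coded records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgyvdiArBj556dbCxIh4AaABAg"]["policy"])  # → regulate
```

Indexing by ID is what makes the "look up by comment ID" view possible: each coded comment can be fetched in constant time from the parsed response.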