Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "em... what is the cited research on AI making code less maintainable? can put in…" (ytc_Ugzq6M__U…)
- "I have worked in the industry for 28 years. I can tell you this will never work…" (ytc_UgylZ1BSp…)
- "shit is fake bruh i wote an story story when i was in grade 5 then put in there …" (ytr_Ugy0P34Z3…)
- "So we already see how AI can and will be potentially malicious and dangerous yet…" (ytc_UgyS4GYLu…)
- "It sounds like you have some strong opinions about humanity. Remember, on the AI…" (ytr_Ugw_W8nV1…)
- "For step 3, runing an open source model like stable diffusion yourself will make…" (ytc_UgyQUQpc3…)
- "If you understand the technology then you know that the jobs being wiped out are…" (ytc_UgwUS9iMS…)
- ""Only, this time, it’s called instead by such names as “libertarianism” or “neol…" (ytc_UgwSKBS4_…)
Comment

> the customers of AI will always be the government, and they will always use it for social engineering. being 50% right is 50% too much accuracy, the point of LLMs is to prevent people from having accurate views. The entire technology is crime against humanity and every single person working in it should be viewed as a war criminal.

Source: youtube · Posted: 2024-12-29T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyRY9-S9LkIAQ4OhOV4AaABAg.ACgtfL8xH9cAChhuRbM0Sy","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwwftGd3K22WlU7DwJ4AaABAg.ACgPamUEUlhACx0AMGtDIY","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgwwftGd3K22WlU7DwJ4AaABAg.ACgPamUEUlhACxTKXxNcax","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgyPjH8n4dA8j7GYaj94AaABAg.ACdlqP9HseCAD9NxPvZs8b","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgyPjH8n4dA8j7GYaj94AaABAg.ACdlqP9HseCADANwamiyo9","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgyPjH8n4dA8j7GYaj94AaABAg.ACdlqP9HseCADa2H0vbEG2","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgzWTtp9MD2v7BgztRF4AaABAg.ACdiU6MpMDpACdms8aGFUY","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugy3h2VUa43BFiFzE1R4AaABAg.ACdPD5TQgGwACfM6pGTqDq","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytr_Ugy3h2VUa43BFiFzE1R4AaABAg.ACdPD5TQgGwAD2slwjBhiq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxK4_XF9HoD3l_uTLV4AaABAg.ACdKg1ZQzckACdmHm6jEmx","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
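As a minimal sketch of the lookup-by-comment-ID step, the raw response above is a JSON array of coded records, so it can be parsed and indexed by `id`. The comment IDs and field values below are hypothetical stand-ins, not records from the actual dataset:

```python
import json

# Hypothetical batch response in the same shape as the raw LLM output above.
raw_response = """
[
  {"id": "ytr_example1", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch of coded records and index them by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
print(coded["ytr_example1"]["policy"])   # liability
print(coded["ytr_example2"]["emotion"])  # indifference
```

With such an index, displaying the "Coding Result" table for a looked-up comment is a single dictionary access per dimension.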