Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "So an AI is going to know that I'm upset my friend keeps last minute cancelling …" (`ytc_UgzO-Jida…`)
- "The only brutal truth is that after all this hype ai couldn't build me a simple …" (`ytc_Ugwqvkf1K…`)
- "Its unbeliveable how little knowledge these guys have on the subject of AI. Ther…" (`ytc_UgyCeiS-n…`)
- "If we need a course on AI, then AI is a failure to begin with!!!!!! Think about …" (`ytc_UgwGRpSdU…`)
- "The funny thing I've realized is that a lot of the time, if you had trash taste …" (`ytc_UgyH41sfg…`)
- "A day or two ago I saw one of those memes on TikTok with a girlfriend standing u…" (`ytc_UgxaVNWe2…`)
- "The way AI is being used today is a one way ticket to anti-intellectual regressi…" (`ytc_UgyzUIzo9…`)
- "Erm 🫤 one looked at her / The robot:👀👀👀👀👀*side eye* ✨ / The girl: are these robo…" (`ytc_UgwFeMK7T…`)
Comment

> Anyone ever met a developer that can produce flawless code consistently?
> AI in and of itself is not the Danger. It's human Arrogance in thinking they can concieve of all the possible outcomes the things they create can Cause; especially so when it comes to things missed. Whether it's a line of code that determines rules of engagement, or an error in the code that results in unplanned outcomes, Even the best Development teams aren't up to the task of Programming Killer Robots.

Source: youtube · Posted: 2020-01-23T01:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugx9cb65B9oojBKLTOB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzlaZE3OguKqrIZm8R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwpeNlfoo5SxLW2V2V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwGpNdDMurRR-6OLH54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwZIQCIMpNeR3KAW0R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwpVTOpudVsGr_1srh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzELMs50ID3XIbViwR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxuuZhyWITOckHJI5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx8FH3md_J2QNUt79Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgyMd2NqcBFjhZjqwHp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
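Before a raw response like the one above is recorded against comments, it helps to parse and validate it. The following is a minimal sketch in Python, not the tool's actual implementation: the allowed value sets in `SCHEMA` are inferred only from the values visible in this dump, so the real codebook may permit more, and `validate_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the values observed in
# this dump (assumption: the full codebook may allow additional values).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"resignation", "outrage", "indifference", "fear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each coded comment against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Usage: validate a single-record response (first record from the dump above).
raw = ('[{"id":"ytc_Ugx9cb65B9oojBKLTOB4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
coded = validate_response(raw)
print(coded[0]["emotion"])  # → resignation
```

Validating at parse time means a malformed or out-of-vocabulary code from the model fails loudly instead of silently entering the coded dataset.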