Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I don't think we'd be able to run and hide from these programs. In fact, when it…" (ytc_UgyvTUHoo…)
- "For this an any technology one death is too many. I wonder what is the average o…" (ytc_UgzcMDePj…)
- "What confuses me more is, why the robot grabs him in a strong way but when he gr…" (ytc_UgxRf7mmu…)
- "My essay got marked for being AI, because I used big words to illustrate a furth…" (ytc_UgyKbA9yP…)
- "It`s going to be a classic "addict and extort" mechanism with those AI. As a com…" (ytc_UgzshxDdA…)
- "This is only the beginning, and it's already busting our balls hearing about it all the…" (ytc_UgzRqeK0c…)
- "Well, few teachers can sometimes detect AI if students used AI to complete their…" (ytc_UgzKHB3Gp…)
- "Hinton is killing his guilt, trying to make his conscience clear. He should be r…" (ytc_UgxBBhuiQ…)
Comment
I'm a software engineer of 5+ years that's fully integrated AI into their workflow. I use it as an assistance tool. It helps me write small blocks of code which I then review and clean up. If I were to let it write everything, it would make a decent effort with step 1, and then forget what it did and struggle to advance/fix anything that breaks. It's a great tool, but it's a tool. It's not an SE replacement.
youtube
AI Jobs
2026-03-07T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxnypCJyhJp7h22cKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyVoiW_PSj5ks79daR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6cHvLfmtASvGMHaZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyLQcsG5GdZaBHBudx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwnOPgjDV6aTOF-luB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzyYqr7zcFl5lETrq94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwcwvYgXrnF-0C3f3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxg8K9tDw-CzeLuE2l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwYNeMmoljc26INoC14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxPi6sgKt3LgQ23E554AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"]}
```
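Note that the final record above ends in `"approval"]}` where valid JSON would require `"approval"}]`, so strict parsing of this response fails; that is consistent with the all-"unclear" row in the Coding Result table. A minimal sketch of a tolerant lookup that degrades to "unclear" on malformed output, assuming Python; the helper names `parse_raw_response` and `coding_result` are illustrative, not the tool's actual API:

```python
import json

# Dimensions taken from the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_raw_response(raw: str) -> dict:
    """Index a raw LLM coding response by comment ID.

    Returns an empty dict when the model output is not valid JSON
    (e.g. a stray trailing bracket), so no record is silently trusted.
    """
    try:
        records = json.loads(raw)
        return {r["id"]: r for r in records}
    except (json.JSONDecodeError, TypeError, KeyError):
        return {}

def coding_result(comment_id: str, raw: str) -> dict:
    """Look up one comment's codes, defaulting every dimension to 'unclear'."""
    rec = parse_raw_response(comment_id and raw).get(comment_id, {})
    return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
```

With this fallback, a broken response yields the same all-"unclear" row shown in the table rather than a crash or a partially trusted record.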