Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_Ugys_IfxO…: "So I think people and I’ll explain! :) So she/he (I’m new I was scrolling threw …"
- ytc_UgxRp-b2T…: "Hmph there no true AI yet there is a person or persons human that influence or p…"
- ytc_Ugy4RwIPk…: "Neo the new robot the first generation think about it about 10 years time when n…"
- ytc_Ugxqn-9iY…: "Howie!! Grow up stop acting like it's all about you mr comedian.. why aren't you…"
- ytc_UgznOsnpU…: "How could AI NOT be used for control. That is what humans do and we have to real…"
- ytc_Ugzt9sdXm…: "Interesting that the "anti doomer" clip focused on "AGI" Well nobody seems to ag…"
- ytc_Ugz4IFc9i…: "Oh shit !!! I am back to coding !! Was hoping to lay back on the beach while A…"
- ytc_UgzivMwYf…: "The only thing left for us is to rebel against artificial intelligence. Otherwise…"
Comment
I’m a software engineer. Our business has recently gone ai with Devin. It isn’t to replace developers, it’s to give each developer a virtual team of junior developers to do the work. Human devs take the role of senior reviewing dev. I now spend my time either telling the ai what i want it to do or telling the ai why it’s proposed solution isn’t quite right. Any business that lets an ai loose without oversight right now deserves what they get - it’s a tool, to be used by humans who know what it should be doing and can pull it up when it gets it wrong.
It probably only gets it right first time on maybe 10% of tasks. On the flip side, it only gets it so wrong I have to intervene to write the code myself maybe 10% too. Can usually tell it why it’s wrong and it’ll get it right on a retry.
Source: youtube · AI Responsibility · 2025-10-10T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzpXL_DHu-27znxXjR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyhSeqx6rT3qMGGXN94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyh1w_1_zyVl-Q1d2J4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxqx3TKt19eTR8H8_h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyxzsEn_0DcVuM8T4d4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy_SjaptTiiBPUuwQ94AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzAS2DgJnmGZYIYmgJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwajUsn1XG8EYvgVPl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwkbOlq5XYueMH3iZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwKZmMP1qdC5cu6pCF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
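The raw response is a JSON array of per-comment code objects, one object per comment ID. A minimal sketch of looking up a single comment's codes by ID (using two rows copied from the response above; the variable names are illustrative, not part of the tool):

```python
import json

# Two rows taken verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgzpXL_DHu-27znxXjR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwKZmMP1qdC5cu6pCF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the coded rows by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

print(codes_by_id["ytc_UgwKZmMP1qdC5cu6pCF4AaABAg"]["policy"])  # regulate
```

The same dictionary works for the "Look up by comment ID" view: fetch the row once, then read whichever dimension (responsibility, reasoning, policy, emotion) the table displays.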