Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Ai can do all of that. I'm just not afraid of it like the rest of the cowards…" (ytc_UgyQBb6BG…)
- "I wonder if we are living in the final days of the human species... I have never…" (ytc_Ugyyp49VP…)
- "Ah yes, ai chatbots and the dark secrets they make us spill. Fuck me dead if tho…" (ytc_Ugx_bQf-t…)
- "This guy is a m0r0n, don't let the fancy degree's and high pay fool you. He's an…" (ytc_UgzQgPjir…)
- "One issue not being addressed is the prediction that fossil fuels will run out i…" (ytc_Ugy5noRfV…)
- "Notice he said people need to feel like they have agency because currently we do…" (ytc_UgxVAPOyZ…)
- "I talked with gimini so long the ai talked in a sentence and then after I asked …" (ytc_UgyrXgvFw…)
- "Have you considered that the customers who _don't_ encounter any issues with the…" (ytr_Ugyk-dyD8…)
Comment
Multi agentic approach has not worked well at all for me in terms of engineering. The agents aren’t reliable enough to work autonomously, ESPECIALLY on anything involving security or databases. I also worked on a project with very low tolerance for slow/poor performance, and I have found AI does not do well within those constraints. When I complained that it slowed me down at work, I basically got told I am bad at prompting…
Source: youtube | Posted: 2026-03-15T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzKoCUto3F02TXWyTN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxq9-CaS40ARObpVKF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyFwi0SZh4cUSc_v154AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxCCTQo6ciPW5n3qw14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxT7gCSECRzzVKCb694AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxlpQwI9xcg6bAhHmx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwaiz7eZc-u-hu5TMJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxoHsTkeBYifApgOD54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyGhZFRRNd4FZ1wvqN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyJI377JkvYRdbiq8R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
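Before a raw response like the one above is stored as a coding result, it helps to check that each row carries an `id` and that every dimension value belongs to the codebook. The sketch below is a hypothetical validator, not this project's actual pipeline code, and the allowed value sets are inferred only from the examples shown here; the real codebook may contain more values.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the real codebook likely defines additional values.
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"mixed", "outrage", "approval", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must be an object with a comment ID.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Every dimension must be present with an allowed value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical input: one valid row, one with an out-of-codebook value.
raw = (
    '[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"ytc_y","responsibility":"alien","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"}]'
)
print([r["id"] for r in validate_codings(raw)])  # prints ['ytc_x']
```

Rows that fail validation can then be queued for re-prompting or manual coding rather than silently entering the results table.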