Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
1:09:14 As someone who’s had to call a warranty company several times about the same issue, and had a completely different experience with each different human agent…I can see the value of an AI agent so long as they’re programmed to always be fair and just. I’ve had human agents express empathy and do exactly what they’re supposed to do to help, and I’ve had some who clearly don’t care and are irritated at the world and just hang up and close my case without resolving. There’s no way for their boss to really monitor that quality control without spending hours digging into each case and listening to call recordings. It’s actually wasted a lot of their company time having a couple of lazy agents, because I’m now at five separate agents working on the same issue - it just needed one to actually see it through. Not to mention a huge waste of my time. But they’ve had five people now do the work that should’ve been done by one.
Platform: youtube · Video: AI Governance · Posted: 2025-06-16T12:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgybwkUlLjNpqGHwCDN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyK86HVCnfEp2YpsRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwRFOZQ8KEQpf1QQAV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwW7Yd339WGmxUzp5Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw9KbxUxbs29lCHsw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWYEf4uCGRc2uDYGR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwLAGWBiGFois9PJLN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzvVe01CWiXNQ3S-ed4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgynndGtTRlHcJfrBp94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyWl78BZDKiSIboABl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
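The raw response is a JSON array in which each item carries a comment ID plus one value for each of the four coding dimensions. A minimal sketch of how such a batch response could be parsed and sanity-checked before being stored, assuming only the category values that appear in the sample above (the real codebook may define additional values, and `parse_raw_response` is a hypothetical helper, not part of any tool shown here):

```python
import json

# Category values observed in the sample response above.
# Assumption: these sets may be incomplete relative to the full codebook.
OBSERVED = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "developer", "user"},
    "reasoning": {"mixed", "consequentialist", "contractualist", "unclear", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "fear", "mixed", "indifference", "outrage"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID.

    Raises ValueError if an item is missing a dimension or uses a
    value outside the observed sets, so malformed model output is
    caught before it reaches storage.
    """
    coded = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in OBSERVED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={item.get(dim)!r}")
        coded[cid] = {dim: item[dim] for dim in OBSERVED}
    return coded

# Example using one item from the response above.
raw = ('[{"id":"ytc_UgxWYEf4uCGRc2uDYGR4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"industry_self","emotion":"approval"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgxWYEf4uCGRc2uDYGR4AaABAg"]["emotion"])  # approval
```

Indexing by ID is what lets the table above ("Coding Result") be reconstructed for any single comment out of a batch of ten.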