Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@nomore6167 THIS!!! Also, AI statistics on giving factually correct information are terrible. And so is it's statistics on giving people directions to the nearest bridge.
AI is not a person subject to ethics and morality, which therapists are. If they screw you around, you can seek recompense. LLM models are being marketed as anthropomorphised to encourage us to trust it, but it's not and you will have no protection when things go wrong, as they do, with generative AI.
It is currently a hyperadvanced predictive text and you better hope the text it's predicting from was pulled from a good place.
When I see films, series and pop psychology articles, I cringe as someone with an education in the topic. And that is what they're training AI off of. (That and Reddit, which we all know how lovely that cesspool is.)
Platform: youtube
Video: AI Moral Status
Posted: 2024-09-03T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgxNDqj1FlDsiAFspGl4AaABAg.A7ps2H0nu-eA7vD3mWnFO","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugy46Wf0e71bUX0wpaJ4AaABAg.A7pWr6FJeKyA7pbBkKZ6qF","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgyNzMF9XuRVCKMzlCt4AaABAg.A7oyWSEGHODAI_z0yFw4vY","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugzi6o59I-V8wTiIvop4AaABAg.A7or9F8pyZ-A7rWw1uOVTw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugzi6o59I-V8wTiIvop4AaABAg.A7or9F8pyZ-A7r_bI_QWOA","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzVIinh10SJs8Gpc_x4AaABAg.A7oGBfO-GFWA7q2PFqoFYl","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwGwDG6rKFj6Cy7r5J4AaABAg.A7o8YfEGH6qA7t7UP6kWDw","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytr_Ugxz0i65KH2Jil0V69p4AaABAg.A7myu7y_5ZwA7oMJLZnVCs","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwXitAvpkr_fWPJhZ94AaABAg.A7mU_knRuTAA7oL-CGzYK2","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwFO4vYTHYH2f6Eil94AaABAg.A7mK0I42AQeA7oIh6FA2K2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
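A raw response like the one above can be parsed into a per-comment lookup, which is what "inspect the exact model output for any coded comment" amounts to. The sketch below is a minimal illustration, not the tool's actual code: the allowed label sets are inferred from the values visible in this sample response plus the `unclear` fallback shown in the coding-result table, and the record IDs in the demo data are hypothetical placeholders.

```python
import json

# Hypothetical demo payload in the same shape as the raw LLM response above;
# the IDs here are placeholders, not real comment IDs.
RAW_RESPONSE = """
[
  {"id": "ytr_example1", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_example2", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
"""

# Label sets assumed from the sample output; the real codebook may differ.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval",
                "mixed", "indifference", "unclear"},
}

def index_by_id(raw: str) -> dict:
    """Parse one raw LLM response and index its coded records by comment ID,
    coercing any off-schema label to 'unclear' (mirroring the table above)."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                rec[dim] = "unclear"
        indexed[rec["id"]] = rec
    return indexed

codes = index_by_id(RAW_RESPONSE)
print(codes["ytr_example1"]["responsibility"])  # company
```

Indexing by ID also makes it easy to spot comments the model skipped: any ID sent in the batch but absent from the parsed dict got no coding at all.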