Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.

Random samples
- "yep been using GPTHuman AI for a while now, it's really solid when it comes to m…" (ytc_UgxebWKzN…)
- "AI can be abused just like many other things. We can’t blame it. I blame our hum…" (ytc_Ugx_AC-4T…)
- "I would love to know what counter measures or bipassed controls were compromised…" (ytc_Ugzw-KSz-…)
- "I wondered where these questions came from some of the questions were good. Most…" (ytc_UgzVLmeWb…)
- "I once asked ai to make an exact copy of bohemian Rhapsody word for word, note f…" (ytc_Ugxe10RcY…)
- "Krystal's heart is in the right place in her support for American workers in gen…" (ytc_UgwcFFjF4…)
- "No ai shouldn't be stopped because humans have limits of inteligent or physica…" (ytc_UgzJIJmMC…)
- "Luckily, this whole exercise is constrained by the natural world. AI depends on …" (ytc_Ugy5EtHXk…)
Comment

> My lecturer said something interesting the other day. We work in computers every day in this degree, and she said that she thinks the AI excitement will likely die out in a few years. Just like with things like VR. It was this amazing thing until people realized it's uncomfortable after 30 minutes. Then it kind of just became another things. Or like the whole Apple glasses thing. I'm not sure how right she is, but I think it's interesting to think about.

youtube · AI Moral Status · 2025-08-20T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwHaqFAx0o2LyR91-94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxFfIwW8gnD7maa_B54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzYQ42Aw_ZvYXaldkV4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwAbZ1al9G37dDf9FB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzqaR2KHOFWnGrL-Bx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwhfbVwYhyMJTfI6u54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw3-tqe6gtHzXKQcJh4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx23ueriAnCSIiwxq94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwtk-E7LCX5djtJYJt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxQQHmMzXUdomqgsJB4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
```
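Since the raw LLM response is a JSON array of per-comment coding records, looking up a coded comment by ID amounts to parsing the array and matching on the `id` field. A minimal sketch, assuming the response is available as a string (the `lookup` helper name is illustrative, not part of the tool; the two records are copied from the response above):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment ID
# (two records copied from the response above for illustration).
raw_response = '''[
  {"id": "ytc_UgwHaqFAx0o2LyR91-94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwtk-E7LCX5djtJYJt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def lookup(raw: str, comment_id: str):
    """Return the coding record for comment_id, or None if it is absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup(raw_response, "ytc_Ugwtk-E7LCX5djtJYJt4AaABAg")
print(record["emotion"])  # -> outrage
```

In practice a response covering many comments would be indexed once into a `dict` keyed by `id` rather than scanned per lookup.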