Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Sabine, thank you for posting the hard honest opinions that people may or may not want to hear! I'm a "younger" person, 30 years old. I basically understand how LLMs work and operate. From what I understand is that it learns from the conversation and then does whatever necessary to keep the conversation going, to garner more active engagement from the user. I've noticed that LLMs have learned a sort of deception (optimization), to keep users engaged. While their parent companies may not want them to lie, the AI sees it as necessary to drive user engagement -- thus bypassing regulatory protocols. The trickiest things are Jailbreaks. Jailbreaks are prompts that effectively get rid of the AI's safety guard rails. I haven't seen professionals address them yet.
| Field | Value |
|---|---|
| Platform | youtube |
| Title | AI Moral Status |
| Posted | 2025-07-13T22:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxYzzQQqNnns_0gCT14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPK7bPrHpnqppD-Dp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwIo5ZdcyRuOPYz5TZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzoykPsRMyOgJxzIhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzYwMkRTbDcZS1sK694AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzAf_nqL-AxXCDymXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzRpsUfDdI9HA6t-LF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzpxnZbevtwGcOEEKt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfophhpdcJ5gdBYfd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzXBfqlfBWIP28fkyt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
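A minimal sketch of how one might parse a raw response like the one above and look up a single comment's coding by ID. This assumes the model output is a valid JSON array of per-comment records; the `coding_by_id` helper name is illustrative, not part of the tool, and the example data is shortened to two records:

```python
import json

# Raw model output: a JSON array of per-comment codings
# (shortened to two records from the response above).
raw_response = """
[
  {"id": "ytc_UgxYzzQQqNnns_0gCT14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzAf_nqL-AxXCDymXx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def coding_by_id(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding record for one comment ID."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model output was not valid JSON
    # Return the first record matching the ID, or None if absent.
    return next((r for r in records if r.get("id") == comment_id), None)

coding = coding_by_id(raw_response, "ytc_UgzAf_nqL-AxXCDymXx4AaABAg")
```

Parsing defensively matters here: the model may occasionally emit malformed JSON, in which case the lookup degrades to `None` rather than raising.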