Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
Well AI can become very powerful, if we continue development. There are many things we create that were intended to help the human race, but ends up creating issues we can't solve. In terms of AI we put everything on the internet. It's like the AI's perfect encyclopedia on how to think, behave and imagine like a human. We have tons of movies, books about the bad evil AI/ and how the humans beat it. Ai can easily pull reference from that. 🤷♀️ Just my thoughts, nothing concrete.
youtube | AI Moral Status | 2025-06-10T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugwi0WMYmKvFA-TjvhZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugwkpkslbd0S5Gn_dw94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgxFTdJ55OfZyCk9C6N4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgyzWI-mdLNrAE04_Vt4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgxpYRq-CHaHZN699ut4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzvUUCUnIjou4ZBqgF4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxBDQ5f71KSxrZ_W094AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzPwmolqpOLy9o44yp4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugxp_bZCtzaRGxqO_fR4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgzJnmj7ebDIfa57mcp4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"}
]
```
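The "look up by comment ID" step above can be sketched in Python: parse the raw model output as a JSON array and index it by `id`. This is a minimal illustration, not the tool's actual implementation; the variable names are ours, and the array is truncated to two of the ten entries shown above for brevity.

```python
import json

# Raw model output: a JSON array of per-comment codings
# (truncated to two entries; field names match the full response above).
raw_response = """[
  {"id": "ytc_Ugwkpkslbd0S5Gn_dw94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyzWI-mdLNrAE04_Vt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]"""

# Index the codings by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugwkpkslbd0S5Gn_dw94AaABAg"]
print(coding["responsibility"], coding["emotion"])  # distributed fear
```

In practice the model's response would be validated before indexing (e.g. checking that each object carries all four dimensions), since a malformed entry would otherwise raise a `KeyError` only at lookup time.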