Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- Using AICarma has given me a better understanding of how customers perceive my b… (ytc_Ugy6duQ88…)
- I have to add, after some more experimenting with AI tools, there are AIs that d… (ytr_UgzyKHTtt…)
- use ai to work for the human and then everybody can do whatever they want, no ne… (ytc_Ugwn-3JzP…)
- The godfather of AI announced we are now at risk of human extinction. This is no… (ytc_Ugzc7uLXt…)
- I would love to see AI come up with an efficient and practical way to harness th… (ytc_UgwV0zwZO…)
- @ 4:00 Not that they created sentient AI, however, if you did, claiming "polic… (ytc_Ugw5itJjt…)
- Zane tricked ChatGpt. he must have given it instructions at the bigining. we nee… (ytc_UgxzOoBwe…)
- If AI replace all college degree require jobs, then everyone will have to get s… (ytc_UgyriP2fm…)
Comment
I'm not sure if you'll see this Hank, but I think some important questions to ask would be "What does AI super intelligence going bad entail?" and "If it isn't capable of emotion then where is it's motivation coming from?". Would an electronic super intelligence be governed by pure logic? Would it not have a survival instinct? No sense of self preservation? Would it just decide the most efficient thing to do would be to just shut it's self off to save energy?
youtube · AI Moral Status · 2025-10-30T21:5… · ♥ 17
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
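Each coded record carries the four dimensions shown in the table. A minimal validation sketch, assuming the value vocabularies are limited to those observed on this page (the actual codebook may define additional values):

```python
# Dimension vocabularies observed in this page's sample; the real
# codebook may allow more values than these.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "user", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def invalid_fields(record: dict) -> list:
    """Return the dimensions whose value falls outside the observed vocabulary."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

# A record copied from the raw response below passes cleanly.
record = {"id": "ytc_UgzCQ-iUBGiJbotDjLl4AaABAg", "responsibility": "ai_itself",
          "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
print(invalid_fields(record))  # [] -> record passes
```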
Raw LLM Response
```json
[
  {"id":"ytc_UgwmcqBev0edjnm6cNd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzCQ-iUBGiJbotDjLl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzcX1OUtFKy5HoPyrl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwgbJcYsdnmRb3gSkV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzIboIypFQV5iJnjel4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxYVeffix9c9QdsB7F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzqPmTgty0NkHhcLbx4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyJxsH41oc6cTXADVR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUZ0_TvBsN8LCOJ8N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzdAl0JGAMg-NLVBHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
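The raw model output is a JSON array of coded records, one per comment, each carrying its YouTube comment ID. A minimal sketch of the look-up-by-ID step, using two records copied from the response above (the function name is illustrative, not part of the tool):

```python
import json

# Raw model output as shown above: a JSON array of coded records.
# Two entries are copied here verbatim for illustration.
raw_response = """
[
  {"id":"ytc_UgwmcqBev0edjnm6cNd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzCQ-iUBGiJbotDjLl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
"""

def index_by_comment_id(payload: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    records = json.loads(payload)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzCQ-iUBGiJbotDjLl4AaABAg"]["emotion"])  # fear
```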