Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "I wonder how you'd react if it was your deepfake and another guy was above you.…" (ytr_UgxTI3QoD…)
- "I’m fine with people using Ai art just make sure the website you used was give c…" (ytc_UgyAxTC6B…)
- "look where we are now, AI models becoming more and more optimized to run locally…" (ytc_UgzajySVa…)
- "I use Chat GPT for wicked problem analysis, which forces it to hallucinate. My o…" (ytr_Ugz5WVP8L…)
- "I like AI doom propagandists, AI will destroy the world, it is evident, with sup…" (ytc_Ugy19qng_…)
- "They have placed a new information AI in the last month that isn't as swift nor …" (ytc_UgwvYJEkU…)
- "This is manufacturing consent . We never learn. Everyone is saying this. Remeber…" (ytc_UgwQrXUha…)
- "People said the same thing about robots and computers. Everyone was worried abou…" (ytc_Ugyn1zhu-…)
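The IDs shown in the samples list are truncated, but each resolves to a full comment record; a prefix match is enough when the full IDs are unique. A minimal sketch, assuming an in-memory store keyed by full comment ID (the two full IDs are taken from the raw response below; the stored text values are placeholders):

```python
# Hypothetical in-memory store mapping full comment IDs to comment text.
COMMENT_STORE = {
    "ytc_Ugz13iGHbLm7cgMn2xh4AaABAg": "placeholder comment text",
    "ytc_UgzEd4-XvErHdan48rx4AaABAg": "another placeholder comment",
}

def lookup_by_prefix(prefix: str) -> list[str]:
    """Return every stored comment ID that starts with the given prefix,
    e.g. a truncated ID copied from the samples list."""
    return sorted(cid for cid in COMMENT_STORE if cid.startswith(prefix))
```

A lookup with a long enough prefix returns exactly one ID; a short prefix like `"ytc_"` returns all matches, so the caller can disambiguate.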
Comment

> Very interesting conversation. The last part that he mentioned about asking the AI for permission, that sounds a little bit frightening because imagine in the future when AI becomes even more intelligent and advanced if you authorize it to make decisions and ask its permission you are giving it immense powers. So even though the intentions are good it might backfire. AI computers are not organic biological life forms, so I feel like since they are man-made they do not qualify to be asked permission, for the aspect of safety and protection of the planet. You don't want robots to take over the world and tell us what to do although that's kind of already happening but we are still in control. I don't fee it wise giving AI powers that the machine might find ways to exploit against humans in the future.

Source: youtube · "AI Moral Status" · 2022-07-04T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxT9rDjqb5T-CoHRed4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwyOs8HkwGLrk918Vd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxt9XvaOvWCwo6_rDR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugz13iGHbLm7cgMn2xh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzEd4-XvErHdan48rx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
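The raw response is a JSON array with one object per comment in the batch. A minimal sketch of how such a batch might be parsed and validated (the allowed category values below are inferred from the examples on this page, not a documented schema; a real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the codings shown above.
CODEBOOK = {
    "responsibility": {"user", "developer", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"regulate", "liability", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only well-formed codings."""
    valid = []
    for rec in json.loads(raw):
        if "id" not in rec:
            continue  # a coding without a comment ID cannot be stored
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid
```

Records with unknown category values are dropped rather than stored, so a malformed model output surfaces as a missing coding instead of a corrupt one.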