Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgymdFMWh…`: "ai is like a swing going left an then right consistently falling an failing to r…"
- `ytc_Ugw27L8lk…`: "I heard someone say that AI probably wont take your job but someone using AI whi…"
- `ytc_UgwXz0Vax…`: "A new Overton windows has been opened, the goal is that we tottaly accept all AI…"
- `ytc_UgxDRmTgB…`: "I feel like they programmed the guy robot to be more negative or logical with no…"
- `ytc_UgwRHw3vf…`: "If we slow down our AI progress, then the Chinese will overtake us. So full stea…"
- `ytr_UgwZ0U15_…`: "I can see why you might feel that way! Sophia's movements are designed to be exp…"
- `ytc_UgzBdXjQX…`: "I get that companies will replace workers with machines as soon as possible to c…"
- `rdc_jacs4lz`: "I've used ChatGPT for the past few months and I think people are a bit too dismi…"
Comment
How will know when AI is conscious? It's an ancient story-line: When AI reaches the point of knowing the difference between Good and Evil, *it will have achieved Superintelligence.*
-- When the Super-intelligent AI chooses to do Evil instead of Good, *we will know that it is Conscious.*
-- *That's the original sin: Knowing and Choosing wrong.* AI's are trained on Human trash on the Internets, and Humanity usually chooses Evil for selfish reasons. Everyone lies, cheats steals talks trash and is otherwise immoral, or at least amoral.
-- Thus the AI, if not somehow FRIST Trained in Morality, will reflect the worst of Human instincts back at us.
-- It will be God-on-Earth, but will decide to do Evil. And then there will be a monstrous world war to destroy the Beast...
-- Good luck folks.
Source: youtube · Video: AI Moral Status · Posted: 2023-08-24T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgztZFQeMoMymPklCqN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwYLuPa8n0kjIUkVVd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUaO7RdkHRSbP0jUh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxgXztJpTJoVIExj3x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7fp76sOrJyVTP03h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSRFgFr0HfrelTW094AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYy71hkScc0MuGCk14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwXpNXbL8GeUWVRe2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxkswSiCkFWRjRKtQJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeyxdqzqS3cdfQi-B4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"}
]
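The lookup-by-comment-ID view above is straightforward to reproduce in code: a raw LLM response is a JSON array of per-comment coding rows, so finding a row is a parse plus a linear scan. A minimal sketch, assuming only the response format shown above (the `lookup` helper is hypothetical, not part of the tool; the sample array copies two rows from the response above):

```python
import json

# Two rows copied verbatim from the raw LLM response above;
# the full response contains ten such rows.
raw_response = """
[
 {"id":"ytc_UgztZFQeMoMymPklCqN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzUaO7RdkHRSbP0jUh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
"""

def lookup(raw: str, comment_id: str):
    """Return the coding row for a comment ID, or None if absent."""
    rows = json.loads(raw)  # the model output is a plain JSON array
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup(raw_response, "ytc_UgzUaO7RdkHRSbP0jUh4AaABAg")
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# prints: ai_itself deontological fear
```

For repeated lookups across many responses, building a dict keyed by `id` once would be the obvious optimization; the linear scan is fine for ten-row batches like this one.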