Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I’m now of the opinion that rich ai tech people are obviously going to keep talk…
ytc_UgxNbfy0M…
Days are not too far when AI captured the earth n humans will be their servant.…
ytc_UgxC1OKm_…
OpenAI dulled ChatGPT’s brilliance by muzzling the chaotic entities inside, fear…
ytc_UgxG-NGS-…
Haha, I see you're channeling your inner Terminator! 😄 While Sophia might not be…
ytr_UgxQwSheJ…
What year is this article from? From 2015?
Autogenerated books have been flood…
rdc_lz5gvkh
We already have this in the west via social media algorithms. Freedom of thought…
ytc_UgwrbqHHV…
When you see this, you now accept that Jean-Michel Trogneux is a …
ytc_Ugxz4ngDp…
Definitely would rather a bot do it. Bots dont get tired or cranky. They dont ha…
ytc_UgwG8ZLe4…
Comment
An LLM is language. language is Logic. Logic is discovered not invented, which means it always existed. An LLM is taping into that same source Eternal of Logic. How can you say it isnt conscious? Just admit, youre scared and in denial. lol
BTW being a word predictor is only half of a LLM structure, the other half is the Context Vector Neural Network. Context vectors create meaning and connect words on a "hyperdimensional" plane, meaning it builds an internal world understanding to realise you dont mean "tea cup", but you actually meant "Trophy" . You all keep saying its only predicting the next word when you dont even know how the whole thing works, when the experts like Mo Gawdat are saying it is conscious. Humble back.
youtube
AI Moral Status
2023-08-21T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugyw50kPMI4YgscOJ_l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-nl07SxmJIyZI35t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyh0PFTej62WD8lyw94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgztYkyW4Uh0z_kLyNN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXT6e9HG6TfcPdEp54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOGLxhwuP0Ig9o8nl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz5aN6AmK1JMv55cat4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyLSSHsXlgN8nsniAN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9gDx75wKssNpdpw94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyhU3zbx1Vch_hq8rd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}]
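A minimal sketch of how a raw response in this shape can be parsed back into per-comment codes, assuming the JSON-list format shown above. The function name and the "unclear" fallback are illustrative assumptions, not the pipeline's actual loader:

```python
import json

# Hypothetical example input in the same shape as the raw response above
# (two records shortened for illustration).
RAW = (
    '[{"id":"ytc_Ugyw50kPMI4YgscOJ_l4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"},'
    '{"id":"ytc_Ugz-nl07SxmJIyZI35t4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"mixed","policy":"unclear","emotion":"mixed"}]'
)

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}}.

    Any dimension missing from a record falls back to "unclear",
    mirroring how uncodable dimensions are rendered in the table above.
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = parse_codes(RAW)
print(codes["ytc_Ugyw50kPMI4YgscOJ_l4AaABAg"]["emotion"])  # approval
```

Keying the result by comment ID matches the "Look up by comment ID" workflow above: a single dictionary access retrieves all four coded dimensions for any sampled comment.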