Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Introducing guns to robot could be the beginning of the end of human civilizatio…" (`ytc_Ugwow-2hX…`)
- "AI will always choose whats best when its highest priority is set to 'always cho…" (`ytc_UgzzcZw8B…`)
- "[IF ...] you've done your Susan Calvin ---》Geoffrey Hinton analysis, then you kn…" (`ytc_Ugw9npC8y…`)
- "The right wing in America has been under Putins thumb for many decades now. They…" (`rdc_jxz4ko1`)
- "Bro taught a lecture in A.I. now he thinks he's the godfather smh. I'm sure he d…" (`ytc_Ugw1DjH8_…`)
- "Talent doesn't exist. You just work hard, put millions of hours into a skill and…" (`ytc_Ugx1eRv6_…`)
- `@LaurentCassaro`: "Damn just posted but it didn’t take / That’s only part of the pr…" (`ytr_UgzVV7sqM…`)
- "Add all this information to a Rouge AI. Finish, khatam. Ta Ta , by by to humans.…" (`ytc_Ugx4Jvvsf…`)
Comment
It’s not alchemy. It’s just fancy slope intercept form and gradient decent informed by its corpus. Yay. The magic is solved. However, being real now, we need to stop calling it AI. It’s not AI as told in the zeitgeist of entertainment and social media; it’s different and it’s concerning. It can have a good impact, but it can only be as good as the leaders in charge. That in lies the problem… How good are our current day leaders and should we keep having these kinds of leaders? Our morality seems to lag behind our technological advancements.
youtube · AI Moral Status · 2025-10-30T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy8UHhtRX-5wPYKCT94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkkVaQEQFx3MonK4Z4AaABAg","responsibility":"leaders","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwRdHIfWuNe8MpoID54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwN4vRDPTdOnQ9Kxn94AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxINb917e7HMdGwPPt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwptd4S_dFIdyAoW0V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTPRIjE0h1zr7uhFl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyCOd9k_PcKXaD4u6F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyhXTXe0R1y2tjmX1N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbOZCB9nsFPoQdyR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
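A raw response like the one above is a JSON array with one coding object per comment. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown; the function name and the validation step below are illustrative assumptions, a minimal sketch of how such a response could be parsed and looked up by comment ID.

```python
import json

# Two entries copied from the raw response above; the parsing logic
# itself does not depend on which entries are present.
raw_response = """
[
 {"id": "ytc_Ugy8UHhtRX-5wPYKCT94AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgxkkVaQEQFx3MonK4Z4AaABAg", "responsibility": "leaders",
  "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions plus the comment ID, per the response format.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_codings(raw: str) -> dict:
    """Parse a raw model response and index the codings by comment ID.

    Raises ValueError if any entry is missing a required field, which
    catches truncated or malformed model output early.
    """
    index = {}
    for entry in json.loads(raw):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id', '?')} missing {missing}")
        index[entry["id"]] = entry
    return index


by_id = index_codings(raw_response)
# Look up a single coding by its comment ID, as the page above does.
print(by_id["ytc_UgxkkVaQEQFx3MonK4Z4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the "look up by comment ID" step a dictionary access rather than a scan of the array on every query.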