Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
If AI is already smarter than us humans in a lot of things, then it can be made to be smarter than us in a lot of other things. The truth is that AI simply has not been developed to be better than us in a lot of areas that it could be developed to be better than us. The idea that there is this magical date where ASI arrives is dumb. ASI has already arrived in chess, AGI has already arrived in a lot of things such as computer programming, and many other intellectual pursuits are under development in a lab somewhere so that ASI will soon be upon us in those areas. Humanoid robots will most likely start entering the picture in a few years and that will open up a whole host of other areas that AI can be developed to be better then we humans. In other words the truth is that ASI is arriving in stages. ASI is already here in many areas, and many more intellectual pursuits are under development in the labs, and even more are on the drawing board.
Source: youtube · Cross-Cultural · 2025-10-03T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzZl11bkSUvRHEBV594AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFaK-sCTx6N-EMkO14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz2l7-5GPGF87OsPCZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZaybg8CjeR1AaSEZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxYbXh44p63k11dLnt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwb1I8bksAHXN251ih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwd-sXJHhHTdHiwHRN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwhkO8iJnqOaiRVkmJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxTek9oh9wZ7pNc-sR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw_VU9j6kJk1MFeJd14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
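Since the model returns one JSON array covering a whole batch, looking up the coding for a single comment means parsing the array and indexing it by `id`. A minimal sketch (the two entries below are copied from the raw response above; the variable and function names are illustrative, not part of any tool API):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, truncated here
# to two entries from the batch shown above.
raw_response = '''[
  {"id":"ytc_UgzZl11bkSUvRHEBV594AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_VU9j6kJk1MFeJd14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and key each coding by its comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_UgzZl11bkSUvRHEBV594AaABAg"]
print(coding["reasoning"], coding["emotion"])
```

Validating that every entry has the same four dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) before storing it is a cheap way to catch malformed model output early.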