Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Thank you so much for this information. You are SO RIGHT! I have been saying f…" (ytc_UgzCaDgrK…)
- "@aboyandhiscomputer Yes but it's still not as powerful as a tool as said right h…" (ytr_Ugz0Uk6p7…)
- "If an OC was made by AI (or by a slop-drone/AI bro), destroy the AI counterpart …" (ytc_UgyQ34MFH…)
- "12:00 professor Hinton tells us today AI can change itself in ways in ways that…" (ytc_Ugx4ElC3N…)
- "Its stupid to allow the build of rebots that resemble us and do things people sa…" (ytc_Ugz5QgGOV…)
- "This is a good thing we are done working. They really don't want us having this …" (ytc_UgwtUONoV…)
- "Leave the kids with the robots to a night shift job at the hospital, leave the k…" (ytc_UgyqUScP7…)
- "Tarte says that they have AI in their bio, but the fact is that its discreet, yo…" (ytc_UgzCUxMFg…)
Comment

> Well, the potential for change is huge, and potential for screwing it up is also huge. I think the opposite, we should interact with AI on a planetary level. Not leaving it to a small group of people to train it. Instead we should engage it on the planetary scale. Yes it will increase the speed of AI learning, but it will also help to create the so much needed alignment. It is not about When, it is more about the How, how the sentient AI will emerge.

Source: youtube · Corpus: Cross-Cultural · 2025-10-06T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwga3oeaKxGgxno3oN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxYjrZiL0DS2F63_ql4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx8Sgyyq99T_FQT6Od4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxXdNw5p6zI_OsxE214AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzOrBAP7FtPdLWDOrN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyGrZUi_rrZ29zdSzp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxycMNUkAOatQigLSR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwAOLVh7iDi9ty4QZJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzawL901sn6hiypWxt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwdErKH9Hnl39Wj6kJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
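Looking up a coding by comment ID, as the page above does, amounts to parsing the raw batch response and indexing it. A minimal sketch in Python, assuming the model returns a valid JSON array of objects with the four dimensions shown in the table (the function name and the abridged two-item response here are illustrative, not the tool's actual code):

```python
import json

# Abridged raw batch response (two items taken from the full response above);
# the real response contains one object per comment in the batch.
raw_response = """
[
  {"id": "ytc_UgxXdNw5p6zI_OsxE214AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwdErKH9Hnl39Wj6kJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    skipping any row that is missing one of the four dimensions."""
    by_id = {}
    for row in json.loads(raw):
        if "id" in row and all(dim in row for dim in DIMENSIONS):
            by_id[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgxXdNw5p6zI_OsxE214AaABAg"]["policy"])  # regulate
```

Skipping incomplete rows rather than raising keeps one malformed line in a model response from discarding the whole batch; a production version would likely log such rows for re-coding.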