Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "LLMs run of a dataset compiled from the internet. The dataset is corrupt from th…" (ytc_Ugw50tZd_…)
- "Can we just not make... AI? Or just keep it simple like it's already. Why would …" (ytc_UgxsV8Wyj…)
- "Putting AI in charge of 'anything' at this early stage of development is like ma…" (ytc_UgxMwdZ3U…)
- "As an artist myself i feel like we really need to make some rules against the us…" (ytc_UgwLbSumZ…)
- "My feelings and thougths about AI are evolving, but I think humans would be wise…" (ytr_Ugx2Cf5WS…)
- "I do feel hopeless, I have been working on my art practicing to create world cla…" (ytc_UgyrVrPrL…)
- "It was a brilliant interview. Unfortunately, she missed the last question about …" (ytc_UgzEVvVAR…)
- "Self regulating social media went so well were going to do AI the same way.😮…" (ytc_UgxnMdTqw…)
Comment
Superintelligence will happen when the AI learns how to discover the world by itself. And it can happen any moment. The limiting factor is not the available data, rather it is the approach. We need a new AI architecture. Maybe a huge swarm of AI agents and find the way to make them work together for one goal. Maybe we need just to find a better solution for LLM's to handle extreme large context windows and have short and long term memory. Anyway, the important thing is: the AI has to learn to think, and develop itself.
Source: youtube · Topic: AI Governance · Posted: 2025-07-03T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
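A coded record like the one in the table can be sanity-checked in code before it is stored. This is a minimal sketch; the allowed-value sets below are assumptions inferred only from the examples visible on this page, not the project's full codebook.

```python
# Allowed values per coding dimension -- ASSUMED from the examples on this
# page (table above and raw response below), not an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "user",
                       "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage", "resignation"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result from the table above
coded = {"responsibility": "none", "reasoning": "consequentialist",
         "policy": "none", "emotion": "indifference"}
print(validate(coded))  # []
```

A record with a value outside the known sets (or with a dimension missing entirely) comes back with one problem string per failing dimension, which makes it easy to log rejects during batch coding.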
Raw LLM Response
```json
[
{"id":"ytc_UgyEfK1oIW7-1HyVEmN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwpEY1xLXV7GbxK3pZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxKPrCynquWdoJHULR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2bTJVvWwCoKZbVSJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQQKe4nIwp8yquxV14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw0Hv_ABajvmhESki14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgymNR95WEBEFM7z4hB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxO7uJAHyYNlUNOYsN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNH6e0zBoUuEMeMCl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxpB7pgQX2XdH_ihWh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
```
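Since the raw response is plain JSON, downstream aggregation is straightforward. A minimal sketch, using the first three records from the response above (the variable names are mine, not part of the tool):

```python
import json
from collections import Counter

# First three records from the raw LLM response shown above
raw = """
[
 {"id":"ytc_UgyEfK1oIW7-1HyVEmN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwpEY1xLXV7GbxK3pZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgxKPrCynquWdoJHULR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
"""

records = json.loads(raw)

# Tally each coding dimension across the parsed records
tallies = {dim: Counter(r[dim] for r in records)
           for dim in ("responsibility", "reasoning", "policy", "emotion")}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

On these three records this prints, for example, `policy {'none': 2, 'regulate': 1}`; the same loop scales to the full batch once all responses are concatenated.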