Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This Chat GPT isn't smarter than us. It was programmed to give responses, that w…" (ytc_UgzW0GPbC…)
- "That's dumb to fight and Robot there mind out of Meadow if you want to win Again…" (ytc_UgyQ7-KN3…)
- "No, I disagree. Just because it was public and free *does not make it yours* *…" (ytr_UgzkAuC5d…)
- "She is literally from a game where AI like her develope emotions and it doesn't …" (ytc_UgwwAWeDG…)
- "Only people who are totally detached from reality believe UBI will ever happen. …" (ytr_UgyHDrdxb…)
- "We use it for tiny quick projects and POC. Like simple automation tools and reac…" (ytc_Ugyt7QbtV…)
- "One thing to account is that some or many superiors than can't shift guilt on em…" (ytc_UgyexxrGy…)
- "This is possibly the best podcast episode I have ever seen, wrapped in the presu…" (ytc_UgwDP78pQ…)
Comment
There is more to worry about than effective tweets. The singularity is more than just the development of AGI. It is the necessary convergence of humans with AGI. Musk knows it, too. That is why he developed neuralink. His stance is that AGI and the singularity are inevitable, and he is trying to ensure these developments are in ethical hands.
Problem is; Everybody else who works in AI thinks the same way, even if their ethics are perverse.
Source: youtube
Topic: AI Governance
Posted: 2023-04-18T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
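The "Coding Result" table above is a two-column rendering of a single coded record. A minimal sketch of how such a record could be rendered, assuming the record's keys mirror the JSON keys in the raw response below, with a hypothetical `coded_at` field for the timestamp:

```python
def render_coding_table(record: dict[str, str]) -> str:
    """Render one coded record as the two-column markdown table shown above."""
    # Mapping from assumed record keys to the display labels used in the table.
    labels = [
        ("responsibility", "Responsibility"),
        ("reasoning", "Reasoning"),
        ("policy", "Policy"),
        ("emotion", "Emotion"),
        ("coded_at", "Coded at"),
    ]
    rows = ["| Dimension | Value |", "|---|---|"]
    rows += [f"| {label} | {record[key]} |" for key, label in labels if key in record]
    return "\n".join(rows)

# Example record matching the table above (field names are assumptions).
record = {
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
    "coded_at": "2026-04-26T23:09:12.988011",
}
print(render_coding_table(record))
```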
Raw LLM Response
```json
[
{"id":"ytc_Ugy1yXSq9QA_S5Zh0Ml4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw1aD5wNQOEPQA45EB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzWY1ltIrhtZcRccI54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwEs4KRepJrYTgOnLV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxyyoHfeGjorKdzGhl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwxxaW1E0AfiVTJBrB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzlkJogxkpjjW37CrJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxRKldcsk09Ai3Bnnh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz1glUxdIvJjCroRJN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzcrr0Fj-kMBtsPEp14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
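A raw response like the one above can be parsed and indexed by comment ID to support the "look up by comment ID" view. A minimal sketch, where the allowed label sets are inferred from this sample (not from the pipeline's actual codebook) and rows with unrecognized labels are dropped:

```python
import json

# Allowed label sets per dimension, inferred from the sample response above.
# This is an assumption, not the pipeline's authoritative codebook.
LABELS = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def index_codes(raw_response: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return a dict keyed by comment ID, skipping rows with unknown labels."""
    indexed = {}
    for row in json.loads(raw_response):
        comment_id = row.pop("id")
        if all(row.get(dim) in allowed for dim, allowed in LABELS.items()):
            indexed[comment_id] = row
    return indexed

# Look up one coded comment from a small excerpt of the response above.
raw = ('[{"id":"ytc_Ugzcrr0Fj-kMBtsPEp14AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = index_codes(raw)
print(codes["ytc_Ugzcrr0Fj-kMBtsPEp14AaABAg"]["policy"])  # regulate
```

Validating against a fixed label set before indexing catches the common failure mode of the model inventing an off-schema label mid-batch.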