Raw LLM Responses
Inspect the exact model output behind any coded comment. Codings can be looked up directly by comment ID, or reached through the random samples below.
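As an illustration of that lookup, here is a minimal sketch in Python, assuming the coded results are exported as a JSON array in the same shape as the raw response at the bottom of this section (the file name `coded_comments.json` is hypothetical):

```python
import json

# Hypothetical export of the coded batch; each record has the same shape
# as the raw LLM response shown at the bottom of this section.
with open("coded_comments.json") as f:
    records = json.load(f)

# Index the batch by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict:
    """Return the coding for one comment (e.g. a ytc_/ytr_/rdc_ ID)."""
    return by_id[comment_id]

print(lookup("ytc_UgyQJRRMBUJUOQUWAhZ4AaABAg"))
# -> {'id': ..., 'responsibility': 'developer', 'reasoning': 'consequentialist',
#     'policy': 'regulate', 'emotion': 'fear'}
```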
Random samples — click to inspect:

- "You should also mention the objective bottleneck of LLMs: all of their learning …" (ytc_Ugz8K7gIf…)
- "someone used sora ai to make a video of a crime i didnt do and we went to court …" (ytc_UgzjYKbph…)
- "While ai context engineering, I take these hallucinations into account. Normally…" (ytc_Ugy_RNRZu…)
- "I had an alternative google account where for the longest time wouldn’t show the…" (rdc_n3wvk9m)
- "AI data centers are the worst thing to happen to the environment in recent years…" (ytc_Ugzs3Li1C…)
- "Hmm... 🤔 but the concept of "artist" is simpler and more unambiguous than "art".…" (ytr_UgzJu1L3m…)
- "I think it's fair to say this is one goal. Global Governance with A.I at the hel…" (ytc_UgzjGXbWo…)
- "Ask John Searle, he's kinda the final arbiter. But FIRST, can your AI draw a ful…" (ytc_UgxNHwuWF…)
Comment
In the development of AI, there are good actors (defined as those aware of the risks and working hard to mitigate them) and there are bad actors (those who eschew the risks and are concerned only about their personal potential for profit). If the good actors stop working on AI to assuage their and our fears, that will leave only the bad actors. Nothing will assure the negative outcome more than that. No law passed in the West will stop the Chinese Communist Party from developing their own AI to better control their people, and if you think that won't be used to crush the West, you are a fool.
The only path forward is for the people most aware of the danger to be the ones to develop AI. Only the ones aware of the danger have any chance at all of navigating that danger to bring us to a healthy future.
Source: youtube · Topic: AI Governance · Posted: 2023-06-04T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
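For reference, a typed sketch of one coding record. The field names mirror the table above; the values in the comments are only those visible in this section's sample, so the real label sets may be larger:

```python
from dataclasses import dataclass

@dataclass
class Coding:
    """One coded comment, matching the dimension/value table above."""
    id: str
    responsibility: str  # seen here: developer, user, government, ai_itself, none
    reasoning: str       # seen here: consequentialist, deontological, virtue, unclear
    policy: str          # seen here: regulate, none, unclear
    emotion: str         # seen here: fear, outrage, mixed, resignation, indifference
```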
Raw LLM Response
[{"id":"ytc_Ugygp6mAZ7gRnzNKrH94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx2p18RhOOfL6pBpX54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyywFWt0EdFDZ5Nr6B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyQJRRMBUJUOQUWAhZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzBbCXdJBdgusnGB0N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw8RftwbShS_UK65YR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz-Eaq_aZqrwn8Fr2p4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyvk8Cbf7XJR_ezk414AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyl7Xs8NbLVoxxgci14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyx1DgdlYrkD_I4yNR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}]