Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
As a background, Elon Musk knew about the dangers of unregulated AI long before this AI boom.
He was the one who started OpenAI many years ago, with the vision of AI safety while the technology was still at the cusp of development so that in the long-term it doesn’t end up detrimental to humanity.
OpenAI is the organization that created Chat GPT. It has been bought by Microsoft and Elon Musk has disassociated himself with OpenAI now. Elon now says OpenAI is far from the original vision he has intended and is disappointed with what it has become.
Too bad humanity is late to realize they’re wrong and Elon was right all along.
youtube · AI Governance · 2023-04-22T03:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxdw6g3OIgZ7PMY15J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxNlvVoc95CntF7t194AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzNh3T2SHiX14j_pJ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwyBmSLWWTmGaU8Xkh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyb3PeG8sI4lq_v6aN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxnTa00XUXLi_b4XBR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyzZTGenW9NXR3G4654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzN-npA1BNeKWUeDIB4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzD1Qya2Ugo7JFs1Zh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw2r9vFxnoKeQDRZ1Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
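For downstream analysis, a raw response like the one above can be parsed and checked against the coding scheme before use. A minimal sketch in Python; the allowed value sets below are inferred from the values visible on this page, not from a documented schema, and the example record is a hypothetical ID:

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# on this page (not an authoritative schema).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into
    {comment_id: codes}, raising on any value outside the expected sets."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim}={rec[dim]!r}")
        coded[comment_id] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"approval"}]')
codes = parse_coding_response(raw)
print(codes["ytc_example"]["emotion"])  # approval
```

Validating at parse time catches off-schema labels (a common LLM failure mode) before they silently skew downstream tallies.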