Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "chatgpt still even was using the "not just x, but y" when telling someone to rop…" (ytc_UgxkZwSF5…)
- "James i like humor (including that part) also in the beginning there was the but…" (ytr_UgiF1Gtuw…)
- "It’s not ai that’s the danger. It’s the humans that control ai that pose the thr…" (ytc_UgwhTMdf5…)
- "The meta writing is on the digital wall. It's becoming increasingly clearer that…" (ytc_UgycNJHQC…)
- "... And over time people will lose the ability to communicate with their own kind. This, it seem…" [translated from Ukrainian] (ytr_UgzaxVTOx…)
- "*This is such poor research, or worse...* *Most* of the complaints about AI in …" (ytc_UgyUdDtwJ…)
- "This book is whispering its tales to you "Game Theory and the Pursuit of Algorit…" (ytc_Ugw5fxLXv…)
- "The logic is so loaded in this one. He is ASSUMING self driving cars would kill …" (ytc_Ugw6bpKGX…)
Comment
Elon is aware of the possibility of AI being built on “perceived “ truths which will then influence beliefs of what is true. So his goal is to build AI whose algorithms are built on data sources that are as close to the truth as possible. To achieve this there has to be a cleansing process which builds the AI Data Source on as clean (True) data as possible. Then the AI algorithms can be built and passed thru a vetting process that insures that the AI Results of any objective or query fed to the AI Tool from the Clean Data Source is not harmful to humankind.
youtube
AI Governance
2023-04-18T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy4Lk4Hwb_t1Ynw8eh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxEDFr0EaBAw0M-BcR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw_Y-nGre8nh5ouNxt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzAQbVn9bOeHw5Z554AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz3PqkvW4b2D3Ox2WZ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz1Ws-eE7fK_XJfnSt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugxc9USbP-7ab37vbet4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzqec7V6Elq_7Ok28J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzSuHHXmuv4nZNzacp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxC3MegP7vcgiFlnpx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
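A raw batch response like the one above can be parsed into a per-comment lookup table and validated against the code frame. The sketch below is a minimal example; the allowed value sets are an assumption inferred only from the values visible in this batch (the full codebook may define more categories), and the function name is illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, as observed in this batch.
# ASSUMPTION: inferred from the sample JSON, not the official codebook.
CODE_FRAME = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "industry_self", "regulate", "ban"},
    "emotion": {"mixed", "approval", "fear", "outrage"},
}

def parse_llm_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the known code frame."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {}
        for dim, allowed in CODE_FRAME.items():
            value = row.get(dim)  # missing dimension -> None -> rejected below
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            codes[dim] = value
        coded[cid] = codes
    return coded
```

With the batch above, `parse_llm_batch(raw)["ytc_Ugw_Y-nGre8nh5ouNxt4AaABAg"]["policy"]` would return `"regulate"`; a malformed or out-of-frame value fails loudly instead of silently entering the coded dataset.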