Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "What an ai so dumb didnt understood between 1 and 50 not 1 to 50 😂😂…" (ytc_UgyteQuhs…)
- "AI is created by humans. It's role should only be for the purpose of facilitatin…" (ytc_UgwQGmupl…)
- "I don't like this. How far are these 💩💩💩💩 people going to go…" (translated from Spanish) (ytc_Ugy5WWnHV…)
- "Buy land. In the future as a multi planetary species earth is going to be in dem…" (ytc_Ugz0LSKK3…)
- "So far I’ve seen AI make a video of scooby doo getting pulled over, as a graphic…" (ytc_UgzJB9Boa…)
- "All these mega-wealthy guys sure aren't worried about endearing themselves to th…" (rdc_m52y36t)
- "In my casual survey of AI, it is at least 30%-40% in error--often laughingly in …" (ytc_UgxuK_B2o…)
- "That wouldn't be an AI that fits their criteria. On the other hand, if someone …" (rdc_dy4eaai)
Comment
"OpenAI was established to promote AI safety" - liar. It was established to make AI open source and take it out of the hands of the big corporations. Altman betrayed, and continues to betray, this trust. Multinationals love regulation - they're the only ones who can afford to pay the price. The issue everybody is dancing around here is, to almost all of these so-called AI ethicists, "safety" means "how can we accumulate more power by using AI, while appearing to care?" The near-term danger is not AI - it's these craven exploiters. The outcome will ultimately be better handles on steering wheels for horseless carriages that don't chafe the hands (since that was such a problem with buggy whips) while repressing almost all small or open source AI work. They are incapable of anticipating the vast space of very real AI dangers, especially the unknown unknowns. Will these regulations apply to the military/police/CCP 🤣/government agencies? How many black projects will be seeking "gain of function" in AI, circumventing any regulation, arguing we can't know how an malicious super-intelligent AGI will behave unless we create a malicious super-intelligent AGI to observe it?
youtube · AI Governance · 2023-05-22T20:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyeAAeWN2XU06nJ-ip4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwOeFDrs98XW-WMb4p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgycyKEmoD_NubVmP8p4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
  {"id":"ytc_Ugyb9KKjlO-HKvhexZN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzQobz4yapxWFOKb3h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy3XGV1uO2r8FM5vfl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzwbXEqCieBAJNaDjZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyD8c5y7HHDc73EJQ94AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHoMtvQSbyEBRPKfp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwe7u4Gik5Vk9pm0s94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
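The lookup-by-comment-ID feature above amounts to parsing the raw JSON array the model returns and indexing each coding record by its `id` field. A minimal sketch in Python — the two records here are made-up illustrations (ids and dimension values are not taken from the run above), and `index_by_comment_id` is a hypothetical helper name:

```python
import json

# Illustrative raw model output: one coding record per comment, with the
# same four dimensions shown in the table above (responsibility, reasoning,
# policy, emotion). Ids and values are fabricated examples.
raw_response = """
[
  {"id": "ytc_example_1", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "rdc_example_2", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response and index each coding record by its comment id."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

# Look up one coded comment by its id.
lookup = index_by_comment_id(raw_response)
coding = lookup["rdc_example_2"]
print(coding["responsibility"], coding["emotion"])  # → company outrage
```

In practice the parser would also validate that every record carries all four dimensions before indexing, so a malformed model response fails loudly rather than producing partial codings.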