Raw LLM Responses
Inspect the exact model output for any coded comment, or look a response up by comment ID.
Random samples
- "They are delving into perfection. They are not human. They have no feeling or …" (ytc_Ugz19Iiat…)
- "AI needs to be deprivatized. Can’t have private ownership of the technology that…" (ytc_Ugz2uPicu…)
- "Jobs do not exist because the job itself is necessary. Jobs exist to serve human…" (ytc_Ugw1UGx8l…)
- "Ai can't expose anything because it can't create anything it uses what's already…" (ytr_UgxRGXGG2…)
- "You cannot regulate AI. AI regulate human being one way or another. And it only …" (ytc_Ugw9YAgV-…)
- "hey ChatGPT i have a question..... is there people dumber than believers. The a…" (ytc_Ugyw2uW_p…)
- "Essentially. This report does not address any counterarguments. The reason for t…" (ytr_UgxGZ0ooX…)
- "My art styles are not cohesive at all so I don’t think its a dead giveaway. But …" (ytc_UgyhtbZM8…)
Comment
Often, I hear individuals with left-leaning views express concerns about unconscious bias against various communities. However, I’m reminded of Senator Blackburn’s recent experience where she was unable to get ChatGPT to write a favorable poem about Trump, while it readily produced one for Biden. This incident, along with several others, raises questions about the intentions of activists who advocate for more AI training in the name of trust and safety, but then set rules that may seem arbitrary. It’s seriously worth contemplating whether it’s worse for a society to suffer from accidental unconscious biases or intentional, consciously set biases that don’t take universally shared values into account. I think we can all agree that it’s good to avoid recipes containing chlorine gas, but when it comes to politicians, religious topics, moral arguments, etc., I don’t trust corporate trust and safety teams to act in the universal interests of their users.
Source: youtube | AI Responsibility | 2024-02-20T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyhX1bLYVaXaWys16B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwJ6XXnt3BknYD75194AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyd0VOOFhIgKWV_qDN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyzuHjd9BKtUxlQSLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzW5jSwYFEbumylX3V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx2gp957etl9p3Ck1N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7HpySi8YMZLCCjNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxMS6s7X58GmHFoiXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyx-4wX03RPyG2pmFN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxzYJCJ70faZQaS5nF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
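The lookup-by-ID flow described above can be sketched as follows: parse the model's JSON array and index each record by its comment ID, so any coded comment can be retrieved directly. This is a minimal illustration assuming only the response shape shown above (the dashboard's actual storage and retrieval layer is not shown); the sample records are copied from the raw response.

```python
import json

# A batch coding response in the same shape as the raw array above
# (two records copied from it, abbreviated for illustration).
raw_response = """
[
  {"id": "ytc_UgyhX1bLYVaXaWys16B4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyd0VOOFhIgKWV_qDN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgyhX1bLYVaXaWys16B4AaABAg"]["emotion"])  # outrage
```

Indexing by ID turns the per-batch array into constant-time lookups, which is all the "look up by comment ID" view needs.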