Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect:

- `ytc_UgwvM82dm…` — "Has anyone considered that AI developers have to develop a set of World Morality…"
- `ytc_Ugwn2IBMe…` — "One aspect of AI safety nobody talks about: how about by trying to make AI safe,…"
- `ytr_UgyjVZzW3…` — "@NecessaryTet Your original comment talks about the need of AI's with "same gen…"
- `ytc_UgzPG7l3p…` — "Have artists thought about pursuing this legally? Especially when there is proof…"
- `ytc_Ugwd3dO88…` — "Automated customer service are nightmares for customers and just a cheap alterna…"
- `ytc_UgyXF8A8_…` — "One of the premises of Christianity is that God gave man free will instead of ju…"
- `rdc_njh9mzi` — "Maybe if you were paying directly through OpenAI/CGPT, but I doubt that App Stor…"
- `ytc_Ugy-egU9B…` — "Guy is so uninformed the ai robots are way smarter, just an old man doesn,t unde…"
Comment

> There should be security measures inplace as A.I. tools such as ChatGPT move forward. if they notice that the person is asking questions that may be used to harm people, it should have a "sorry I can't help you with that response" or it should respond with "why are you asking me this information? I can tell that you are planning to use this information to harm others, just know that this conversation is being saved and forwarded to local law enforcement (or other security alerting protocols/protections)
> A. I. still has a long way to go.

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Responsibility |
| Date | 2026-04-23T13:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
{"id":"ytc_Ugyl_WuRaB7p3KKC3ux4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzAQtGR0J_0b8ywO2Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyBjINsC4IIu6cbuTV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzsZf0LxxtvyEVbNfB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0IEfG5GNhA8lkhkx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzwWTNW8qTofYTeo2N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxHPwe7K8YHHXC_OiF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwpHJtoIe9mwXKe9rB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwQXqpLfL-VStLnNDJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwTfBfKfphpni0b-ex4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
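The raw response is a JSON array of per-comment records, one per coded comment, keyed by `id`. A minimal sketch of the lookup-by-ID step, assuming only the record shape shown above (the `lookup_coding` helper and the inline sample data are hypothetical, not part of the tool):

```python
import json

def lookup_coding(raw_response, comment_id):
    """Parse a raw LLM response (JSON array of coded records) and
    return the coded dimensions for one comment ID, or None."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            # Return everything except the ID itself: the four dimensions.
            return {k: v for k, v in record.items() if k != "id"}
    return None

# Hypothetical sample mirroring one record from the response above.
raw = """[
  {"id": "ytc_UgyBjINsC4IIu6cbuTV4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]"""

coding = lookup_coding(raw, "ytc_UgyBjINsC4IIu6cbuTV4AaABAg")
# coding == {"responsibility": "developer", "reasoning": "deontological",
#            "policy": "regulate", "emotion": "fear"}
```

An unknown ID simply returns `None`, which is what lets the inspector distinguish "not yet coded" from a coded record.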