Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.

Random samples:
- `ytc_Ugz2yA2Bb…`: "Its gonna be a long time.." / "it is not. Now a days we see things being automated …"
- `ytc_UgwC2Pp7o…`: "The AI threat is the answer to a Higher, Faster and Wider, Digital Addicted Hust…"
- `ytc_UgyspVHKI…`: "AI can be a pain if you trust it completely for your coding project. Why? Becaus…"
- `rdc_j42n9aa`: "For an unconventional one, live theater. If a robot (and it'd have to be robots,…"
- `rdc_mah7d6m`: "if you have the technology to build a replacement for a human, you have in turn …"
- `ytr_UgzFzHhR1…`: "Google the meaning of art. Art means HUMAN expression. What isn’t human? Robots/…"
- `ytc_Ugxwnjbbi…`: "The fact that I got an ad about 'independent AI agents' while watching this vide…"
- `ytc_UghTBvSlr…`: "it wouldn't make sense to make a robot smart enough for sentience if they only d…"
Comment

> Another problem with OpenAI is that they are moving money from safety of AI to it's development, and because of that there will be much more problems with AI. Some time ago I've read a text about ex-worker. He said that they already have problems with AI and people are not prepared for what is happening behind closed doors.

Source: youtube · Posted: 2025-10-29T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz0IER-zY2QqNr_EJR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkMGWPE_CrhFMII414AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwtVzVGXqCdHpobSeN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyLvxOaI58qjzkGUXx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxKaZgzXKXVGc5i3Qt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwwebcd4YdqV9oVfyh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgziLWTe9yUIImOTdat4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyYTuC-JcW7xYrr35p4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxneoXUWYXs_h9TVX94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzW1tCfaubvOY4oYrx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
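The raw LLM response above is a JSON array of coding records, one per comment, each carrying the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response can be parsed and indexed for the by-ID lookup described at the top of this page (variable names are illustrative, not the tool's actual implementation):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# A single record is shown here; the real response contains one entry
# per comment in the coded batch.
raw_response = '''
[
  {"id": "ytc_Ugz0IER-zY2QqNr_EJR4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "outrage"}
]
'''

# Index records by comment ID so any coded comment can be looked up directly.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codes_by_id["ytc_Ugz0IER-zY2QqNr_EJR4AaABAg"]
print(record["policy"])   # → regulate
print(record["emotion"])  # → outrage
```

A real pipeline would also want to validate each record against the allowed dimension values (e.g. rejecting an `emotion` outside the coding scheme) before indexing, since LLM output is not guaranteed to be well-formed.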