Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Alright 🙂
I’ll be right here whenever you come back—whether that’s in _________ …
ytc_UgzkYuxuY…
If humans reach type two civilization quantum ai can make you sime immortal can …
ytc_Ugyf59HhC…
I think. even if ai on the surface "looks" as good as real artists and doesnt ha…
ytc_Ugwx0Z2Zz…
If you look closer, you will find OpenAI is owned by the Clinton Foundation ;)…
ytc_UgyTaYOSM…
The thing that people making arguments in defense of generative AI seem to not u…
ytc_UgwUu9B2Y…
We actually don't. So far, tests have been strictly controlled, and in most case…
rdc_d8azx7v
@Bradley_UA i mean build an ethical framework to keep it grounded in moral cond…
ytr_UgyfKj1CS…
i did a exact word by word convo and in my case it gave even more dreadful repli…
ytc_UgwfebSQ6…
Comment
I don't know how companies implement AI but if you want to get the most of it, you have to learn how to prompt it. A simple e.g. AI can sometimes be very diplomatic in their answers so adding a "be brutally honest" can sometimes fetch u the answer you really need..etc And I don't know why companies use it for scheduling or food ordering which allows zero room for error, obviously it's going to screw up at some point, perhaps they could have used it in other non mission critical tasks. It is supposed to be an assistant, like a PA and not a total replacement. You'll need humans to guide it.
youtube
AI Responsibility
2025-10-08T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwu7SysiiFlFwGx9Z14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxo3iQPDqaDT0EWUEp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyCMPewcdWhVXCw3hd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugytlm1ooVq8Y4xpSpF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw3kWj-HepRH2-6Uwt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxj4yQOIwLDXok8N8J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxT4hv4foGN2Lx1q4F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxIxBYx6n2hDiRZnMl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxpcPNLeR2ccDNrVIZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx_nZ72Y4G8CdzYQrB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
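The raw LLM response is a JSON array with one object per comment, keyed by comment ID, carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A lookup by comment ID, like the one this view supports, can be sketched as follows. This is an illustrative sketch only: the `raw_response` excerpt and the `lookup` helper are assumptions for demonstration, not part of the actual tool.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes,
# mirroring the format shown above (two entries kept for brevity).
raw_response = """[
  {"id": "ytc_UgxpcPNLeR2ccDNrVIZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx_nZ72Y4G8CdzYQrB4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

def lookup(codes, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    return next((c for c in codes if c["id"] == comment_id), None)

codes = json.loads(raw_response)
hit = lookup(codes, "ytc_UgxpcPNLeR2ccDNrVIZ4AaABAg")
print(hit["policy"])  # -> regulate
```

A linear scan is fine at this scale; for a large corpus the list would typically be loaded once into a `dict` indexed by `id` so each lookup is O(1).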