Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect):

- "Very good for brainstorming, very bad for final art. I believe AI art designer w…" · ytc_Ugwh7HAvE…
- "facts don't matter anymore. i don't fuck with openai but they're cutting sora f…" · rdc_ocpnug9
- "The title says hot robot 😂😂😂😂, are you serious whoever wrote that based that off…" · ytc_UgwWq3abz…
- "Quantum computing and ai has already put many agencies on notice as no encryptio…" · ytc_Ugwv-hwsN…
- "We must not confuse, ChatGPT being conscious, with ChatGPT being very good at co…" · ytc_UgyJUJZ-x…
- "And being blocked by AI screening systems. There was a lawsuit against Workday f…" · ytr_UgyQgIZyu…
- "So This video I hate TO BREAK IT TO YOU BUT YOU LITERALLY SAID IN YOUR DO ROBOT …" · ytc_Ugw1Nks6j…
- "I think the people that stand to gain the most want less people around that they…" · ytc_UgwEdjpPb…
Comment

> If your architect built a neural network that produces lies then there is a flaw in the design, they must to go back adjust the weights and feedback to ensure the system doesn't produce a lie. That is, assuming you don't want your system to lie. Maybe you do so you can scare people. But I don't want a system that lies. It's a Digital computer running an algorithm that analyses and processes data for a specific purpose. It's not a monster you can't control.

youtube · AI Responsibility · 2025-12-19T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxNqbyIl7M4bQEgaIR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz50j53w51HNfgjGDJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxkAENTzKERlhCB0854AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy_YUljBu2eXWNdgM54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz3bcKLUZ_-1hFZ5Yh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyL4o5yAXpCoN0HjMN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzeXwT00RfpAoslDod4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgxiSqNjho88B-HHWAp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxGY8thiQWtQyo7cGp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyT9P4d5S5HBM85GAx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
```
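The "Look up by comment ID" operation above amounts to parsing the batch response and indexing its records by the `id` field. A minimal sketch, assuming the raw response is always a JSON array of records like the one shown (the function name and the abbreviated two-record sample are illustrative, not the project's actual code):

```python
import json

# Abbreviated two-record sample in the same shape as the raw response above.
raw_response = """
[
  {"id": "ytc_UgxNqbyIl7M4bQEgaIR4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzeXwT00RfpAoslDod4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batch coding response and index the records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
rec = codings["ytc_UgzeXwT00RfpAoslDod4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → developer deontological liability approval
```

Indexing once into a dict makes each subsequent ID lookup O(1), which matters if the dashboard resolves many IDs against large batch responses.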