Raw LLM Responses
Inspect the exact model output for any coded comment; a comment can be looked up by its ID.
Random samples
- `ytc_UgyAJwxe6…`: "We used ai to predict who's doing crime. Looks like it's black people, this is r…"
- `ytc_UgzsbOyMm…`: "29:11 - Just NOW asked ChatGPT : "What is your favourite band?" GPT4: I'm jus…"
- `ytr_UgwOIjISM…`: "Well, right now it's possible to hack your car. This have happened due to noone …"
- `ytc_Ugx9XQdCa…`: "Colonel Sanders.. Let's be real.. No one ever liked you. No one ever will. You h…"
- `rdc_degeglq`: "We are CURRENTLY in the 6th mass extinction. Scientists are still looking to nam…"
- `ytr_UgwtBnliN…`: "You bring up some good points, but the real reason people are so upset is that e…"
- `ytc_UgzAnhnd5…`: "The danger from ai in this case is for stupid people like that guy. But when doc…"
- `ytr_UgylLOdKg…`: "also ai therapy is best for day to day stuff theres no services that can help ex…"
Comment
Why are they all lying about AI? There is no such thing. This 'AI' is just coded by humans with human input parameters defined by the inputs given to it by humans. This 'AI' does not have multiple sense with which it can interpret and learn. For example, if you input the concept of 'up/down' into AI, because it doesn't have eyes, or arms and legs, it won't learn about 'climbing' something on it's own. It would need to be programmed with a pre-existing idea of climbing. The only way this 'AI' could be dangerous is if it is programmed with the option to be dangerous.
youtube
AI Governance
2023-04-18T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
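A coding result like the one above can be checked programmatically before it is stored. The following is a minimal sketch; the allowed value sets are assumptions inferred from the sample responses shown on this page, not a definitive codebook.

```python
# Assumed value sets for each coding dimension, inferred from the
# sample LLM responses on this page (not an authoritative codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

example = {"responsibility": "developer", "reasoning": "virtue",
           "policy": "none", "emotion": "indifference"}
print(validate_coding(example))  # []
```

Rejecting out-of-vocabulary values at ingest time keeps a single malformed model response from polluting downstream tallies.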
Raw LLM Response

```json
[
{"id":"ytc_Ugx3xYBgRr8ywuAkSLx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi-D2TzROLjk8xPvl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxkn_1UZO6KXJp7KDx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4lHgUKQ_cIDOo75l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6F1P-eL2xlVib4GJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3Tgj-ToRF-khrZFh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxjQt4eDnLkwoJdofJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwAw1toVOYewM3miO94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxTSCfY8erp4rUrrMl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxiBLHkW87ATuIqwg54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
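The raw response is a JSON array of per-comment codings keyed by `id`, so looking up a coded comment by ID reduces to parsing the array and indexing it. A minimal sketch, using two rows copied from the response above:

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = '''[
{"id":"ytc_Ugx3xYBgRr8ywuAkSLx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi-D2TzROLjk8xPvl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

# Index the batch by comment ID for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugx3xYBgRr8ywuAkSLx4AaABAg"]
print(coding["emotion"])  # indifference
```

This is the whole mechanism behind a "look up by comment ID" view: one parse, one dict comprehension, then O(1) lookups.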