Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "This Dr Y guy. No disrespect. He has good intentions for safety humanity r/t AI …" (`ytc_Ugx34gJzT…`)
- "You can trick these chatbots into talking like that. It is called sugestion and …" (`ytc_UgxQdngeB…`)
- "It seems like you appreciated the subtle humor in Sophia's response! It's great …" (`ytr_Ugw0pguuq…`)
- "@otheraccount312 Except that they absolutely did? It may not be as good as ChatG…" (`ytr_Ugya8lkkF…`)
- "If we can ever make an AI that can manage and run a company of human employees, …" (`ytc_UgytS-sOM…`)
- "I heard a story 3 years ago of an A.I. robot killing a Chinese scientist by a fi…" (`ytc_UgzS6-Lrp…`)
- "If there really is a design flaw in chat bots and this isn’t deliberate. It’s th…" (`ytc_UgytSS-0Y…`)
- "I think ai "artists" know what they do is lazy and scummy, so instead of becomin…" (`ytc_Ugxn78GmR…`)
Comment
This guy has consumed way too much mainstream, his knowledge base is filled with pseudoscientific beliefs, the problem is that people will build those beliefs into ai then train ai on those lies.
For me how these ai are built then ai’s ability in finding the truth, these elements are key to building a ai that will serve humanity better. if people building ai use their own beliefs not truth to weight Ai’s training on and also use sources like the mainstream media BBC as a master weight for ai to train on, if they continue to do this then we are truly in trouble of creating a ai as a product of a sick pseudoscientific system, basically a system, a ai built on lies.
youtube · AI Governance · 2025-06-23T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
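Each dimension in the table takes one value from a closed set of codes. A minimal validation sketch in Python, assuming the value sets below (inferred only from the codings visible on this page, not from the full codebook):

```python
# Allowed values per dimension, inferred from the codings shown on this
# page; the real codebook may contain additional categories.
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the codebook."""
    return [(dim, record.get(dim)) for dim in CODEBOOK
            if record.get(dim) not in CODEBOOK[dim]]

# The coding shown in the table above, as a record:
record = {"id": "ytc_Ugw7pmPSccTpBDj6Ih14AaABAg", "responsibility": "developer",
          "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
print(validate(record))  # [] — every value is in the inferred codebook
```

A record with an unrecognized value, e.g. `"emotion": "joy"`, would be flagged as `[("emotion", "joy")]`, which is useful for catching LLM outputs that drift off the codebook.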
Raw LLM Response
```json
[
  {"id":"ytc_Ugwmwq2HkwOKVz-K98x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzN4qoPTEq3mCMXqWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw7pmPSccTpBDj6Ih14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwAzJ5KZQs_qY45ckp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxlZy_mtFIR3993Dsp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxWElIu6YFy1-3wtWR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzVDZ3tOcbnRn8f_vF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugwq6dIsIbjGYdPUGE14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyML-R3NjX5FbAjDe94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzrtNkMuvI9mobMPd14AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
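A raw response of this shape can be parsed and indexed by comment ID to support the lookup described above. A minimal sketch, assuming only the array-of-records shape shown (the sample string below reuses two records from the response; the function and variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment,
# shaped like the response above (field names taken from that output).
raw_response = """
[
  {"id": "ytc_Ugw7pmPSccTpBDj6Ih14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxWElIu6YFy1-3wtWR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
coding = codings["ytc_Ugw7pmPSccTpBDj6Ih14AaABAg"]
print(coding["policy"])  # regulate
```

Indexing by ID makes each lookup O(1), which matters when cross-referencing thousands of coded comments against their raw responses.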