Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Our greed and need for power make AI very dangerous. Extremely scary times. How …" (ytc_UgyFllom3…)
- "Bro don't make him angry or he'll hack your computer and fill it with Meta AI ph…" (ytc_Ugz88wHPS…)
- "Before saying we couldn’t have a UBI because people wouldn’t know what to do wit…" (ytc_UgzSIWZvr…)
- "Did this guy say “me too” to this AI saying he wanted to create singularity tomo…" (ytc_UgwcUqPIV…)
- "Eventually universal basic income will become necessary. I think thats the only …" (ytr_UgzoW1fO4…)
- "@my1vice of course it will, most people have simple, repetitive, process-focused…" (ytr_UgwGI1HKZ…)
- "not to mention, chatgpt is basically an echo chamber. it’s designed to support y…" (ytc_UgwMxxgV3…)
- "What these people don't take into consideration is the divine. Consciousness is …" (ytc_Ugwd2cfv-…)
Comment
Decision Engines need to be incorporated into ALL artificial intelligence / machine learning. It is an audit trail to uncovering why an inference was made over another with root sources for logic. Its the big fail of GPT / Bing right now. It can create fake answers - ghosting. With a decision engine it would have a chain of logic as to what led up to the decision. Even GPT knows it needs one. Just ask.
youtube · AI Governance · 2023-08-06T22:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyCpTxsAB4pHDqvYUh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyMZwNMe5E5lybxC9V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwQMSUp6D3OpMO9yfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzAeGg0QeFT4Ujc0Ep4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxVYRVA_34r6oiNHyB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzl9k3eE2xswnaYePJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwnPmRYF5vSRXFnBMZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwDUV_Xyv9YKtOBFQ54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwtVlRmfkMWfaTMTW14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzXThTvEfSBzbpAvnN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
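The lookup-by-ID step described above can be sketched in a few lines: parse the raw response (a JSON array with one object per coded comment, using the field names shown) and search it for a given comment ID. This is a minimal illustration, not the tool's actual implementation; `lookup_by_id` and the two-item sample payload are assumptions for the example.

```python
import json

# A raw LLM response: a JSON array of coded comments (field names taken
# from the dump above; the two records here are just sample data).
raw_response = """
[
  {"id": "ytc_UgwtVlRmfkMWfaTMTW14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwnPmRYF5vSRXFnBMZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
"""

def lookup_by_id(raw, comment_id):
    """Return the coding dict for one comment ID, or None if absent."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_by_id(raw_response, "ytc_UgwtVlRmfkMWfaTMTW14AaABAg")
print(coding["policy"])  # → regulate
```

The same parsed list can also drive the coding-result table: each key of the returned dict maps onto one row of the Dimension/Value table shown above.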